A theoretical question about the SIFT implementation
Posted by Xin ZHOU on May 21, 2010; 11:57am
URL: http://imagej.273.s1.nabble.com/A-theoritical-question-about-SIFT-implementation-tp3688193.html
Hello, everyone
I'm preparing a theoretical explanation of the SIFT implementation, and I have two theoretical questions about it.
I wonder if I could get some help here:
1st :
- Generally, to find edges, corners, or salient points, we mathematically look for either
gradient (1st-derivative) extrema or Laplacian (2nd-derivative) zero-crossings.
The question is: the DoG approximates the LoG closely enough that we can almost treat it as a Laplacian filter.
So why do we apply non-maximum suppression (NMS)?
Shouldn't we be looking for zero-crossings rather than maxima?
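
To make the first question concrete, here is a rough Python/NumPy sketch of how I picture the detection stage. It is illustrative only; the function names and the use of scipy's gaussian_filter are my own choices, not the plugin's actual code. The DoG maps come from subtracting adjacent Gaussian maps, and the "NMS" I mean is testing each pixel against its 3x3x3 neighbourhood across space and adjacent scales.

import numpy as np
from scipy.ndimage import gaussian_filter

def dog_stack(image, sigmas):
    """n Gaussian maps -> n-1 DoG maps (my understanding of the detection stage)."""
    gaussians = [gaussian_filter(image.astype(float), s) for s in sigmas]
    dogs = [gaussians[i + 1] - gaussians[i] for i in range(len(gaussians) - 1)]
    return gaussians, dogs

def is_scale_space_extremum(below, current, above, y, x):
    """The 'NMS' I refer to: the centre value is compared with its 3x3x3
    neighbourhood over space and the two adjacent DoG scales."""
    cube = np.stack([d[y - 1:y + 2, x - 1:x + 2] for d in (below, current, above)])
    centre = current[y, x]
    return centre == cube.max() or centre == cube.min()

# Tiny usage example on a random test image (purely illustrative)
img = np.random.rand(64, 64)
gaussians, dogs = dog_stack(img, sigmas=[1.0, 1.4, 2.0, 2.8])
extrema = [(y, x) for y in range(1, 63) for x in range(1, 63)
           if is_scale_space_extremum(dogs[0], dogs[1], dogs[2], y, x)]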
2nd :
- When we extract SIFT descriptors, we do it from a gradient map...
This is strange, because what we have is n Gaussian maps and n-1 DoG maps; the latter are Laplacian-like maps, not gradient maps...
So, if I understand correctly, the DoG is used for salient-point detection,
but the descriptor is extracted from the Gaussian maps with some gradient filter (Sobel, for example). Am I right?
If so, then why? This increases the computational cost.
Why can't we just use the values in the DoG maps (2nd-order derivatives)?
Is the 2nd derivative mathematically less stable / robust / representative?
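
Similarly, to make the second question concrete, this is how I picture the descriptor stage. Again it is just an illustrative sketch under my own assumptions, not the plugin's actual code: the first-order gradient is taken on the Gaussian map itself, here with simple finite differences standing in for whatever gradient filter is really used, and the DoG values are not used at this stage.

import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_mag_ori(gaussian_map):
    """Gradient magnitude and orientation taken on the Gaussian-smoothed
    image (not on the DoG map), as I understand the descriptor stage."""
    dy, dx = np.gradient(gaussian_map)   # first-order finite differences
    magnitude = np.hypot(dx, dy)
    orientation = np.arctan2(dy, dx)     # radians, later binned into orientation histograms
    return magnitude, orientation

# Usage: the descriptor would reuse the Gaussian map at the keypoint's scale
img = np.random.rand(64, 64)
mag, ori = gradient_mag_ori(gaussian_filter(img, 1.6))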
I hope I have formulated the questions clearly; in the end it all comes down to 1st- versus 2nd-order derivatives.
I just need a reasonable explanation.
cheers, Xin
--
Mr. Xin Zhou,
Doctoral Assistant
HUG - SIM
Rue Gabrielle-Perret-Gentil 4
CH-1211 GENEVE 14
tel +41 22 37 26279
mobile +41 765 307 960