Posted by
Xin ZHOU on
Sep 30, 2008; 4:50pm
URL: http://imagej.273.s1.nabble.com/Java-SIFT-plugin-for-ImageJ-Question-on-parametage-tp3694919p3694920.html
Hello, Stephan,
Thanks for all your answers.
I have already read the article, but I feel I didn't fully understand it.
That is why I still have the following questions.
Q1:
I'm currently working on a project which uses a salient-point detector
based on the wavelet transform.
My question comes from the fact that I now want to use the SIFT detector,
and I wonder what the difference between these two will be.
I'm only talking about the detection, not the alignment, so no RANSAC
filtering to reduce the points.
Let me explain:
The wavelet-based salient-point detector performs a multi-resolution
wavelet transform.
SIFT, as I understand it, uses the DoG at different resolutions to
detect salient points.
And the DoG is very similar to the Mexican-hat wavelet,
*so can we say that SIFT is approximately the same thing as using a
Mexican-hat wavelet to detect the salient points?*
I mean, *only for the detection*, *is there any difference?* Or a big
difference? (For example, will the first 500 salient points be in the
same positions?)
That's why I tried your SIFT detector, and in practice there does seem
to be a difference. I think it is a parameter-setting problem, because
the wavelet operates at a scale T, while the DoG operates at a
resolution which is the difference of two scales (say T0 - T1).
If I set T0 - T1 = T, should the results then be similar?
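To make the Q1 comparison concrete, here is a minimal numerical sketch (my own illustration, not taken from the plugin): it compares a 1D DoG kernel with a Mexican-hat wavelet at a nearby scale. The sigma value and the scale ratio k are arbitrary choices for illustration:

```python
import numpy as np

# Sketch: how close is a difference-of-Gaussians (DoG) kernel to the
# Mexican-hat wavelet? Grid, sigma, and k are illustrative choices.
x = np.linspace(-10, 10, 2001)

def gaussian(x, s):
    # Unit-area Gaussian of standard deviation s.
    return np.exp(-x**2 / (2 * s**2)) / (s * np.sqrt(2 * np.pi))

def mexican_hat(x, s):
    # Mexican-hat (Ricker) wavelet: negative second derivative of a
    # Gaussian, up to normalization.
    return (1 - x**2 / s**2) * np.exp(-x**2 / (2 * s**2))

s = 2.0
k = 1.2  # ratio between the two adjacent DoG scales
dog = gaussian(x, s) - gaussian(x, k * s)

def ncc(a, b):
    # Normalized cross-correlation between two kernel shapes.
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

print(ncc(dog, mexican_hat(x, s)))  # close to 1 -> very similar shapes
```

The correlation comes out close to 1, which matches the textbook observation that the DoG approximates the (scale-normalized) Laplacian of Gaussian; the residual difference depends on the ratio k between the two Gaussian scales, so the detected points can still shift slightly between the two detectors.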
Q2:
How does SIFT sort the features? In the output, it seems to sort the
features by scale.
Does this mean larger features come first and then smaller features?
(That is, salient points in the low-frequency domain first, then the
high-frequency domain.)
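For Q2, a tiny sketch of what "sorted by scale, largest first" would look like; the (x, y, scale) tuples here are invented values, not actual plugin output:

```python
# Hypothetical illustration of sorting SIFT features by scale.
# The (x, y, scale) tuples are made up for this example.
features = [(12, 40, 1.6), (80, 22, 6.4), (51, 9, 3.2)]

# Largest scale first: coarse (low-frequency) blobs before fine ones.
by_scale_desc = sorted(features, key=lambda f: f[2], reverse=True)
print(by_scale_desc)  # [(80, 22, 6.4), (51, 9, 3.2), (12, 40, 1.6)]
```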
Any help will be appreciated, especially on the difference between
these two detectors.
cheers, Xin
Stephan Saalfeld wrote:
> Hi Xin,
>
> SIFT interest points are blobs of arbitrary size. They are detected by
> searching the whole scale space of the image for such points. You can
> think of the scale space as shrinking the image continuously. All
> blob-like structures at some point have the size of exactly one pixel,
> and that is where they are detected. Obviously it doesn't make sense to
> shrink the image smaller than 1 pixel, so the natural limits of the
> scale space of an image are the original pixel resolution as the
> "smallest scale" and the whole image in 1 pixel as the "largest scale".
>
> That is, allowing only a minimal image size larger than 1 pixel means
> that blobs still larger than 1 pixel at that resolution will not be
> detected. Limiting the maximal image size means that blobs already
> smaller than one pixel at that resolution will not be detected. The
> explanation at Albert's
> page is correct, but here we have a different formulation :)
>
> In detail---the "shrinkage" is "simulated" by Gaussian smoothing, and
> actual resizing is only performed when the Gaussian kernel size
> doubles. This means that only image sizes equal to the original image
> size divided by a power of two will occur. The "shrinking" between
> these steps happens within the scale octaves, where each octave has the
> mentioned number of steps. If you are interested in more detail, you
> can have a look at the original paper by the inventor of SIFT, David
> Lowe, which is very nice to read:
>
>
> http://www.cs.ubc.ca/~lowe/papers/ijcv04.pdf
>
> or have a look at this figure where two exemplary scale octaves are
> shown:
>
>
> http://fly.mpi-cbg.de/~saalfeld/scale_octaves.png
>
> I beg your pardon for not having a comprehensive documentation written
> yet. This will happen at some point---I promise!
>
> If you are interested, you could have a look at fiji:
>
>
> http://pacific.mpi-cbg.de/wiki/index.php/Main_Page
>
> It contains a slightly more mature implementation and some plugins for
> further use of the detection results in ImageJ.
>
> Best regards,
> Stephan
>
>