Blur Detection

Benjamin Grant
Hi All,
Does anyone have any suggestions on detecting the degree of blur of an
image? I am recording videos and want to keep only the frames that are
in focus (non-blurred) up to a user-defined threshold. So far I've been
using a method that employs the Haar wavelet transform
(http://www.cs.cmu.edu/~htong/pdf/ICME04_tong.pdf) but have had no
success. As far as I can tell, the magnitude of the (n+1)th
decomposition level of the Haar transform is much greater than the
magnitude of the nth level, which makes the rules used in that paper
fail. It may just be a coding mistake on my part.
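
For reference, the quantity I am comparing (the mean magnitude of the
Haar detail coefficients at each decomposition level) amounts to roughly
the following sketch. It uses a simple averaging Haar step on a plain 2D
float array with even dimensions, and only computes the per-level
magnitudes, not the paper's full edge-classification scheme; all names
are mine and the paper's exact normalization may differ:

    // Sketch: mean absolute Haar detail magnitude per decomposition
    // level. Assumes the image is at least 2^levels pixels in each
    // dimension and has even width and height.
    public class HaarDetail {

        // One 2D Haar step: fills detailMag[0] with the mean absolute
        // detail coefficient and returns the half-size approximation.
        static float[][] haarStep(float[][] in, double[] detailMag) {
            int h = in.length / 2, w = in[0].length / 2;
            float[][] ll = new float[h][w];
            double sum = 0;
            for (int y = 0; y < h; y++) {
                for (int x = 0; x < w; x++) {
                    float a = in[2 * y][2 * x];
                    float b = in[2 * y][2 * x + 1];
                    float c = in[2 * y + 1][2 * x];
                    float d = in[2 * y + 1][2 * x + 1];
                    ll[y][x] = (a + b + c + d) / 4f;  // approximation (LL)
                    float dh = (a - b + c - d) / 4f;  // horizontal detail
                    float dv = (a + b - c - d) / 4f;  // vertical detail
                    float dd = (a - b - c + d) / 4f;  // diagonal detail
                    sum += Math.abs(dh) + Math.abs(dv) + Math.abs(dd);
                }
            }
            detailMag[0] = sum / (3.0 * h * w);
            return ll;
        }

        // Mean absolute detail magnitude for decomposition levels 1..n.
        static double[] detailMagnitudes(float[][] img, int levels) {
            double[] mags = new double[levels];
            float[][] cur = img;
            for (int i = 0; i < levels; i++) {
                double[] m = new double[1];
                cur = haarStep(cur, m);
                mags[i] = m[0];
            }
            return mags;
        }
    }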

Either way, I'd appreciate any resources you'd recommend for selecting
quality frames from a video (the video is from a microendoscope and
shows fluorescent cellular data, similar to grayscale confocal images).
Thanks in advance
Ben

Re: Blur Detection

Olivier Burri
Hello,

Wavelet-based analyses are extremely useful tools; you could have a look
at the implementation of the Extended Depth of Field from the BioImaging
Group at EPFL:
http://bigwww.epfl.ch/demo/edf/
(You should get the developer distribution.)
In this case they use it to flatten 3D data by picking the in-focus
areas of each slice. Your implementation should be a lot simpler,
needing only to give a global score to the whole image.

Another classically used method is a maximum-contrast search algorithm.
Have a look at the attached image, borrowed from:

Malpica N, del Pozo F: Applying watershed algorithms to the segmentation
of clustered nuclei. Cytometry 28:289–297, 1997.

Along with some other references from that paper:

Boddeke FR, van Vliet LJ, Netten H, Young IT: Autofocusing in
microscopy based on the OTF and sampling. Bioimaging 2:193–203,
1994.

Vollath D: The influence of the scene parameters and of noise on the
behaviour of automatic focusing algorithms. J Microsc 151:133–146,
1988.
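
As a rough illustration, the autocorrelation focus measure from the
Vollath paper above (often called F4) can be sketched like this in Java;
a plain 2D float array stands in for the image, and the sums are
slightly truncated at the border:

    // Sketch of Vollath's autocorrelation focus measure (F4):
    //   F = sum I(x,y)*I(x+1,y) - sum I(x,y)*I(x+2,y)
    // Larger values indicate higher contrast, i.e. better focus.
    static double vollathF4(float[][] img) {
        int h = img.length, w = img[0].length;
        double f = 0;
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w - 2; x++) {
                f += img[y][x] * (img[y][x + 1] - img[y][x + 2]);
            }
        }
        return f;
    }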

Best

Oli

Olivier Burri
Engineer, Development in Image Processing
BioImaging and Optics platform (PTBIOP)
Tel: [+4121 69] 39629

Attachment: maxContrast.jpg (38K)

Re: Blur Detection

Benjamin Grant
Oli,
Thanks for the great response.  One issue I'm having with all of these
methods is that for images of similar brightness I get good results.
However, if an image has a large saturated region, then the max-contrast
result is typically favorable, as is the blur coefficient computed using
Haar wavelets (I fixed my code).  I tried calculating the normalized
variance
(http://www.iris.ethz.ch/msrl/publications/files/0021_manuscript.pdf)
as well, which had similar limitations. The paper I just cited does name
maximum contrast as the best method for fluorescent imaging, but I'm
still getting results that favor images with huge bright spots.
Generally my images are dark with scattered cell nuclei, but
occasionally there is high background signal from nonspecific staining,
and these images are always reported to be in great focus and to have
optimal contrast, even though they're not in focus at all.  Is there any
way to compensate for this other than just throwing out images based on
another metric that looks at overall intensity?
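
(For reference, the normalized variance measure from that paper reduces
to roughly the following sketch; the division by the mean is what is
supposed to compensate for brightness differences between frames:)

    // Sketch of the normalized variance focus measure:
    //   F = (1 / (H * W * mean)) * sum (I(x,y) - mean)^2
    // Assumes a nonempty image with a positive mean intensity.
    static double normalizedVariance(float[][] img) {
        int h = img.length, w = img[0].length;
        double mean = 0;
        for (float[] row : img)
            for (float v : row)
                mean += v;
        mean /= (double) h * w;
        double var = 0;
        for (float[] row : img)
            for (float v : row)
                var += (v - mean) * (v - mean);
        return var / ((double) h * w * mean);
    }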

Thanks
Ben

Re: Blur Detection

dscho
Dear Benjamin,

On Tue, 7 Feb 2012, Benjamin Grant wrote:

> Generally my images are dark with scattered cell nuclei, but
> occasionally there is high background signal from nonspecific staining,
> and these images are always reported to be in great focus and to have
> optimal contrast, even though they're not in focus at all.  Is there
> any way to compensate for this other than just throwing out images
> based on another metric that looks at overall intensity?

Please note that there is no scientifically sound method to put back
information that was lost in the process of recording the images.

Sure, if you have background information, such as knowing that the
background is always dark and that your objects of interest have certain
features, you can come up with a method that looks as if it put back the
information you desire; however, it is just mixing expectation with
observation.

And here lies the problem: if your expectations do not meet the
observation, any "de-blurred" result will be fantasy, not reality.

The general case you describe is just such a nightmare scenario: it is
underspecified. Indeed, it is easy to come up with two different original
settings that would result in the same "blurred" image. For example, a
semi-uniformly bright background could be due either to background
fluorescence from structures you are not interested in, or to heavily
defocused structures that you *are* interested in.

So unless you can change the conditions of the experiment to enforce a
dark background, I fear you have to reconsider the whole experiment. You
cannot put information back via image processing when that information
is neither in the images nor in the theoretical context of the
experiment.

Ciao,
Johannes

Re: Blur Detection

Benjamin Grant
Hi Johannes,
I'm sorry for not being clearer. The scenario is the following: I record
10-second videos at 15 to 20 fps, and record about 20 of these videos
during one clinic session.  I just want frames that have uniform
background and are not blurry.  Historically, my lab has sat down and
done this manually. It takes a significant amount of time to go through
every frame and throw out all the bad ones (this is an in vivo
experiment and the recording continues even if the probe is lifted off
the tissue, turned sideways, etc., so most of the frames are not good).
We are totally fine with throwing away 90% of the frames; we're not
looking for a video, we simply want to perform analysis on the good
frames.

My point is that I have no interest in "recovering" or "de-blurring"
images. My objective is to throw out blurred images automatically.  My
problem is that the algorithms I've been employing to detect image blur
do not give uniform results for images of different intensity.  For
instance, an image in perfect focus with a dark background will have a
very low blur coefficient (ranging from 0 to 1, 1 being completely
blurred) of around 0.01.  An image in poor focus with a dark background
will have a high blur coefficient of around 0.6 or higher.  However, an
image with, say, a half-bright background and poor focus also yields a
blur coefficient of around 0.01 to 0.1, even though it is not in focus.
Now I could just go through and throw out all images with high
background intensity, but then I would lose a bit of good data.  I'm
imaging the cervix, where much of the tissue is squamous but some is
columnar. Columnar tissue, under our fluorescence modality, does have
higher overall image intensity, which is fine.  When manually selecting
frames, I can pick out which ones are still in focus; I do no
de-blurring or "recovery" of lost information, and I process these
frames with the same segmentation routines as my other images with no
problems.  My only current dilemma is automating the tedious process of
picking frames that are both in focus and free of motion artifact.

Ben

Re: Blur Detection

dscho
Hi Benjamin,

On Tue, 7 Feb 2012, Benjamin Grant wrote:

> I'm sorry for not being clearer. The scenario is the following: I
> record 10-second videos at 15 to 20 fps, and record about 20 of these
> videos during one clinic session.  I just want frames that have
> uniform background and are not blurry.  Historically, my lab has sat
> down and done this manually. It takes a significant amount of time to
> go through every frame and throw out all the bad ones (this is an in
> vivo experiment and the recording continues even if the probe is
> lifted off the tissue, turned sideways, etc., so most of the frames
> are not good).  We are totally fine with throwing away 90% of the
> frames; we're not looking for a video, we simply want to perform
> analysis on the good frames.

Okay, sorry, I misunderstood.

So here are a couple of ideas (read: food for thought); hopefully they
are helpful:

- it looks as if treating dark and bright images differently would make
  sense (even if the bright images still had to be sorted through
  manually, it would be an improvement, no?)

- One intuitive measure of blurriness I have found is to look at the
  differences between neighboring pixels. If the differences are small,
  my collaborators frequently call the images "blurry".

  This could be tested by looking at the histogram of the image
  convolved with a kernel such as this one:

        1 -1
        0 0

  If the histogram shows a substantial number of non-zeroes, the image
  is likely "crisper" than if the histogram's far-left bins are
  dominant.

  This is only a crude method, but the idea is to test things that might
  work first and then refine them when they show promise. (For example,
  "Find Edges" instead of the convolution might work better.)

- Instead of looking at the overall distribution of neighbor
  differences, it might make sense to just look for the maximal
  difference between neighbors (possibly after cutting off a certain
  percentile to guard against outliers; maybe 0.5% would be a good
  number to start with).

  This follows the intuition that the "crispness" of images comes mostly
  from clear boundaries around objects of interest. (Both this and the
  histogram idea are illustrated in the sketch after this list.)

- Since you are interested in the bright structures, and the background
  always is somewhat "blurred", you might need to restrict the
  calculation of the blurriness factor by thresholding (although the
  automatic thresholding algorithm needs to be chosen with care to allow
  for your different image modalities).
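
To make the second and third ideas concrete, here is a crude sketch
(plain Java, horizontal differences only; the method names and the 0.5%
cut are just starting points, not tested values):

    import java.util.Arrays;

    public class NeighborDiff {

        // Absolute horizontal neighbor differences, flattened into one
        // array.
        static float[] diffs(float[][] img) {
            int h = img.length, w = img[0].length;
            float[] d = new float[h * (w - 1)];
            int i = 0;
            for (int y = 0; y < h; y++)
                for (int x = 0; x < w - 1; x++)
                    d[i++] = Math.abs(img[y][x + 1] - img[y][x]);
            return d;
        }

        // Fraction of differences above a threshold: the "non-zero" part
        // of the difference histogram. Higher values suggest a crisper
        // image.
        static double sharpFraction(float[][] img, float thresh) {
            float[] d = diffs(img);
            int n = 0;
            for (float v : d)
                if (v > thresh) n++;
            return n / (double) d.length;
        }

        // Maximal neighbor difference after cutting off the top `cut`
        // fraction (e.g. 0.005) to guard against outliers.
        static float clippedMaxDiff(float[][] img, double cut) {
            float[] d = diffs(img);
            Arrays.sort(d);
            int idx = (int) Math.floor((1.0 - cut) * (d.length - 1));
            return d[idx];
        }
    }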

Ciao,
Johannes

Re: Blur Detection

Gabriel Landini
On Tuesday 07 Feb 2012 16:52:41 Benjamin Grant wrote:
> My point is that I have no interest in "recovering" or "de-blurring"
> images. My objective is to throw out blurred images automatically.

Hi,
As far as I know there is no reliable way of doing this for an arbitrary
image. You can compare focus in two (or more) images of the same scene
where the focus changes; that is how autofocus algorithms work (all of
them), so any review of autofocus methods will give you a pile of ways
of testing this. However, I fear that you will struggle to get a robust
descriptor of whether a single arbitrary image is in best focus without
prior knowledge of what you will find in it.

If you look at the fraction of high-frequency components, you might end
up accepting as good an image that is poorly focused but has lots of
high-contrast bits in it.

Maybe the images you take are very similar to each other, in which case
you could do a running autofocus over the sequence (taking the previous
frame[s] as reference), but I guess that it will be wrong more often
than right, especially if the contents of the images vary.
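
A crude sketch of such a running comparison, assuming you already
compute some per-frame focus score (the window size and the ratio are
made-up starting values):

    // Keep a frame only if its focus score is not much worse than the
    // best score among the preceding `window` frames. `scores` can come
    // from any focus measure; assumes positive scores; ratio is e.g. 0.8.
    static boolean[] keepFrames(double[] scores, int window, double ratio) {
        boolean[] keep = new boolean[scores.length];
        for (int i = 0; i < scores.length; i++) {
            double best = 0;
            for (int j = Math.max(0, i - window); j < i; j++)
                best = Math.max(best, scores[j]);
            // The first frame has no reference, so keep it for review.
            keep[i] = best == 0 || scores[i] >= ratio * best;
        }
        return keep;
    }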

Cheers
G.