Measure/Unit for Goodness of Segmentation?


Rebecca Keller
Dear Fellow Imagers,

Has anyone come across a reliable method for quantifying the goodness of
segmentation, or at least a discussion addressing this issue? I realize
there are about a billion ways to segment an image, and every segmentation
problem has its own issues, but it seems to me that it would be good to
have a way to quantify or benchmark the various methods objectively. I
recently improvised a sort of z-score of the segmented ROIs versus
background, but I am pretty sure this would not work in all cases. Any
thoughts about this?
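
Roughly, the kind of thing I improvised looks like this (a minimal NumPy
sketch, not code from any package; "img" and "roi_mask" are placeholder
names):

import numpy as np

def roi_zscore(img, roi_mask):
    """How many background standard deviations the mean ROI intensity
    sits above the mean background intensity."""
    fg = img[roi_mask]   # pixels inside the segmented ROIs
    bg = img[~roi_mask]  # everything else counts as background
    return (fg.mean() - bg.mean()) / bg.std()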

All the best,

Jacob Keller

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html
Re: Measure/Unit for Goodness of Segmentation?

jni
Hi Jacob,

This is a large and fertile research topic. =) Your ideal evaluation method strongly depends on what you're segmenting and why. For roundish objects where you want to get the shape mostly right, Luis Pedro Coelho recommends Hausdorff distance or his own normalised sum of distances (NSD) metric, and I tend to agree with his assessment:

https://metarabbit.wordpress.com/2013/09/11/nuclear-segmentation-in-microscope-cell-images/
http://metarabbit.wordpress.com/2013/09/16/why-pixel-counting-is-not-adequate-for-evaluating-segmentation/
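
For concreteness, a symmetric Hausdorff distance between two binary masks
can be computed along these lines (a minimal NumPy/SciPy sketch of my own;
"mask_a" and "mask_b" are hypothetical inputs):

import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import directed_hausdorff

def boundary_coords(mask):
    """Coordinates of the one-pixel-wide inner boundary of a binary mask."""
    mask = mask.astype(bool)
    return np.argwhere(mask & ~binary_erosion(mask))

def hausdorff(mask_a, mask_b):
    """Symmetric Hausdorff distance between two binary masks, in pixels:
    the worst-case distance from one boundary to the other."""
    a, b = boundary_coords(mask_a), boundary_coords(mask_b)
    return max(directed_hausdorff(a, b)[0], directed_hausdorff(b, a)[0])

Using only the boundary pixels keeps the distance computation cheap even
for large masks.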

For neurons and other complex objects, boundary-based metrics become more problematic, and the less said about the popular Rand index, the better. I've advocated for the variation of information (VI) before, for a few reasons, the most compelling being that it is measured in bits and easily interpretable, and that it decomposes very naturally into oversegmentation (false splits) and undersegmentation (false merges). See the discussion here, under "Results/Evaluation", including Fig 4:

http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0071715#s3
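
To make that decomposition concrete, here is a minimal NumPy sketch of a
split VI computation (my own illustration for this email, not code from any
particular package; the label images are assumed to hold non-negative
integer labels):

import numpy as np

def split_vi(gt, seg):
    """Variation of information between two label images, in bits,
    decomposed into oversegmentation H(seg|gt) and undersegmentation
    H(gt|seg). Their sum is the total VI."""
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log2(p))
    # contingency table: counts of each (gt label, seg label) pair
    cont = np.zeros((gt.max() + 1, seg.max() + 1))
    np.add.at(cont, (gt.ravel(), seg.ravel()), 1)
    p = cont / cont.sum()                      # joint label distribution
    h_joint = entropy(p)
    over = h_joint - entropy(p.sum(axis=1))    # H(seg|gt): false splits
    under = h_joint - entropy(p.sum(axis=0))   # H(gt|seg): false merges
    return over, under

Low values in both components indicate a good segmentation; which component
matters more depends on whether false splits or false merges are more
expensive to fix in your application.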

Sorry for the #shamelessselfpromotion, but since I've written extensively on the topic I'd rather not repeat myself. ;)

As I mentioned, this is a deep field and others may weigh in with different opinions. Hope this helps!

Juan.

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html
Re: Measure/Unit for Goodness of Segmentation?

gankaku
Dear Jacob, dear all,

I had the same problem a while ago and thought about combining a
color-coded method for better visual inspection of the final segmentation
(speaking of binarization, not multi-class segmentation) with a
semi-quantitative evaluation. The latter gives a quality value defined by
the pixels considered true positive versus the false-positive and
false-negative pixels, relative to a reference intensity chosen by the user
(here we have the old user bias again ;-) )
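
To illustrate the idea, a simplified Python sketch (my own illustration,
not the published formula and not the plugin code; "binary" is the boolean
binarization result, "img" the original image):

import numpy as np

def pixel_quality(binary, img, ref_intensity):
    """Score a binarization against a reference mask derived from a
    user-chosen intensity: the fraction of relevant pixels classified
    correctly (a Jaccard-style value between 0 and 1)."""
    reference = img >= ref_intensity             # user-biased reference mask
    tp = np.count_nonzero(binary & reference)    # correctly kept pixels
    fp = np.count_nonzero(binary & ~reference)   # wrongly kept pixels
    fn = np.count_nonzero(~binary & reference)   # wrongly discarded pixels
    return tp / float(tp + fp + fn)

The Jaccard-style ratio is just one possible choice of summary value.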

You might want to read about the implementation of the idea in the
following publication:

Qualitative and Quantitative Evaluation of Two New Histogram Limiting
Binarization Algorithms
J. Brocher, Int. J. Image Process. 8(2), 2014, pp. 30-48
http://www.cscjournals.org/manuscript/Journals/IJIP/volume8/Issue2/IJIP-829.pdf

The plugin is available as part of the BioVoxxel Toolbox ("Threshold Check")
http://fiji.sc/BioVoxxel_Toolbox#Threshold_Check

I thought this might be a good method for comparing the original image
with its different autothresholding outputs (as available in ImageJ/Fiji).
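
Outside ImageJ, such a comparison could look roughly like this in Python
with scikit-image (a hypothetical sketch reusing the pixel_quality function
above; the reference intensity of 100 is an arbitrary example):

from skimage import filters

methods = {
    "otsu": filters.threshold_otsu,
    "li": filters.threshold_li,
    "yen": filters.threshold_yen,
    "triangle": filters.threshold_triangle,
}
for name, method in methods.items():
    binary = img > method(img)   # apply each autothreshold to the image
    print(name, pixel_quality(binary, img, ref_intensity=100))
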
The user-based reference selection was chosen here because the "quality" of
the outcome may also depend on the final purpose of the extraction process.
Other methods often rely on ground-truth images for comparison. The problem
with ground truth, in my eyes, is comparability across different images;
moreover, it is no less biased than a reference intensity selection.
Nevertheless, I am more than open to and interested in any discussion
regarding this topic.

I am well aware that the described method is not perfect, but I would also
like to get some feedback about potential improvements.

Kind regards,
Jan

--

CEO: Dr. rer. nat. Jan Brocher
phone:  +49 (0)6234 917 03 39
mobile: +49 (0)176 705 746 81
e-mail: [hidden email]
info: [hidden email]
inquiries: [hidden email]
web: www.biovoxxel.de

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html