Dear mailing list,
I have developed a nice macro for identifying colocalized signals in z-stack confocal images with multiple channels/colors. However, my advisor/professor has now come to question my method for setting a threshold for signal/no-signal in the individual channels.

My manual method has been to simply raise the threshold above what I can relatively confidently see is background, such as large areas with no apparent staining. The reason I did it manually is that when I played around with the automatic thresholding methods in ImageJ, I decided they were no better than manual and could be subject to mistakes.

My supervisor now feels that this sounds too subjective and would not look good in a paper. He has therefore asked me to find a way that is more guided, e.g. by the histogram, anything that is less subjective (I'm not sure whether he is worried about accuracy or about how it sounds in a paper).

What is the current standard for this kind of analysis in scientific journals, in particular with regard to the acceptability of manual thresholding of immunofluorescent brain sections stained with various antibodies (and nuclear markers and neuron tracers)? Is there a preference for automated, manual, or hybrid methods? Could I "get away" with something like this: "Thresholds were set manually at a level that excluded most pixels in assumed background areas. Inspection of the assigned threshold level in the ImageJ intensity histogram showed that the thresholds were set where the main peak (background pixels) started to reach, or had reached, a minimum value."

The image set I am working on: images of brain sections with 4 colors/channels: a nuclear stain, two immunofluorescence stainings for transcription factors (nuclear localization), and a retrograde nerve-cell staining (nuclear + cytoplasmic).

Grateful for any advice!

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html
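The wording proposed above ("where the main peak started to or had reached a minimum") can be turned into an explicit, reproducible rule by walking the histogram rightward from the background peak until the counts fall below a small fraction of the peak height. A minimal Python sketch (frac is an assumed tuning parameter, not an ImageJ setting):

```python
def peak_decay_threshold(hist, frac=0.01):
    """Histogram-guided version of the manual rule: start at the main
    (background) peak and step right until the bin count falls to
    frac * peak count. Returns the threshold bin index."""
    peak = max(range(len(hist)), key=lambda i: hist[i])
    cutoff = frac * hist[peak]
    for i in range(peak + 1, len(hist)):
        if hist[i] <= cutoff:
            return i
    return len(hist) - 1  # no bin ever decayed below the cutoff
```

Stating the rule this way (peak bin, fraction, first crossing) gives a reviewer something concrete to reproduce, even though the fraction itself still has to be justified.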
Very interesting question indeed. From my experience I know that journals like PNAS accept manual thresholding, at least if it is not at the very core of the work. On the other hand, I myself always shift between automatic and manual methods, trying to strike a balance between subjectivity and, well... a more accurate threshold. One very serious professor in this field gave me the advice to threshold every image in an experiment with the same values, obtained from an image of an "empty view". Maybe this advice can be applied to your work too.
I guess, for example, the triangle method in ImageJ looks pretty okay. Would it be more acceptable to use that instead of a manual threshold?

I'm not quite sure what you mean about an empty image. Do you mean a non-stained negative control?
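For reference, the triangle (Zack) method mentioned above works on the histogram alone: it draws a line from the histogram peak to the end of the tail and picks the bin farthest from that line. A simplified Python sketch, assuming the background peak lies to the left of the signal tail (ImageJ's implementation also handles the mirrored case):

```python
def triangle_threshold(hist):
    """Triangle method sketch: threshold at the bin with maximum
    perpendicular distance from the line joining the histogram peak
    to the last non-empty bin."""
    peak = max(range(len(hist)), key=lambda i: hist[i])
    end = max(i for i, c in enumerate(hist) if c > 0)  # end of bright tail
    best, best_d = peak, 0.0
    for i in range(peak + 1, end):
        # |cross product| is proportional to the distance to the peak-end line
        d = abs((end - peak) * (hist[i] - hist[peak]) + hist[peak] * (i - peak))
        if d > best_d:
            best, best_d = i, d
    return best
```

The method tends to find the "corner" where the background peak's descending slope flattens into the signal tail, which is close to what the manual eyeball approach aims for.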
I mean just taking an image of something that is "background" in general, and then subtracting that value from the whole image and counting it as the thresholding. It depends on your object, I guess. As for me, I have tested all the thresholding methods and only Shanbhag works quite well for my data. Still, the manual method was better :( For now I do this: manually find a threshold for one image, and then use exactly the same values for all images in the experiment. My objects are supposed to have the same noise and background levels within one experiment.
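The "empty view" idea above can be sketched as: measure the mean of a background-only image once, then apply the same cutoff to every image in the experiment. A minimal Python sketch (the offset parameter is an assumption, standing in for however far above background a pixel must be to count as signal):

```python
def mean_intensity(pixels):
    """Mean grey value of a 2-D list of pixels (e.g. an 'empty view' image)."""
    flat = [p for row in pixels for p in row]
    return sum(flat) / len(flat)

def fixed_threshold_mask(pixels, background_mean, offset):
    """Apply one experiment-wide threshold: background level measured once
    on an empty-view image, plus a fixed offset, used identically for
    every image in the experiment."""
    thr = background_mean + offset
    return [[1 if p > thr else 0 for p in row] for row in pixels]
```

Because the cutoff is computed once and reused, every image in the experiment is treated identically, which addresses the subjectivity concern as long as noise and background really are comparable across images.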
In my experience, it is worth the time to find the best auto-threshold for your data set, and then stick to it. Of course, which method is “the best” varies with the images being analyzed. You just have to test them, do a few trial analyses, and find one that produces reasonable numbers reflecting what you see qualitatively. I agree with your advisor: it is not worth the risk of using manual eyeball methods and then having a reviewer raise concerns about bias, whether real or perceived.
If you are analyzing a Z-stack, be sure to move to a bright image in the middle of the stack before applying the threshold to all images in the stack. If you set the threshold on the first image of the stack and it has almost no signal, the thresholding will not be optimal.

A recent example of how we applied Triangle successfully for colocalization analysis can be found in Traffic 15:1344 (2014). The method was robust and the numbers made sense with respect to the biology. For some of the data, which were acquired using a different method, Triangle did not work well, but Otsu did. Your mileage may vary.

Michael
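For reference, Otsu's criterion, which worked for part of the data above, selects the threshold that maximizes the between-class variance of the histogram. A minimal stdlib Python sketch (pixels in bins strictly above the returned index would be treated as signal):

```python
def otsu_threshold(hist):
    """Otsu's method sketch: scan all candidate thresholds t and keep
    the one maximizing between-class variance w_b * w_f * (m_b - m_f)^2."""
    total = sum(hist)
    sum_all = sum(i * c for i, c in enumerate(hist))
    w_b = sum_b = 0
    best_t, best_var = 0, -1.0
    for t in range(len(hist)):
        w_b += hist[t]               # background weight up to and including t
        if w_b == 0:
            continue
        w_f = total - w_b            # foreground weight
        if w_f == 0:
            break
        sum_b += t * hist[t]
        m_b = sum_b / w_b            # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

Otsu assumes a roughly bimodal histogram; for sparse staining where the signal class is tiny, Triangle or an entropy-based method often behaves better, which is why trial analyses on your own data are worthwhile.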
Segmenting images with multiple features is a difficult problem.

Manual thresholding provides scope for bias, but blinded manual thresholding, where the person setting the threshold is unaware of which experimental group the images belong to, is acceptable. I would also suggest recording the thresholds and making these, along with the original images, available after publication.

A related question is which images, of all those that could have been acquired, were used in the analysis, and how they were chosen. This should form part of the methods section but is rarely described.
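The blinding suggested above can itself be made reproducible by renaming copies of the images before anyone sets thresholds. A minimal Python sketch (a hypothetical helper, not an ImageJ feature; the flat-directory layout and numeric naming scheme are assumptions):

```python
import csv
import random
import shutil
from pathlib import Path

def blind_images(src_dir, dst_dir, key_csv, seed=None):
    """Copy images under shuffled numeric names so the rater cannot tell
    which experimental group each image belongs to. The name mapping is
    written to key_csv for unblinding after thresholds are recorded."""
    src, dst = Path(src_dir), Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    files = sorted(p for p in src.iterdir() if p.is_file())
    order = list(range(len(files)))
    random.Random(seed).shuffle(order)
    with open(key_csv, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["blinded", "original"])
        for code, idx in enumerate(order):
            new_name = f"{code:04d}{files[idx].suffix}"
            shutil.copy2(files[idx], dst / new_name)
            writer.writerow([new_name, files[idx].name])
```

Archiving the key file together with the recorded thresholds gives exactly the audit trail suggested above: anyone can later check which threshold was applied to which original image.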
Dear Anders,
There are two very good publications on this topic that come to mind:

Rossner, M., and Yamada, K. M. (2004). What's in a picture? The temptation of image manipulation. J. Cell Biol. 166, 11-15.

Cromey, D. W. (2010). Avoiding twisted pixels: Ethical guidelines for the appropriate use and manipulation of scientific digital images. Sci. Eng. Ethics 16, 639-667.

I can't find any specific comments about thresholding, but the "5th commandment" listed in Cromey's publication states:

5. Digital Images that will be Compared to one Another Should be Acquired under Identical Conditions, and any Post-acquisition Image Processing Should also be Identical: "When images are to be compared to one another, the processing of the individual images should be identical. This includes acquisition techniques such as background subtraction or white-level balancing, which should be documented in the methods section. The same principle applies to publication figures, especially if multiple images will be published together in a single figure. This assists the reader in understanding how each image relates to the others in the group. Individual images within a figure should only be processed differently if there are compelling reasons to do so. In such cases, the differences must be explained in the methods section or the figure legend. Honesty, and completeness, are the best policies."

I think the implication is that an automatic, computer-guided thresholding workflow is preferred over one that requires human intervention, so that all data are treated equally. But as long as your methods section is clear about why and how you chose manual thresholding, I think it would be acceptable in a paper. Ask yourself, however: would someone else have thresholded your images the same way you did? If not, they might get different results and a different interpretation of your analysis.

Bottom line: supply as much information as you can about how you performed the image analysis, especially if the crux of the paper hinges on it. Providing the raw data in the supplementary materials is another way for reviewers to check the validity of your manual thresholding argument.

John Oreopoulos
Dear all,
This is an interesting question. For a starter, there are many different types of automatic thresholding each with their own assumptions. There is no universal thresholding because the images statistics differ very much form the type of image (i.e. fluorescence, bright field, derivative) illumination or acquisition conditions and the properties of the background/scene. So what you can do is try out several algos until you find the best for your data and then tweak it manually. For most cases I have given up intensity thresholding years ago. Nowadays I am mostly doing image segmentation using texture features or second order statistics. Best regards, Dimiter -----Original Message----- From: ImageJ Interest Group [mailto:[hidden email]] On Behalf Of IMAGEJ automatic digest system Sent: 07 December, 2014 6:00 To: [hidden email] Subject: IMAGEJ Digest - 5 Dec 2014 to 6 Dec 2014 (#2014-353) There are 17 messages totaling 1178 lines in this issue. Topics of the day: 1. imaging enhancements (2) 2. How to use setLocation and measure 3. Hough transform problem Edge detection 4. Where can I get Adaptive Histogram Equalization (AHE) Plugin in ImageJ 5. Is manual thresholding methods accepted by scientific journals? (7) 6. flip ROI selection (3) 7. Latest Bio-Formats in Fiji 8. Error when using BioFormats Import for Macro -- ImageJ mailing list: http://imagej.nih.gov/ij/list.html ---------------------------------------------------------------------- Date: Fri, 5 Dec 2014 22:33:24 -0800 From: Mohammad Faizal Hassan <[hidden email]> Subject: Re: imaging enhancements Hi Phanikanth , For AHE and CLAHE, you can download them from this link respectively 1. http://svg.dmi.unict.it/iplab/imagej/Plugins/Forensics/Histogram%20Equalization/HistogramEqualization.html 2. http://rsbweb.nih.gov/ij/plugins/clahe/index.html By the way, I also have a problem in getting the AHE plugin that can be implemented in the ImageJ software. 
I have been searching it from many sources in the internet but still did not find any. It is much appreciated if you can tell me from where I can get the plugin of AHE for free in the internet. Looking forward for your reply. Please help me. Thank you in advance. ----- Mohammad Faizal Hassan -- View this message in context: http://imagej.1557.x6.nabble.com/imaging-enhancements-tp3691329p5010810.html Sent from the ImageJ mailing list archive at Nabble.com. -- ImageJ mailing list: http://imagej.nih.gov/ij/list.html ------------------------------ Date: Sat, 6 Dec 2014 10:05:01 +0200 From: Avital Steinberg <[hidden email]> Subject: Re: How to use setLocation and measure Thanks, but I don't want to create thousands of ROIs. I succeeded in moving the ROI around using: rm.select(img,selectIndex); rm.translate(offsetX,offsetY); So the selected ROI has been moved to the place I wanted to move it to. The Image Plus object is associated with the ROI I'm moving around. Therefore, using: ImageStatistics stats = img.getStatistics(); Works well. But - I'd like to hide the image, and img is the Image Plus object. So I would have liked to work on the ImageProcessor object, like this: ImageProcessor ip = img.getProcessor(); ImageStatistics stats = ImageStatistics.getStatistics(ip, MEAN, cal); But this gives me strange values. Any idea how to link the Image Processor object with the currently selected ROI? The ImagePlus object is linked to the selected ROI. Thank you, Avital On Fri, Dec 5, 2014 at 11:33 PM, Rasband, Wayne (NIH/NIMH) [E] < [hidden email]> wrote: > On Dec 5, 2014, at 12:24 PM, Avital Steinberg > <[hidden email]> > wrote: > > > > Hi, > > I am trying to move an ROI around in a java plugin and measure the > > mean intensity. I am measuring the ImageStatistics on an > > ImageProcessor, > since I > > don't want to display the image. The problem is that it is not > > actually moving the ROI around. Here is my code: (img is ImagePlus) > > Create a new ROI for each location. 
The following example measures the > mean of a circular ROI at 10 different locations. > > -wayne > > img = IJ.createImage("Untitled", "16-bit ramp", 500, 500, 1); > w = h = 50; > for (i=0; i<10; i++) { > roi = new OvalRoi(i*50, i*50, w, h); > img.setRoi(roi); > stats = img.getStatistics(); > IJ.log(i+" "+stats.mean); > } > > > > I am giving a simple example with two different locations, but my > > purpose is to move the ROI to many different locations and measure > > the mean intensities. > > > > ImageProcessor ip = img.getProcessor(); Calibration cal = > > img.getCalibration(); > > img.setRoi(new OvalRoi(598, 380, 32, 59)); Roi roi = > > img.getRoi(); > > ImageStatistics stats = ImageStatistics.getStatistics(ip, > > MEAN, cal); IJ.log("The mean intensity is: " + stats.mean); > > roi.setLocation(1056, 304); stats = > > ImageStatistics.getStatistics(ip, MEAN, cal); IJ.log("The mean > > intensity is now: " + stats.mean); > > > > How should I change my code so that it would really move the ROI to > > a new location and measure the mean intensity in this location? > > > > Thanks, > > Avital > > -- > ImageJ mailing list: http://imagej.nih.gov/ij/list.html > -- ImageJ mailing list: http://imagej.nih.gov/ij/list.html ------------------------------ Date: Sat, 6 Dec 2014 06:52:03 -0500 From: Nizar SGHAIER <[hidden email]> Subject: Re: Hough transform problem Edge detection Good afternoon, thank you all for your replies, so I am trying to detect edges and not circles, and I have used the Burger Hough Linear transform and I did not found the disired result, I am attaching some pictures to this e-mail to explain more what I am looking for. Big thanks To you Wilhelm for the plugin posted on your dropbox, I will test it today and tell you if it returns the desired results. I am attaching images and the found java source code that gave me the disired results. 
PS : the firth link contains the java source code you can put the two files together in the same directory and compile and run using ImageJ to get it work. The fifth link conatins the obtained results using the java source code. thank you all cheers -- ImageJ mailing list: http://imagej.nih.gov/ij/list.html ------------------------------ Date: Sat, 6 Dec 2014 04:39:24 -0800 From: Mohammad Faizal Hassan <[hidden email]> Subject: Where can I get Adaptive Histogram Equalization (AHE) Plugin in ImageJ Hi, I am Faizal Hassan a final year student from Universiti Teknologi MARA, Malaysia. Currently I am doing my final year project about enhancement of medical images using *Adaptive Histogram Equalization (AHE) *in *ImageJ software* . However, I have a difficulty in getting the AHE plugin that can be implemented in the software. I have been searching it from many sources in the internet but still did not find any. It is much appreciated if you can tell me from where I can get the plugin of AHE for free in the internet. I really need your help. FYI, I have found a plugin named Histogram Equalization (HE) that can be used in ImageJ (I have given the URL below) and I wish I could get the same thing for the AHE plugin. Looking forward for your reply. Please help me. Thank you in advance. *The HE URL: * http://svg.dmi.unict.it/iplab/imagej/Plugins/Forensics/Histogram%20Equalization/HistogramEqualization.html ----- Mohammad Faizal Hassan -- View this message in context: http://imagej.1557.x6.nabble.com/Where-can-I-get-Adaptive-Histogram-Equalization-AHE-Plugin-in-ImageJ-tp5010813.html Sent from the ImageJ mailing list archive at Nabble.com. -- ImageJ mailing list: http://imagej.nih.gov/ij/list.html ------------------------------ Date: Sat, 6 Dec 2014 15:00:22 +0000 From: Anders Lunde <[hidden email]> Subject: Is manual thresholding methods accepted by scientific journals? 
Dear mailing list, I have developed a nice macro for identifying colocalized signals for z-stack confocal images with multiple channels/colors. However, my advisor/professor has now come to question my method for setting a threshold for signal/no-signal in the infividual channels. My manual method has been to simply raise the threshold above what I relatively confidently can see is background, like large areas with no apparent staining. The reason I did it manually is because when I played around with the automatic thresholding methods in ImageJ I decided that they were not any better than manual and could be subject to mistakes. My supervisor now feels that this sounds too subjective and would not look good in a paper. He therefore asked me to try to find a way that was more guided e.g. by the histogram or something, anything that is less subjective (not sure if he is worried about accuracy or how it sounds in a paper). What is the current standard for this kind of analysis in scientific journals, in particular with regards to the acceptability of manual thresholding of immunofluorescent brain sections stained with various antibodies (and nuclear markers and neuron trancers)? Is there a preference for automated, manual or some hybrid methods? Could I "get-away" with something like this: "Thresholds were set manually at a level that excluded most pixels in assumed background areas. Inspection of the assigned threshold level in the ImageJ intensity histogram showed that the thresholds were set at where the main peak (background pixels) started to or had reached a minimum value." Image set that Im working on: I am working with images of brain sections with 4 colors/channels: nuclear stain, two immunofluorescence staining for transciption factors (nuclear localization), and a retrograde nerve cell staning (nuclear + cytoplasm staining). Greateful for any advice! 
-- ImageJ mailing list: http://imagej.nih.gov/ij/list.html ------------------------------ Date: Sat, 6 Dec 2014 18:24:32 +0300 From: Василий Попков <[hidden email]> Subject: Re: Is manual thresholding methods accepted by scientific journals? Very interesting question indeed. From my experience I know, that journals lika PNAS can get along with manual thresholding. At least if it is not the very core of a work. On the other hand I myself always shift between auto and manual method, trying to find balance between subjectivity and well... more accurate threshold. One very serious professor in this field have given me advice to threshold each image with the same numbers for each experiment, obtained from image of "empty view". Maybe this advice can be applied to your work too. 2014-12-06 18:00 GMT+03:00 Anders Lunde <[hidden email]>: > Dear mailing list, > > I have developed a nice macro for identifying colocalized signals for > z-stack confocal images with multiple channels/colors. However, my > advisor/professor has now come to question my method for setting a > threshold for signal/no-signal in the infividual channels. > > My manual method has been to simply raise the threshold above what I > relatively confidently can see is background, like large areas with no > apparent staining. The reason I did it manually is because when I > played around with the automatic thresholding methods in ImageJ I > decided that they were not any better than manual and could be subject to mistakes. > > My supervisor now feels that this sounds too subjective and would not > look good in a paper. He therefore asked me to try to find a way that > was more guided e.g. by the histogram or something, anything that is > less subjective (not sure if he is worried about accuracy or how it sounds in a paper). 
> > What is the current standard for this kind of analysis in scientific > journals, in particular with regards to the acceptability of manual > thresholding of immunofluorescent brain sections stained with various > antibodies (and nuclear markers and neuron trancers)? Is there a > preference for automated, manual or some hybrid methods? Could I > "get-away" with something like this: "Thresholds were set manually at > a level that excluded most pixels in assumed background areas. > Inspection of the assigned threshold level in the ImageJ intensity > histogram showed that the thresholds were set at where the main peak > (background pixels) started to or had reached a minimum value." > > Image set that Im working on: > > I am working with images of brain sections with 4 colors/channels: > nuclear stain, two immunofluorescence staining for transciption > factors (nuclear localization), and a retrograde nerve cell staning > (nuclear + cytoplasm staining). > > Greateful for any advice! > > -- > ImageJ mailing list: http://imagej.nih.gov/ij/list.html > -- ImageJ mailing list: http://imagej.nih.gov/ij/list.html ------------------------------ Date: Sat, 6 Dec 2014 10:27:01 -0500 From: Liz Bless <[hidden email]> Subject: flip ROI selection Hello - I have images of brain sections - sometimes the right hemisphere and sometimes the left hemisphere. I have an ROI that I would like to analyze. How can I get my ROI from the left hemisphere to flip (not rotate) so that it fits onto the right hemisphere? In other words I would like a mirror image of the ROI I have made. Any advice would be very helpful! Thank you -- ImageJ mailing list: http://imagej.nih.gov/ij/list.html ------------------------------ Date: Sat, 6 Dec 2014 19:50:58 +0300 From: Василий Попков <[hidden email]> Subject: Re: flip ROI selection Maybe it would be easier to mirior image itself? 
2014-12-06 18:27 GMT+03:00 Liz Bless <[hidden email]>: > Hello - I have images of brain sections - sometimes the right > hemisphere and sometimes the left hemisphere. I have an ROI that I > would like to analyze. How can I get my ROI from the left hemisphere > to flip (not rotate) so that it fits onto the right hemisphere? In > other words I would like a mirror image of the ROI I have made. Any advice would be very helpful! > Thank you > > -- > ImageJ mailing list: http://imagej.nih.gov/ij/list.html > -- ImageJ mailing list: http://imagej.nih.gov/ij/list.html ------------------------------ Date: Sat, 6 Dec 2014 12:08:51 -0500 From: Tiago Ferreira <[hidden email]> Subject: Re: flip ROI selection Dear Liz, On Dec 6, 2014, at 10:27, Liz Bless <[hidden email]> wrote: > I would like a mirror image of the ROI I have made. The two macros below would do it: Install them, create a selection, and press F1/F2 to obtain dorsal-caudal/ventral-dorsal mirrors of the active ROI. These used to be present in earlier versions of the ROI Manager Tools[1], but somehow got removed as were deemed too specific (they were mainly used for manual quantification of in-situ slices). They could be re-introduced if you think they are of broader scope. In that case, they would be made available through the BAR update site[2]. To install: Copy the snippet below, paste it into ImageJ, then used the Install command in the editor window. 
Hope it helps, -tiago [1] http://imagej.net//plugins/roi-manager-tools/ [2] http://fiji.sc/BAR // Begin of snippet macro "Contralateral in X [F1]" { mirrorROI(-1, 1); } macro "Contralateral in Y [F2]" { mirrorROI(1, -1); } function mirrorROI(fx, fy) { roi = selectionType(); if (roi>8 || roi<0) exit("Cannot mirror current selection"); getSelectionBounds(x, y, width, height); originalX = x; originalY = y; getSelectionCoordinates(x, y); aX= newArray(x.length); aY= newArray(x.length); for (i=0; i<x.length; i++) { aX[i] = fx*x[i]; aY[i] = fy*y[i]; } makeSelection(roi, aX, aY); setSelectionLocation(originalX, originalY); } // End of snippet -- ImageJ mailing list: http://imagej.nih.gov/ij/list.html ------------------------------ Date: Sat, 6 Dec 2014 13:26:07 -0500 From: "MrScienctistMan (Original poster)" <[hidden email]> Subject: Re: Is manual thresholding methods accepted by scientific journals? I guess for example the triangle method in ImageJ looks pretty okay. Would it be more aceptable to use for example this instead of manual threshold? Im not quite sure what you mean about empty image. Do you mean a non-stained negative control? -- ImageJ mailing list: http://imagej.nih.gov/ij/list.html ------------------------------ Date: Sat, 6 Dec 2014 19:27:34 +0200 From: Aryeh Weiss <[hidden email]> Subject: Re: Latest Bio-Formats in Fiji On 12/6/14, 12:16 AM, Curtis Rueden wrote: > Hi Aryeh, > > > About two weeks ago, when 5.0.6 was announced, I allowed the Fiji > > updater to update formats-gpl. The updated version correctly read my > > multifield ND2, so I thought that this bug was resolved. However, > > yesterday, following an update, the problem reappeared. Again,the > > workaround was to use the 5.0.2 version of formats-gpl. > > That is very odd. It would be great if you could somehow isolate the > difference between the working version of Fiji vs. the non-working > one—especially if they both used Bio-Formats 5.0.6. 
> What happens if you enable the Bio-Formats update site? Still non-working?

It fails both ways. Here is the info from the advanced mode of the updater for the two bioformats-gpl files:

bioformats from fiji site: bioformats-gpl-5.0.6, Modified 11 Nov 2014
bioformats from bioformats site: bioformats-gpl, Modified 04 Nov 2014

Both fail the same way.

> If you get a chance, you could also test with the "showinf" command line tool [1], which would rule out any Fiji-specific installation issues. And of course testing from multiple different machines might also shed some light on things.

The showinf output has the same error -- it is not Fiji specific. I can also tell you that the same problem occurs on a Windows 7 64-bit OS, and the workaround is identical (fall back to gpl-5.0.2). As long as the workaround works, it is ok. However, that seems like living on borrowed time...

> Regards,
> Curtis

Thank you for your attention to this.

Best regards,
--aryeh

> [1] http://openmicroscopy.org/site/support/bio-formats5/users/comlinetools/
>
> On Thu, Nov 27, 2014 at 1:45 AM, Aryeh Weiss <[hidden email] <mailto:[hidden email]>> wrote:
>
> On 10/6/14, 11:17 PM, Aryeh Weiss wrote:
>
> Hi Melissa
>
> On 10/6/14, 4:27 PM, Melissa Linkert wrote:
>
> Hi Aryeh,
>
> You are correct. We will be switching over to 5.1.x, but in the meantime it looks like the nd2 fix <https://github.com/openmicroscopy/bioformats/commit/3c857c2e5743e9b23801408bd61de49697716b4d> you're looking for only affected the formats-gpl component. You should be able to just download that specific jar, and replace the formats-gpl-5.0.5.jar in your Fiji.app/jars/bioformats directory (the old version will just be called "formats-gpl.jar" if you still have the Bio-Formats update site enabled).
>
> How can I turn this into the formats-gpl.jar file?
> If I can do that, I will replace my version 5.0.2 file, and see if it works properly.
>
> Instead of compiling from source, you can also download the latest 5.1 builds from:
>
> http://ci.openmicroscopy.org/view/Bio-Formats/job/BIOFORMATS-5.1-latest/lastSuccessfulBuild/artifact/artifacts/
>
> For anyone trying this though, please do note that mixing 5.1 and 5.0 files is not supported and very unlikely to work, so you will need to update all files to 5.1. Be aware that this can cause problems with other plugins that use Bio-Formats, as there are API differences between the two versions (though we have not yet released 5.1.0).
>
> Thank you (and also Mark) for your replies. I downloaded the 5.1 builds for all of the jar files in the jars/bio-formats directory of my Fiji distribution. When I dragged one of my nd2 files into Fiji, bio-formats opened, but had the same bug as reported previously. When I restored the old bio-formats directory with the 5.0.2 version of gpl, it worked properly. I did not try the 5.0.2 gpl with the 5.1 release, because of Melissa's warning concerning mixing the two versions.
>
> Best regards
> --aryeh
>
> We are also considering this fix for a 5.0.6 release of Bio-Formats, which would eliminate the need to manually update files.
>
> Regards,
> -Melissa
>
> About two weeks ago, when 5.0.6 was announced, I allowed the Fiji updater to update formats-gpl. The updated version correctly read my multifield ND2, so I thought that this bug was resolved. However, yesterday, following an update, the problem reappeared. Again, the workaround was to use the 5.0.2 version of formats-gpl.
>
> I have not had a chance to search my recent backups for this file to see if I can determine exactly when it broke again -- I will try to do that.
> > > Best regards, > --aryeh > > > > On Mon, Oct 06, 2014 at 07:52:39AM -0500, Mark Hiner wrote: > > Hi Aryeh, > > How can I turn this into the formats-gpl.jar file? > > Great question! I'm cc'ing the list because I think > this information is of > general interest. > > The Bio-Formats site has a page with instructions for > downloading and > building the jars from source: > https://www.openmicroscopy.org/site/support/bio-formats5/developers/building-bioformats.html > > > After cloning the Bio-Formats repository > <http://git-scm.com/book/en/Git-Basics-Getting-a-Git-Repository#Cloning-an-Existing-Repository>, > > you can either: > * Check out a specific tag > <http://stackoverflow.com/questions/791959/download-a-specific-tag-with-git> > > or > * Check out a specific commit > <http://stackoverflow.com/questions/2007662/rollback-to-an-old-commit-using-git> > > > Or it should be fine to just build the "develop" > branch, since the patch in > question was merged. > > After building Bio-Formats, all the output .jars will > be in the /artifacts > sub-directory of your Bio-Formats checkout. > > Let me know if you have any more questions, and thank > you for taking the > time to test this! > > Regards, > Mark > > > On Sat, Oct 4, 2014 at 3:03 PM, Aryeh Weiss > <[hidden email] <mailto:[hidden email]>> wrote: > > Hi Mark > > On 10/1/14, 10:21 PM, Mark Hiner wrote: > > Hi Christophe, > > As far as I understand, the current dailies > are 5.0, not 5.1 yet > You are correct. We will be switching over to > 5.1.x, but in the mean time > it looks like the nd2 fix > <https://github.com/openmicroscopy/bioformats/commit/ > 3c857c2e5743e9b23801408bd61de49697716b4d> > you're looking for only affected the > formats-gpl component. You should be > able to just download that specific jar, and > replace the > formats-gpl-5.0.5.jar in your > Fiji.app/jars/bioformats directory (the old > version will just be called "formats-gpl.jar" > if you still have the > Bio-Formats update site enabled). 
> > How can I turn this into the formats-gpl.jar file?
> > If I can do that, I will replace my version 5.0.2 file, and see if it works properly.
>
> thanks and best regards
> --aryeh
>
> --
> Aryeh Weiss
> Faculty of Engineering
> Bar Ilan University
> Ramat Gan 52900 Israel
>
> Ph: 972-3-5317638
> FAX: 972-3-7384051
>
> --
> ImageJ mailing list: http://imagej.nih.gov/ij/list.html

--
Aryeh Weiss
Faculty of Engineering
Bar Ilan University
Ramat Gan 52900 Israel

Ph: 972-3-5317638
FAX: 972-3-7384051

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html

------------------------------

Date: Sat, 6 Dec 2014 21:53:11 +0300
From: Василий Попков <[hidden email]>
Subject: Re: Is manual thresholding methods accepted by scientific journals?

I mean just taking an image of something that is "background" in general, and then subtracting this value from the whole image and treating that as the thresholding. It depends on your object, I guess. As for me, I have tested all the thresholding methods and only Shanbhag works quite well for me. Still, the manual method was better :(

For now I am doing this: find the threshold manually for one image and then use exactly the same values for all images in the experiment. My objects are supposed to have the same noise and background levels within one experiment.

2014-12-06 21:26 GMT+03:00 MrScienctistMan (Original poster) <[hidden email]>:

> I guess, for example, the triangle method in ImageJ looks pretty okay. Would it be more acceptable to use, for example, this instead of a manual threshold?
>
> I'm not quite sure what you mean about an empty image. Do you mean a non-stained negative control?
>
> --
> ImageJ mailing list: http://imagej.nih.gov/ij/list.html

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html

------------------------------

Date: Sat, 6 Dec 2014 14:35:13 -0500
From: Michael Schell <[hidden email]>
Subject: Re: Is manual thresholding methods accepted by scientific journals?
In my experience, it is worth the time to find the best auto-threshold for your data set, and then stick to it. Of course, which method is “the best” varies with the images being analyzed. You just have to test them, do a few trial analyses, and find one that produces reasonable numbers that reflect what you see qualitatively.

I agree with your advisor. It is not worth the risk of using manual eyeball methods and then having a reviewer raise a problem with bias, whether it be real or perceived.

If you are analyzing a Z-stack, be sure you move to a bright image in the middle of the stack before applying the threshold to all images in the stack. If you apply a threshold to the first image in the stack and it has almost no signal, the thresholding will not be optimal.

A recent example of how we applied Triangle successfully for colocalization analysis can be found in Traffic 15:1344 (2014). This method was robust and the numbers made sense with respect to the biology. For some of the data, which were acquired using a different method, Triangle did not work well, but Otsu did. Your mileage may vary.

Michael

> On Dec 6, 2014, at 10:00 AM, Anders Lunde <[hidden email]> wrote:
>
> Dear mailing list,
>
> I have developed a nice macro for identifying colocalized signals for z-stack confocal images with multiple channels/colors. However, my advisor/professor has now come to question my method for setting a threshold for signal/no-signal in the individual channels.
>
> My manual method has been to simply raise the threshold above what I relatively confidently can see is background, like large areas with no apparent staining. The reason I did it manually is because when I played around with the automatic thresholding methods in ImageJ I decided that they were not any better than manual and could be subject to mistakes.
>
> My supervisor now feels that this sounds too subjective and would not look good in a paper.
> He therefore asked me to try to find a way that was more guided, e.g. by the histogram or something, anything that is less subjective (not sure if he is worried about accuracy or how it sounds in a paper).
>
> What is the current standard for this kind of analysis in scientific journals, in particular with regards to the acceptability of manual thresholding of immunofluorescent brain sections stained with various antibodies (and nuclear markers and neuron tracers)? Is there a preference for automated, manual or some hybrid methods? Could I "get away" with something like this: "Thresholds were set manually at a level that excluded most pixels in assumed background areas. Inspection of the assigned threshold level in the ImageJ intensity histogram showed that the thresholds were set where the main peak (background pixels) started to approach or had reached a minimum value."
>
> The image set that I'm working on:
>
> I am working with images of brain sections with 4 colors/channels: a nuclear stain, two immunofluorescence stainings for transcription factors (nuclear localization), and a retrograde nerve cell staining (nuclear + cytoplasm staining).
>
> Grateful for any advice!
>
> --
> ImageJ mailing list: http://imagej.nih.gov/ij/list.html

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html

------------------------------

Date: Sat, 6 Dec 2014 20:17:46 +0000
From: Jeremy Adler <[hidden email]>
Subject: Re: Is manual thresholding methods accepted by scientific journals?

Segmenting images with multiple features is a problem. Manual thresholding provides scope for bias, but blinded manual thresholding, where the person setting the threshold is unaware of which experimental group the images belong to, is acceptable. I would also suggest recording the thresholds and making these and the original images available after publication.
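The blinding-plus-record-keeping workflow suggested here can be organized outside ImageJ with a small script. This is only an illustrative sketch in Python; the file names, coding scheme, and CSV layout are made up for the example and are not part of any ImageJ tool:

```python
import csv
import random

def blind_images(filenames, seed=None):
    """Assign neutral codes to image files so the person setting
    thresholds cannot tell which experimental group an image is from.
    Returns (sorted list of codes, key mapping code -> original name)."""
    rng = random.Random(seed)
    shuffled = list(filenames)
    rng.shuffle(shuffled)
    key = {"img_%03d" % i: name for i, name in enumerate(shuffled)}
    return sorted(key), key

def record_thresholds(path, thresholds):
    """Write the manually chosen threshold for each coded image to a
    CSV file, so the values can be published alongside the images."""
    with open(path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["image", "threshold"])
        for code in sorted(thresholds):
            writer.writerow([code, thresholds[code]])

# Example: two hypothetical experimental groups, blinded before thresholding
files = ["control_1.tif", "control_2.tif", "treated_1.tif", "treated_2.tif"]
codes, key = blind_images(files, seed=42)
# Placeholder threshold values; in practice these come from the blinded operator
record_thresholds("thresholds.csv", {c: 120 for c in codes})
```

The key mapping codes back to the original file names is kept aside until after thresholding, which is what makes the procedure blinded.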
A related question is which images, of all those that could have been acquired, were used in the analysis, and how they were chosen. This should form part of the methods section but is rarely described.

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html

------------------------------

Date: Sat, 6 Dec 2014 15:27:44 -0500
From: John Oreopoulos <[hidden email]>
Subject: Re: Is manual thresholding methods accepted by scientific journals?

Dear Anders,

There are two very good publications on this topic that come to mind:

Rossner, M., and Yamada, K. M. (2004). What's in a picture? The temptation of image manipulation. J. Cell Biol., 166, 11-15.

Cromey, D. (2010). Avoiding twisted pixels: Ethical guidelines for the appropriate use and manipulation of scientific digital images. Sci. Eng. Ethics, 16(4), 639-667.

I can't find any specific comments about thresholding, but the "5th commandment" listed in Cromey's publication states:

5. Digital Images that will be Compared to one Another Should be Acquired under Identical Conditions, and any Post-acquisition Image Processing Should also be Identical: "When images are to be compared to one another, the processing of the individual images should be identical. This includes acquisition techniques such as background subtraction or white-level balancing, which should be documented in the methods section. The same principle applies to publication figures, especially if multiple images will be published together in a single figure. This assists the reader in understanding how each image relates to the others in the group.
Individual images within a figure should only be processed differently if there are compelling reasons to do so. In such cases, the differences must be explained in the methods section or the figure legend. Honesty, and completeness, are the best policies."

I think it is implied here that an automatic / computer-guided thresholding workflow is preferred over one that requires human intervention, so that all data are treated equally. But as long as you are clear in your methods section about why and how you chose manual thresholding, and state that you have done so, I think this would be acceptable in a paper. Ask yourself, however: would someone else have thresholded your images the same way you did? If not, they might get different results and a different interpretation of your analysis. Bottom line, you should supply as much information as possible describing how you performed your image analysis, especially if the crux of the paper hinges on it. Providing raw data in the supplementary materials is another way for reviewers to check the validity of your manual thresholding argument.

John Oreopoulos

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html

------------------------------

Date: Sat, 6 Dec 2014 14:40:50 -0800
From: JNHBoon <[hidden email]>
Subject: Error when using BioFormats Import for Macro

I'm trying to open a stack of images (400) in .dm4 format using the BioFormats plugin.
Tried using the macro recorder and this is what it gave me:

run("Bio-Formats", "open=[D:\\oxygen_3e-5_1 (missing Minute 3)\\Hour_00\\Minute_00\\Second_00\\oxygen_3e-5_1_Hour_00_Minute_00_Second_00_Frame_0000.dm4] color_mode=Default group_files open_all_series view=Hyperstack stack_order=XYCZT use_virtual_stack axis_1_number_of_images=400 axis_1_axis_first_image=0 axis_1_axis_increment=1 file=[] pattern=[D:\\\\oxygen_3e-5_1 (missing Minute 3)\\\\Hour_00\\\\Minute_00\\\\Second_00\\\\oxygen_3e-5_1_Hour_00_Minute_00_Second_00_Frame_0<000-399>.dm4]");

Unfortunately, when I tried running the macro, it gave me this error. Does anyone know what might be wrong here?

java.lang.NullPointerException
    at ij.Macro.trimKey(Macro.java:154)
    at ij.gui.GenericDialog.getNextBoolean(GenericDialog.java:936)
    at loci.plugins.in.FilePatternDialog.harvestResults(FilePatternDialog.java:192)
    at loci.plugins.in.ImporterDialog.showDialog(ImporterDialog.java:83)
    at loci.plugins.in.ImporterPrompter.promptFilePattern(ImporterPrompter.java:135)
    at loci.plugins.in.ImporterPrompter.statusUpdated(ImporterPrompter.java:84)
    at loci.plugins.in.ImportProcess.notifyListeners(ImportProcess.java:475)
    at loci.plugins.in.ImportProcess.step(ImportProcess.java:751)
    at loci.plugins.in.ImportProcess.execute(ImportProcess.java:146)
    at loci.plugins.in.Importer.showDialogs(Importer.java:141)
    at loci.plugins.in.Importer.run(Importer.java:79)
    at loci.plugins.LociImporter.run(LociImporter.java:81)
    at ij.IJ.runUserPlugIn(IJ.java:202)
    at ij.IJ.runPlugIn(IJ.java:166)
    at ij.Executer.runCommand(Executer.java:131)
    at ij.Executer.run(Executer.java:61)
    at ij.IJ.run(IJ.java:275)
    at ij.macro.Functions.doRun(Functions.java:591)
    at ij.macro.Functions.doFunction(Functions.java:89)
    at ij.macro.Interpreter.doStatement(Interpreter.java:226)
    at ij.macro.Interpreter.doStatements(Interpreter.java:214)
    at ij.macro.Interpreter.run(Interpreter.java:111)
    at ij.macro.Interpreter.run(Interpreter.java:81)
    at ij.macro.Interpreter.run(Interpreter.java:92)
    at ij.plugin.Macro_Runner.runMacro(Macro_Runner.java:153)
    at ij.IJ.runMacro(IJ.java:119)
    at ij.IJ.runMacro(IJ.java:108)
    at net.imagej.legacy.IJ1Helper.runMacro(IJ1Helper.java:782)
    at net.imagej.legacy.plugin.IJ1MacroEngine.eval(IJ1MacroEngine.java:116)
    at net.imagej.legacy.plugin.IJ1MacroEngine.eval(IJ1MacroEngine.java:156)
    at org.scijava.script.ScriptModule.run(ScriptModule.java:175)
    at org.scijava.module.ModuleRunner.run(ModuleRunner.java:167)
    at org.scijava.module.ModuleRunner.call(ModuleRunner.java:126)
    at org.scijava.module.ModuleRunner.call(ModuleRunner.java:65)
    at org.scijava.thread.DefaultThreadService$2.call(DefaultThreadService.java:164)
    at java.util.concurrent.FutureTask.run(FutureTask.java:262)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)

--
View this message in context: http://imagej.1557.x6.nabble.com/Error-when-using-BioFormats-Import-for-Macro-tp5010825.html
Sent from the ImageJ mailing list archive at Nabble.com.

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html

------------------------------

Date: Sat, 6 Dec 2014 22:19:45 -0500
From: Stephan Saalfeld <[hidden email]>
Subject: Re: imaging enhancements

Hi Mohammed and Phanikanth,

CLAHE is AHE with a contrast limit. That is, if you set the contrast limit in CLAHE to a very high value (the maximum intensity value of your image is sufficient), you get AHE. That said, all you need is the CLAHE implementation available in Fiji:
http://fiji.sc/Enhance_Local_Contrast_%28CLAHE%29

Best,
Stephan

On Fri, 2014-12-05 at 22:33 -0800, Mohammad Faizal Hassan wrote:
> Hi Phanikanth,
>
> For AHE and CLAHE, you can download them from this link respectively
> 1. http://svg.dmi.unict.it/iplab/imagej/Plugins/Forensics/Histogram%20Equalization/HistogramEqualization.html
> 2.
> http://rsbweb.nih.gov/ij/plugins/clahe/index.html
>
> By the way, I also have a problem in getting the AHE plugin that can be implemented in the ImageJ software. I have been searching for it from many sources on the internet but still did not find any. It would be much appreciated if you could tell me where I can get the AHE plugin for free on the internet.
>
> Looking forward to your reply. Please help me. Thank you in advance.
>
> -----
> Mohammad Faizal Hassan
> --
> View this message in context: http://imagej.1557.x6.nabble.com/imaging-enhancements-tp3691329p5010810.html
> Sent from the ImageJ mailing list archive at Nabble.com.
>
> --
> ImageJ mailing list: http://imagej.nih.gov/ij/list.html

--
Stephan Saalfeld, Ph.D.
Group Leader
Janelia Farm Research Campus
19700 Helix Drive | Ashburn, VA 20147
Phone: 571-209-4184 | Fax: 571-209-4946
[hidden email]

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html

------------------------------

End of IMAGEJ Digest - 5 Dec 2014 to 6 Dec 2014 (#2014-353)
***********************************************************

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html
Dear Anders, dear all,
Potentially I can also add some personal opinion to this indeed interesting and, moreover, important question, since many people face the problem of deciding on a "proper" method (however you want to define that), as I often find when talking to students about these topics.

Your question actually has two parts which I think need to be addressed (thresholding and co-localization):

1.) Thresholding:

Generally, I would never decide on a method just because you will "get away with it" or because a specific journal would accept it. Just because it will be accepted by a few people does not necessarily mean that it is a suitable or appropriate analysis method. That said, here are some considerations.

As Dimiter pointed out already, thresholding is not easy and sometimes might not lead you to a satisfyingly accurate result. Nevertheless, I have so far had mostly good experiences with automatic thresholding methods. But in many cases you will only get a good feature extraction if you pre-process the images. This in turn might/will lead to an alteration of feature outlines, forms and sizes. So, you first need to figure out such processing steps and check whether the result would still be acceptable regarding the features you need to extract. A possible help in deciding on a suitable filtering and thresholding might be the "Filter Check" as well as the "Threshold Check" (using the available auto thresholds in ImageJ and Fiji) from the BioVoxxel Toolbox (http://fiji.sc/BioVoxxel_Toolbox).

I further agree with Michael Schell that it is worth investing the time in finding suitable auto-thresholds if possible, or one of the methods Dimiter mentioned, because you reduce user bias and improve the extraction result. Manual thresholds are not really suitable if you try to analyze a bigger set of data, for several reasons.
In terms of comparability, you need to apply the same threshold to all images in your experiment to keep the user bias at least somewhat lower (besides your decision on the initial threshold). Due to natural variability in your images, a manual threshold with one or even two (an upper and a lower) cut-off value(s) will not work on a full set of data under most conditions. An automatic threshold might also fail to achieve this, but since those algorithms consider the image histogram they account for those variabilities, and the chance of finding suitable cut-offs is much higher. If you manually threshold each image with different cut-off values, you actually lose comparability in your experiment completely, due to massive user bias.

Another problem, since you were talking about separating signal from no-signal, is identifying a suitable separation of those two parts. The background in an intensity-based fluorescence image is in most cases very dark. Our vision, unfortunately, is very prone to misinterpreting different intensities, which gets especially difficult the darker they are. Additionally, we perceive different colors with different visual sensitivities. Thus, it also matters whether you look at a grayscale image (with a gray LUT) or at the same image in one of different false colors. Gray is always preferable in this context because you will see differences best. Nevertheless, manual thresholding is very subjective and should be avoided whenever possible. Even without it, it is already difficult enough to achieve objective analyses for many studies (my opinion).

2.) Co-localization:

If I got it correctly, your initial aim is a co-localization study. In this context, I would not rely only on a pixel-based overlap determination. This might give you a hint, and is partially used during object-based co-localization studies.
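The pixel-based overlap determination mentioned here can be sketched in a few lines. This is a simplified, illustrative Manders-style fraction computed on flattened pixel lists, not the implementation used by Coloc2 or JACoP, and the thresholds and pixel values below are arbitrary:

```python
def overlap_fraction(ch1, ch2, t1, t2):
    """Fraction of above-threshold pixels in channel 1 that are also
    above threshold in channel 2 (a simplified, object-free version of
    a Manders-style coefficient). ch1/ch2 are flattened pixel values."""
    signal1 = [i for i, v in enumerate(ch1) if v > t1]
    if not signal1:
        return 0.0
    both = sum(1 for i in signal1 if ch2[i] > t2)
    return both / len(signal1)

# Toy two-channel data: pixels 2, 3 and 5 are "signal" in the red channel,
# of which pixels 2 and 5 are also "signal" in the green channel
red   = [0, 10, 200, 220, 5, 180]
green = [0, 15, 190, 10, 7, 210]
m1 = overlap_fraction(red, green, t1=100, t2=100)  # 2 of 3 red-signal pixels overlap
```

Note that the result depends directly on the two thresholds, which is exactly why the thresholding question matters so much for colocalization studies.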
Nevertheless, you should consider additional parameters, such as the resolution limit versus your actual image resolution, and the intensity distribution in your images/features. Therefore, I would first pay attention to a proper imaging setup, with a well Nyquist-sampled image and usage of the full dynamic range (besides other parameters). To this end the following paper might be helpful:

Jennifer C. Waters, J Cell Biol. Jun 29, 2009; 185(7): 1135-1148. Accuracy and precision in quantitative fluorescence microscopy.

Analysis-wise, there are two excellent tools available in Fiji, namely Coloc2 and JACoP. The latter implements two object-based methods, including a thresholding. In this context it might also be a possibility to use the binary images resulting from an auto-threshold and mask your original images with them (e.g. with >Edit >Paste Control or >Process >Image Calculator). This is similar to applying a ROI, as is possible in the Coloc2 plugin.

As a suggestion for co-localization studies: I would not rely on a single output method only, but rather combine several suitable ones, as is possible in Coloc2 and JACoP, to get better confidence about a potential co-localization.

So as not to create more confusion, here is some interesting reading and important literature regarding co-localization:

There was an interesting discussion about different co-localization analyses on the list a few months ago, with some papers suggested already (https://list.nih.gov/cgi-bin/wa.exe?A2=ind1403&L=IMAGEJ&P=R31566&1=IMAGEJ&9=A&I=-3&J=on&d=No+Match%3BMatch%3BMatches&z=4)

Recommendable, especially if you use the JACoP tool:
Bolte and Cordelieres, J Microsc. 2006 Dec;224(Pt 3):213-32. A guided tour into subcellular colocalization analysis in light microscopy.

Furthermore:
Dunn et al. Am J Physiol Cell Physiol. 2011 Apr;300(4):C723-42. A practical guide to evaluating colocalization in biological microscopy.

There are many more papers regarding the topic, e.g.
from Elise Stanley's lab, Ingela Parmryd and Jeremy Adler, and many more I might not be aware of.

So, in the worst case I misunderstood your question and overloaded you with unnecessary answers (but they might be helpful to others). In the best case you have a lot of reading suggestions and potentially a clearer picture of how you want to start your analysis.

Kind regards,
Jan

--
CEO: Dr. rer. nat. Jan Brocher
phone: +49 (0)6234 917 03 39
mobile: +49 (0)176 705 746 81
e-mail: [hidden email]
info: [hidden email]
inquiries: [hidden email]
web: www.biovoxxel.de

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html
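Jan's suggestion of masking the original images with an auto-thresholded binary (the ImageJ route being >Edit >Paste Control or >Process >Image Calculator) amounts to the following operation, sketched here in plain Python on a flattened pixel list; the pixel values are made up for illustration:

```python
def mask_with_binary(pixels, binary):
    """Keep the original intensity where the thresholded binary marks
    foreground, and zero elsewhere -- the same effect as combining an
    8-bit image with its 0/255 mask via an AND in the Image Calculator."""
    return [v if b else 0 for v, b in zip(pixels, binary)]

original = [12, 200, 7, 180, 90]
binary   = [0, 1, 0, 1, 0]   # e.g. the foreground map from an auto-threshold
masked = mask_with_binary(original, binary)
```

The masked channel keeps real intensities only inside thresholded regions, which is what makes it usable as input for an intensity-based colocalization measure restricted to segmented objects.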
I think that Jan hit it right on the head. I just want to add that if you
don't need to automate the analysis, and you're getting segmentations that you want, then your method is sound. For images with objects significantly brighter than the background and of uniform brightness, manual thresholding works great. You could use Fiji's thresholding plugin to test 25 methods in tandem, and maybe you'll find that several of them also segment your image nicely. If that's the case, then you could switch, or at least mention in the paper the other methods that worked.

I just released a preprint on this exact subject applied to SEM images of nanoparticles, and I think it's relevant enough that I'm going to share it shamelessly. The conclusions that Jan reached are basically the same as what I said in the Thresholding section of my paper. The paper also goes on to look at other types of segmentation, and at how to classify objects once you've segmented them.

https://peerj.com/preprints/671/

On Mon, Dec 8, 2014 at 8:39 AM, BioVoxxel <[hidden email]> wrote:
> [Jan's message quoted in full; trimmed — see above]

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html |
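Adam's suggestion above — run several automatic methods side by side and see whether they agree — can be sketched in Python. This is an illustrative sketch only, not any poster's actual code: the mean and IsoData (iterative intermeans) rules below are simplified pure-Python reimplementations of two of the histogram-based methods ImageJ offers, and the image is a synthetic pixel list.

```python
def mean_threshold(pixels):
    """Threshold at the mean intensity (ImageJ's 'Mean' rule)."""
    return sum(pixels) / len(pixels)

def isodata_threshold(pixels, eps=0.5):
    """Iterative intermeans (IsoData): move the threshold to the
    midpoint of the background and foreground means until stable."""
    t = sum(pixels) / len(pixels)
    while True:
        bg = [p for p in pixels if p <= t]
        fg = [p for p in pixels if p > t]
        new_t = (sum(bg) / len(bg) + sum(fg) / len(fg)) / 2
        if abs(new_t - t) < eps:
            return new_t
        t = new_t

def mask(pixels, t):
    """Binary signal/no-signal classification at threshold t."""
    return [p > t for p in pixels]

# Synthetic image: dark background with some bright objects.
image = [10] * 500 + [15] * 100 + [200] * 120

t_mean = mean_threshold(image)
t_iso = isodata_threshold(image)

# If independent methods produce near-identical masks, that agreement
# is itself evidence that the segmentation is robust to the choice.
pairs = zip(mask(image, t_mean), mask(image, t_iso))
agreement = sum(a == b for a, b in pairs) / len(image)
print(t_mean, t_iso, agreement)
```

The two cutoffs differ numerically, yet both fall in the empty gap between the background and object modes, so the resulting masks coincide — which is exactly the kind of cross-check one could report in a methods section.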
Hi everyone,
I invite interested community members to contribute these valuable insights to the ImageJ wiki's "image processing principles" page at:

http://imagej.net/IP_Principles

I think it would make this information easier to find, benefiting many researchers.

Regards,
Curtis

On Mon, Dec 8, 2014 at 1:51 PM, Adam Hughes <[hidden email]> wrote:
> [earlier messages quoted in full; trimmed — see above]

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html |
Hi everyone,
just to make things more confusing: I am writing a macro which segments yeast cells from >10000 images. Since these images do not all have the same number of cells (this can vary from 50 to 500 cells per image) and were obtained using slightly different microscope settings, I need to change the threshold between images to get the maximum possible number of cells from each.

Do you think this is reasonable? I am only trying to get the total cell number.

Thanks |
Hi.
As for me, I think you have no other option but to use one of the automatic thresholding methods. But use the same method for all 10000 images.

2014-12-09 6:24 GMT+03:00 giuseppe3 <[hidden email]>:
> [quoted message trimmed — see above]

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html |
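The advice above — fix one automatic method and let its cutoff adapt to each image's histogram — can be sketched in Python. This is an illustrative pure-Python reimplementation of Otsu's method (one common choice; not giuseppe3's actual macro), applied to two synthetic "images" acquired under different settings:

```python
def otsu_threshold(pixels, levels=256):
    """Return the Otsu threshold: the cutoff that maximizes the
    between-class variance of the intensity histogram."""
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    total = len(pixels)
    total_sum = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w_bg, sum_bg = 0, 0.0
    for t in range(levels):
        w_bg += hist[t]          # pixels classified as background at t
        if w_bg == 0:
            continue
        w_fg = total - w_bg      # pixels classified as foreground at t
        if w_fg == 0:
            break
        sum_bg += t * hist[t]
        mean_bg = sum_bg / w_bg
        mean_fg = (total_sum - sum_bg) / w_fg
        var_between = w_bg * w_fg * (mean_bg - mean_fg) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# Two images of the same kind of sample under different illumination:
# the *method* stays fixed, only the cutoff adapts per image.
dim_image = [10] * 500 + [60] * 100      # dim background, dim cells
bright_image = [40] * 500 + [200] * 100  # brighter acquisition
t1 = otsu_threshold(dim_image)
t2 = otsu_threshold(bright_image)
print(t1, t2)
```

Both thresholds isolate the same 100 "cell" pixels even though the absolute cutoffs differ, which is the point: applying one algorithm uniformly is reproducible in a way that per-image manual cutoffs are not.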
As long as reproducibility is fulfilled, your method is compliant with scientific principles and should be acceptable to most publishers. However, as manual thresholding does not easily lend itself to reproducibility, an automated method is to be preferred.
Automatic methods, like manual ones, are subject to variability. Both accuracy and precision need to be quantified and reported. We usually report the thresholding sensitivity, that is, the change in response to a unit change in threshold, within the range of automatically or manually obtained thresholds. I find this is often neglected, rendering the results difficult, if not impossible, to interpret with any confidence.

Sincerely,
Pål Baggethun
Elkem AS, Materials Characterization Group

-----Original Message-----
From: ImageJ Interest Group [mailto:[hidden email]] On Behalf Of Anders Lunde
Sent: 6. desember 2014 16:00
To: [hidden email]
Subject: Is manual thresholding methods accepted by scientific journals?

[original question quoted in full; trimmed — see above]

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html |
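Pål's sensitivity measure can be illustrated with a short Python sketch. The numbers are hypothetical, and "area above threshold" stands in for whatever quantity is actually reported; the point is that a threshold sitting in the flat gap between histogram modes is insensitive, while one on a peak is not.

```python
def area_above(pixels, t):
    """Fraction of pixels classified as signal at threshold t."""
    return sum(p > t for p in pixels) / len(pixels)

def threshold_sensitivity(pixels, t, dt=1):
    """Change in the measured area per unit change in threshold,
    estimated by a central difference around t."""
    return (area_above(pixels, t - dt) - area_above(pixels, t + dt)) / (2 * dt)

# Synthetic channel: a dark background mode plus a brighter signal mode.
image = [8, 9, 10, 11, 12] * 200 + [120, 125, 130] * 50

# A threshold in the empty region between the modes is insensitive...
stable = threshold_sensitivity(image, 60)
# ...while one inside the background peak is highly sensitive.
unstable = threshold_sensitivity(image, 10)
print(stable, unstable)
```

Reporting this kind of number over the range of plausible thresholds tells a reviewer how much (or how little) the conclusions depend on the exact cutoff chosen.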
Hi Curtis,
I take your invitation and will start to put some information about the thresholding topic together. I might be able to do this over Chistmas and New Year. But I am already glad about any additional information in advance, like publications regarding qualitative and quantitative assessment of thresholding qualities, etc. Since a well checked procedure for such a guideline is critical, once the basic information is setup I will drop a message to invite especially people involved in threshold development or image segmentation to take my basic text skeleton and further improve it via interactive discussions to finally end up with a feedback controlled guideline. cheers, Jan 2014-12-08 22:03 GMT+01:00 Curtis Rueden <[hidden email]>: > Hi everyone, > > I invite interested community members to contribute these valuable insights > to the ImageJ wiki's "image processing principles" page at: > > http://imagej.net/IP_Principles > > I think it would make this information easier to find, benefiting many > researchers. > > Regards, > Curtis > > On Mon, Dec 8, 2014 at 1:51 PM, Adam Hughes <[hidden email]> > wrote: > > > I think that Jan hit it right on the head. I just want to add that if > you > > don't need to automate the analysis, and you're getting segmentations > that > > you want, then your method is sound. For images with objects > significantly > > brighter than the background and of uniform brightness, manual > thresholding > > works great. You could use Fiji's thresholding plugin to test 25 methods > > in tandem, and maybe you'll find that several of them also segment your > > image nicely. If that's the case, then you could switch, or at least in > > the paper mention the other methods that worked. > > > > I just released a preprint on this exact subject when applied to SEM > images > > of nanoparticles, and I think it's relevant enough that I'm going to > share > > it shamelessly. 
The conclusions that Jan reached are basically the same > as > > what I said in the Thresholding section of my paper. The paper also goes > > on to look at other types of segmentation, and how to classify objects > once > > you've segmented them. > > > > https://peerj.com/preprints/671/ > > > > On Mon, Dec 8, 2014 at 8:39 AM, BioVoxxel <[hidden email]> > > wrote: > > > > > Dear Anders, dear all, > > > > > > potentially I can also add some personal opinion to this indeed > > interesting > > > and moreover important question, since many people face the problem on > > > deciding for a "proper" method (however you want to define this) as I > > often > > > figure out talking to students about these topics. > > > > > > Your question actually has two parts which I think need to be addressed > > > (thresholding and co-ocalization): > > > > > > 1.) Thresholding: > > > > > > Generally, I would not (never) decide on a method just because you will > > > "get away with it" or because a specific journal would accept it. Just > > > because it will be accepted by a few people does still not necessarily > > mean > > > that it is a suitable or appropriate analysis method. That said, here > > some > > > considerations. > > > > > > As Dimiter pointed it out already.... thresholding is not easy and > > > sometimes might not lead you to a satisfyingly accurate result. > > > Nevertheless, I had so far mostly good experiences using automatic > > > thresholding methods. But in many cases you will only get a good > feature > > > extraction if you pre-process the images. This in turn might/will lead > to > > > an alteration of feature outlines, forms and sizes. So, you first need > to > > > figure out such processing steps and check if this would still be > > > acceptable regarding the features you need to extract. 
A possible help > in > > > deciding for a suitable filtering and thresholding might be the "Filter > > > Check" as well as the "Threshold Check" (using the available auto > > > thresholds in ImageJ and Fiji) from the BioVoxxel Toolbox ( > > > http://fiji.sc/BioVoxxel_Toolbox). > > > I further agree with Michael Schell that it is worth investing the time > > in > > > finding suitable auto-thresholds if possible, or one of the methods > > Dimiter > > > mentioned. Because you reduce user bias and improve the extraction > > result. > > > > > > Manual thresholds are not really suitable if you try to analyze a > bigger > > > set of data for several reasons. > > > In terms of comparability you need to apply the same threshold to all > > > images in your experiment to keep the user bias at least a little lower > > > (besides your decision for the initial threshold). Due to natural > > > variability in your images a manual threshold with one or even two (an > > > upper and lower) cut-off value(s) will not work on a full set of data > > under > > > most conditions. An automatic threshold might also fail to achieve this > > but > > > since those algorithms consider the image histogram they account for > > those > > > variabilities and the chance to find suitable cut-offs is way higher. > > > If you manually threshold each image with different cut-off values you > > > actually loose comparability in your experiment completely due to > massive > > > user bias. > > > > > > Another problem, since you where talking about separating signal from > > > no-signal, is identifying a suitable separation of those two parts. > Your > > > background in a intensity based fluorescent image is in most cases very > > > dark. Our vision unfortunately is very prone to mis-interpret different > > > intensities which is getting especially difficult the darker those are. > > > Additionally, we perceive different colors with different visual > > > sensitivities. 
Thus, it is also important if you look at a grayscale > > image > > > (with a gray LUT) or at the same image in one of different false > colors. > > > Gray is always preferable in this context because you will see > > differences > > > best. > > > > > > Nevertheless, manual thresholding is very subjective and should be > > avoided > > > whenever possible. Even without those it is already difficult enough to > > > achieve objective analyses for many studies (my opinion). > > > > > > > > > 2.) Co-localization > > > If I got it correctly, your initial aim is a co-localization study. In > > this > > > context, I would not rely only on a pixel based overlap determination. > > This > > > might give you a hint and is partially used during object-based > > > co-localization studies. Nevertheless, you should consider additional > > > parameters like resolution limit and your actual image resolution and > the > > > intensity distribution in your images/features. Therefore, I would pay > > > attention first to a proper imaging setup, with a good nyquist-sampled > > > image and usage of the full dynamic range (besides other parameters). > To > > > this end the following paper might be helpful: > > > > > > Jennifer C. Waters, J Cell Biol. Jun 29, 2009; 185(7): 1135-1148. > > Accuracy > > > and precision in quantitative fluorescence microscopy. > > > > > > Analysis wise there are two very excellent tools available in Fiji > which > > is > > > the Coloc2 and the JACoP. The latter implements two object-based > methods > > > including a thresholding. In this context it might also be a > possibility > > to > > > use binary images as result of an auto-thresholding and mask your > > original > > > images with them (e.g. with >Edit >Paste Control or the >Process > > Image > > > Calculator). This is similar in applying a ROI as possible in the > Coloc2 > > > plugin. > > > As a suggestion for co-localization studies... 
I would not rely on a single output method only, but rather combine several suitable ones, as is possible in Coloc2 and JACoP, to get better confidence about a potential co-localization.

So as not to create more confusion, here is some interesting reading and important literature regarding co-localization:

There was an interesting discussion about different co-localization analyses on the list a few months ago, with some papers suggested already (https://list.nih.gov/cgi-bin/wa.exe?A2=ind1403&L=IMAGEJ&P=R31566&1=IMAGEJ&9=A&I=-3&J=on&d=No+Match%3BMatch%3BMatches&z=4).

Recommendable, especially if you use the JACoP tool: Bolte and Cordelieres, J Microsc. 2006 Dec;224(Pt 3):213-32. A guided tour into subcellular colocalization analysis in light microscopy.

Furthermore: Dunn et al., Am J Physiol Cell Physiol. 2011 Apr;300(4):C723-42. A practical guide to evaluating colocalization in biological microscopy.

There are many more papers regarding the topic, e.g. from Elise Stanley's lab, Ingela Parmryd and Jeremy Adler, and many more I might not be aware of.

So, in the worst case I misunderstood your question and overloaded you with unnecessary answers (but they might be helpful to others). In the best case you have a lot of reading suggestions and potentially a clearer picture of how you want to start your analysis.

Kind regards,
Jan
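To make the threshold dependence of pixel-based overlap explicit, here is a hypothetical sketch (Python/NumPy for illustration only; JACoP and Coloc2 compute these properly) of Manders' coefficients, i.e. the fraction of each channel's above-threshold intensity that lies within the other channel's mask:

```python
import numpy as np

def manders_coefficients(ch1, ch2, t1, t2):
    """M1, M2: fraction of each channel's above-threshold intensity
    located where the other channel is also above threshold."""
    mask1, mask2 = ch1 > t1, ch2 > t2
    overlap = mask1 & mask2
    m1 = ch1[overlap].sum() / ch1[mask1].sum()
    m2 = ch2[overlap].sum() / ch2[mask2].sum()
    return m1, m2

# toy two-channel image: channel 2 covers half of channel 1's signal
ch1 = np.zeros((10, 10)); ch1[2:6, 2:6] = 100.0
ch2 = np.zeros((10, 10)); ch2[4:8, 2:6] = 100.0
m1, m2 = manders_coefficients(ch1, ch2, 50, 50)   # both 0.5 here
```

Shifting either cut-off changes both coefficients, which is why the choice of threshold has to be justified and reported.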
--
CEO: Dr. rer. nat. Jan Brocher
phone: +49 (0)6234 917 03 39
mobile: +49 (0)176 705 746 81
e-mail: [hidden email]
info: [hidden email]
inquiries: [hidden email]
web: www.biovoxxel.de

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html
|
Hi Jan,
> Since a well checked procedure for such a guideline is critical, once
> the basic information is set up I will drop a message to invite
> especially people involved in threshold development or image
> segmentation to take my basic text skeleton and further improve it via
> interactive discussions, to finally end up with a feedback-controlled
> guideline.

Fantastic idea! With information like this, there is definitely a careful balance to be struck. One option would be to pursue the matter in the same way that Wikipedia does: by citing external sources, which in this case could be links to discussion threads of this mailing list.

Please let me know if there is anything you need on the technical side, such as additional MediaWiki plugins, to facilitate this project.

Regards,
Curtis

On Tue, Dec 9, 2014 at 4:08 PM, BioVoxxel <[hidden email]> wrote:

Hi Curtis,

I take your invitation and will start to put some information about the thresholding topic together. I might be able to do this over Christmas and New Year. But I am already glad about any additional information in advance, such as publications regarding qualitative and quantitative assessment of thresholding quality, etc.

Since a well checked procedure for such a guideline is critical, once the basic information is set up I will drop a message to invite especially people involved in threshold development or image segmentation to take my basic text skeleton and further improve it via interactive discussions, to finally end up with a feedback-controlled guideline.

cheers,
Jan

2014-12-08 22:03 GMT+01:00 Curtis Rueden <[hidden email]>:

Hi everyone,

I invite interested community members to contribute these valuable insights to the ImageJ wiki's "image processing principles" page at:

http://imagej.net/IP_Principles

I think it would make this information easier to find, benefiting many researchers.
Regards,
Curtis

On Mon, Dec 8, 2014 at 1:51 PM, Adam Hughes <[hidden email]> wrote:

I think that Jan hit it right on the head. I just want to add that if you don't need to automate the analysis, and you are getting the segmentations that you want, then your method is sound. For images with objects significantly brighter than the background and of uniform brightness, manual thresholding works great. You could use Fiji's thresholding plugin to test 25 methods in tandem, and maybe you'll find that several of them also segment your image nicely. If that's the case, then you could switch, or at least mention the other methods that worked in the paper.

I just released a preprint on this exact subject as applied to SEM images of nanoparticles, and I think it's relevant enough that I'm going to share it shamelessly. The conclusions that Jan reached are basically the same as what I said in the Thresholding section of my paper. The paper also goes on to look at other types of segmentation, and at how to classify objects once you've segmented them.

https://peerj.com/preprints/671/

On Mon, Dec 8, 2014 at 8:39 AM, BioVoxxel <[hidden email]> wrote:

Dear Anders, dear all,

Perhaps I can also add a personal opinion to this indeed interesting and, moreover, important question, since many people face the problem of deciding on a "proper" method (however you want to define that), as I often notice when talking to students about these topics.

Your question actually has two parts which I think need to be addressed (thresholding and co-localization):

1.)
Thresholding:

Generally, I would never decide on a method just because you will "get away with it" or because a specific journal would accept it. Just because it will be accepted by a few people does not necessarily mean that it is a suitable or appropriate analysis method. That said, here are some considerations.

As Dimiter pointed out already, thresholding is not easy and sometimes might not lead you to a satisfyingly accurate result. Nevertheless, I have so far had mostly good experiences with automatic thresholding methods. But in many cases you will only get a good feature extraction if you pre-process the images. This in turn might lead to an alteration of feature outlines, forms and sizes. So you first need to figure out such processing steps and check whether the result would still be acceptable regarding the features you need to extract.

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html
|
Perhaps I can add my two cents to this thread.
Thresholding is, and always was, the central problem in image analysis. Every analysis necessitates a definition of the region of interest, and this definition is the crucial point; it is highly subjective. Thresholding, mostly applied to intensity images, is only one method of making that decision. Still, deciding which pixel belongs to which class is always a threshold applied to a certain value.

"Are manual thresholding methods accepted by scientific journals?" is not a question with a yes-or-no answer. It is necessary to consider the subsequent measuring steps. For example, if the thresholded pixels are directly counted (area) or measured in terms of connected objects (number), and hence are directly influenced by the thresholding, a manual thresholding needs a good discussion of its influence. If the thresholded pixels are used for more elaborate measurements, such as the number or area of certain substructures per connected set of pixels (number/object, area/object), the direct influence of thresholding is reduced. Still, the possible loss of interesting objects, as well as an unbalanced number of selected objects, has to be thoroughly considered in the discussion.

To conclude, the quality of the description of the method and of its discussion are the main points for acceptance of the proposed method. And, in my experience, this is too often forgotten by prospective authors.

Karsten
|
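The difference between the two kinds of measurement Karsten distinguishes, thresholded area versus number of connected objects, can be sketched as follows (hypothetical Python/NumPy illustration; in ImageJ the object count corresponds roughly to running Analyze Particles on the thresholded mask):

```python
import numpy as np
from collections import deque

def count_area_and_objects(mask):
    """Return (total thresholded area in pixels,
    number of 4-connected objects) for a binary mask."""
    area = int(mask.sum())
    seen = np.zeros_like(mask, dtype=bool)
    objects = 0
    for i, j in zip(*np.nonzero(mask)):
        if seen[i, j]:
            continue
        objects += 1                      # found a new connected component
        q = deque([(i, j)])
        seen[i, j] = True
        while q:                          # breadth-first flood fill
            y, x = q.popleft()
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                        and mask[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    q.append((ny, nx))
    return area, objects

# two separate 2x2 blobs: the same area could come from one object or many
mask = np.zeros((8, 8), dtype=bool)
mask[1:3, 1:3] = True
mask[5:7, 5:7] = True
area, n = count_area_and_objects(mask)    # area 8, objects 2
```

A shifted threshold changes the area immediately, while the object count may stay stable until objects merge or vanish, which is why the two measurements tolerate threshold uncertainty differently.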