Dear all,
When converting a composite 8-bit stack to 16-bit I get different results than when I split the channels and convert. Run for example: run("Set Measurements...", "area mean integrated display redirect=None decimal=3"); run("Confocal Series (2.2MB)"); run("Duplicate...", "duplicate"); selectWindow("confocal-series.tif"); makeRectangle(133, 227, 56, 64); run("Measure"); run("16-bit"); run("Measure"); selectWindow("confocal-series-1.tif"); run("Split Channels"); close(); run("Restore Selection"); run("Measure"); run("16-bit"); run("Measure"); Also the next macro shows some strange behaviour when converting between 8 and 16-bit run("Confocal Series (2.2MB)"); makeRectangle(130, 224, 64, 66); run("Measure"); run("16-bit"); run("Measure"); run("8-bit"); run("Measure"); run("16-bit"); run("Measure"); Best wishes Kees Dr Ir K.R. Straatman Senior Experimental Officer Advanced Imaging Facility Centre for Core Biotechnology Services University of Leicester http://www2.le.ac.uk/colleges/medbiopsych/facilities-and-services/cbs/lite/aif -- ImageJ mailing list: http://imagej.nih.gov/ij/list.html |
On Wednesday 25 Mar 2015 23:14:27 Straatman, Kees wrote:
> run("Confocal Series (2.2MB)");
> makeRectangle(130, 224, 64, 66);
> run("Measure");
> run("16-bit");
> run("Measure");
> run("8-bit");
> run("Measure");
> run("16-bit");
> run("Measure");

Hi Kees,

If you untick the option 'Scale When Converting', then the subsequent conversions between 8-bit and 16-bit seem to be OK in the latest build, but the first conversion to 16-bit returns a different value than the original 8-bit. I am guessing here that it might be related to the bin size of the histogram (which is set to 0.812), and that the Measure command uses the histogram (which is now slightly different due to the different bin sizes) to compute the average. However, I have no explanation for why the maximum greyscale is so different between the original 8-bit (187) and the 16-bit (207) versions.

Cheers

Gabriel
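Gabriel's binning guess can be sketched numerically. The following is a toy illustration only, not ImageJ's actual statistics code: it shows how a mean estimated from a 256-bin histogram can drift slightly from the exact pixel mean, because every pixel in a bin is counted at that bin's representative value.

```python
def exact_mean(pixels):
    return sum(pixels) / len(pixels)

def binned_mean(pixels, n_bins=256):
    # Estimate the mean from an n-bin histogram over [min, max].
    lo, hi = min(pixels), max(pixels)
    width = (hi - lo) / n_bins        # e.g. 207 / 256 ~ 0.81 for a 0..207 range
    counts = [0] * n_bins
    for v in pixels:
        i = min(int((v - lo) / width), n_bins - 1)
        counts[i] += 1
    # Represent each bin by its lower edge (one common convention).
    total = sum(c * (lo + i * width) for i, c in enumerate(counts))
    return total / len(pixels)

pixels = [0, 3, 50, 120, 187, 187, 200, 207]
print(exact_mean(pixels))   # 119.25
print(binned_mean(pixels))  # slightly lower, due to bin quantisation
```

The two means differ by a fraction of a grey level, which is the same order of discrepancy being discussed here; whether ImageJ's Measure actually takes this path for 16-bit images is exactly the open question in this thread.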
On Mar 25, 2015, at 7:14 PM, Straatman, Kees (Dr.) <[hidden email]> wrote:
> Dear all,
>
> When converting a composite 8-bit stack to 16-bit I get different results than when I split the channels and convert. Run for example:

This bug is fixed in the latest ImageJ daily build (1.49q20).

-wayne
Hi Gabriel and Wayne,
Wayne, many thanks for the fix.

Gabriel, indeed. I had forgotten to check the effect of 'Scale when converting' on the second macro. It does the trick.

Best wishes

Kees

-----Original Message-----
From: ImageJ Interest Group [mailto:[hidden email]] On Behalf Of Gabriel Landini
Sent: 26 March 2015 10:27
To: [hidden email]
Subject: Re: Convert from 8-bit to 16-bit problems
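For readers hitting the same surprise: the effect of 'Scale when converting' can be modelled in a few lines. This is a sketch under the assumption that scaled 16-bit-to-8-bit conversion maps the current display range linearly onto 0..255, unscaled conversion simply clamps, and widening 8-bit to 16-bit leaves pixel values unchanged; it is not ImageJ's actual converter code.

```python
def to16(pixels8):
    # 8->16 widens the pixel type; values themselves are unchanged.
    return list(pixels8)

def to8_unscaled(pixels16):
    # Without scaling, values are just clamped into 0..255.
    return [max(0, min(255, v)) for v in pixels16]

def to8_scaled(pixels16, display_min, display_max):
    # With scaling, the display range display_min..display_max
    # is mapped linearly onto 0..255.
    scale = 255.0 / (display_max - display_min)
    return [max(0, min(255, int(round((v - display_min) * scale))))
            for v in pixels16]

orig = [0, 10, 100, 187]        # an 8-bit ROI whose maximum is 187
up = to16(orig)
print(to8_unscaled(up))         # [0, 10, 100, 187] -- round trip is lossless
print(to8_scaled(up, 0, 187))   # [0, 14, 136, 255] -- values are stretched
```

This matches what the thread observes: with scaling off, repeated 8/16-bit conversions leave the measurements alone, while a scaled down-conversion stretches the values (187 becoming 255) and so changes every subsequent measurement.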