Hi Juan,
here is a macro that converts 32-bit to 16-bit and creates a linear
calibration:
run("Conversions...", "scale");
resetMinAndMax();
getMinAndMax(min, max);
run("16-bit");
run("Calibrate...", "function=[Straight Line] unit=[Gray Value] text1=
[0\n65535] text2=["+min+"\n"+max+"]");
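As a quick sanity check (my addition, not part of the original macro), you can
print the calibrated endpoints afterwards; they should equal the former 32-bit
minimum and maximum:
print("calibrated 0 = " + calibrate(0));         // should equal min
print("calibrated 65535 = " + calibrate(65535)); // should equal max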
If you want to keep the original calibration (this works only if the
calibration is linear):
run("Conversions...", "scale");
min=calibrate(0);
max=calibrate(65535);
run("32-bit");
//convert to 32 bit, do whatever you like
setMinAndMax(min,max);
run("16-bit");
run("Calibrate...", "function=[Straight Line] unit=[Gray Value] text1=
[0\n65535] text2=["+min+"\n"+max+"]");
Beware of line breaks introduced by the mailer; the last line of both
macros starts with run("Calibrate..." and should be only one line.
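If you want to check that the round trip really preserves the calibrated
values, here is a minimal self-contained sketch (my addition; the test pixel
at (10, 10) is arbitrary, and it assumes a linearly calibrated 16-bit image is
open):
x = 10; y = 10;                     // arbitrary test pixel
before = calibrate(getPixel(x, y)); // calibrated value before the round trip
run("Conversions...", "scale");
min = calibrate(0);
max = calibrate(65535);
run("32-bit");
setMinAndMax(min, max);
run("16-bit");
run("Calibrate...", "function=[Straight Line] unit=[Gray Value] text1=[0\n65535] text2=["+min+"\n"+max+"]");
after = calibrate(getPixel(x, y));  // calibrated value after the round trip
print("before: " + before + "  after: " + after);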
In the long term, it would be nice to have ImageJ create calibrated
8-bit and 16-bit images when converting from 32-bit.
Michael
________________________________________________________________
On 20 Apr 2010, at 11:37, Juan Francisco wrote:
> Dear users: I open a 16-bit image with ImageJ to calibrate it in
> ImageJ, but previously I need to convert the original 16-bit image
> into a 32-bit one. So, finally I get a calibrated 32-bit image;
> however, I need to convert this final image into a 16-bit image while
> keeping the calibrated values on each pixel. Please, does anyone know
> how to do this with ImageJ? Comments are very appreciated. Thanks a
> lot!! JFC