Hello,
I am new to ImageJ and I have a very general question about how grayscale images are displayed on a computer monitor. I use a 12-bit monochrome CCD camera to capture fluorescent microscope images and save them as .tiff files. When I open my images in ImageJ, the image is displayed as 8-bit (0-255) on the monitor. When I hover over a pixel in the image with the mouse pointer, the 12-bit value (0-4095) that was captured by the camera is listed in the ImageJ toolbar.

I looked at the ImageJ documentation and read up a little on digital displays, and it seems that all standard computer monitors display grayscale images in 8-bit only. Why is this so? Is it because of hardware limitations and costs? Is it because the human eye can only detect 256 discrete shades of gray? If that is not the case, is there not some loss of information in the visual image when it gets displayed at 8-bit? Am I losing some of the "true" contrast when I look at my 12-bit images in ImageJ?

I did a Google search on "12-bit grayscale displays" and found some sites that sell special X-ray and MRI monitors with 12-bit or even 16-bit grayscale resolution. If these kinds of monitors exist, does that mean the human eye can in fact detect more than 256 shades of gray?

Thank you in advance for any replies!

John O
Hi
Is there a quick way to find out what the DICOM header size is for my files? Thanks, Anna
> Is there a quick way to find out what the DICOM header size is for my files?

Enable debugging in Edit>Options>Misc, open the DICOM, and the line that starts with "offset:" in the Log window will contain the size of the DICOM header.

-wayne
Thanks Wayne, actually I also managed to spot it in the Image > Show Info menu. It didn't explicitly say offset, but it looked like the right kind of number :)

Cheers,
Anna

On Jul 6, 2006, at 5:10 PM, Wayne Rasband wrote: [...]
In reply to this post by John Oreopoulos
Hi John,
I also have an interest in "High Dynamic Range" (HDR) and "Extended Dynamic Range" (EDR) imaging.

ImageJ uses lookup tables (LUTs) to map ranges of intensities down to the 8-bit values that common video cards and digital monitors can understand. You can adjust the LUT values with ImageJ commands in the menu. That's what happens when you use menu commands such as Image>Adjust>Brightness/Contrast. That way you can stretch out any particular portion of the 4096-value range to the 256-element output range. If you are trying to visualize subtle intensity differences, that is all you need to do.

However, displaying an HDR image "accurately" is trickier. Human vision can detect a brightness difference of about 1-2%, so standard monitors provide a grayscale in which each intensity is roughly 1% greater than the last. Note that these are equal ratio differences, not equal absolute differences. The contrast control on your monitor affects the range (gain), and the brightness control affects the black offset.

Note the difference in response between your camera and your display: while your camera has a linear response to light, your display does not have a linear response to its input signal. Instead, it is mapped to have a response tuned to human visual difference detection. So preserving the "correct" visual presentation of intensities from your camera requires an inverse nonlinear correction. The ImageJ menu command Process>Math>Gamma... set to about 0.45 and applied to your 12-bit image can provide an approximate correction, but to do it accurately requires calibrating your monitor so you can calculate the correct correction factors. Images from standard color digital cameras are already encoded with the same nonlinear "Standard RGB" (sRGB) scale, but scientific cameras with extended ranges, such as yours, are not.
Here are some links for further info:

- Weber's Law of Just Noticeable Difference
- Non-Linear Scale-Factor Methods
- Gamma FAQ - by Poynton
- Monitor calibration & gamma
- Is 24 bits enough?
- Tone mapping - Wikipedia, the free encyclopedia

Hope this helps.

--
Harry Parker
Senior Systems Engineer
Dialog Imaging Systems, Inc.

----- Original Message ----
From: John Oreopoulos <[hidden email]>
Sent: Thursday, July 6, 2006 2:01:19 PM
Subject: grayscale displays and human vision

[...]
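Harry's two-step recipe — stretch a window of the 12-bit range down to 8 bits, then apply a ~0.45 gamma to undo the display's nonlinearity — can be sketched in NumPy. This is an illustration only, not ImageJ's actual implementation; the function name and the sample pixel values are made up for the example:

```python
import numpy as np

def gamma_encode_12bit(img12, gamma=0.45):
    """Map a linear 12-bit image (0-4095) to 8-bit display values,
    applying the ~0.45 power that approximately inverts a monitor's
    nonlinear response (the value suggested for Process>Math>Gamma)."""
    normalized = np.clip(img12, 0, 4095) / 4095.0
    encoded = normalized ** gamma          # compresses shadows, sRGB-like
    return np.round(encoded * 255).astype(np.uint8)

# A few sample 12-bit values: note how the dark end gets expanded.
ramp = np.array([0, 64, 1024, 4095])
print(gamma_encode_12bit(ramp))           # [  0  39 137 255]
```

Note that the dark value 64 (about 1.6% of full scale) maps to 39, not 4: the gamma curve spends far more of the 8-bit output range on shadows, which is exactly where linear coding wastes codes.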
In reply to this post by John Oreopoulos
The human eye can perceive far more than 256 levels of gray, especially
if you consider accommodation (the eye's mechanisms for adapting to very dark or very bright environments). However, 256 gray levels is a good match for the dynamic range of common scenes. It's also very convenient for computers. :-)

Radiologists look at uncommon scenes, and have been trained to have uncommon perceptual abilities. For them, it can make sense to pay a high premium for a little extra detail. For the rest of us, it almost never does.

My group works with small-animal volume images (CT, MRI) with up to 15-bit dynamic range. To take advantage of that range, we usually use window-level adjustments to stretch contrast in areas of interest, pegging uninteresting areas to 0 or 255. In other words, we stretch contrast so that a small portion of the data histogram is spread across the display's complete dynamic range. In my experience, trained radiologists also do the same thing when working with digital images -- they can quickly "steer" the window-level settings to emphasize the detail they want to see. Even for them, it's easier to see low-contrast detail if you first stretch its contrast.

If you're interested in human visual perception and how best to support it, I highly recommend Colin Ware's excellent _Information Visualization: Perception for Design_, ISBN 1558608192, now in its second edition.

On Jul 6, 2006, at 2:01 PM, John Oreopoulos wrote: [...]

--
-jeffB (Jeff Brandenburg, Duke Center for In-Vivo Microscopy)
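The window-level adjustment Jeff describes — spread a small slice of the data histogram across the full display range, pegging everything outside it — is easy to sketch in NumPy. A hedged illustration: the window limits and sample values below are invented, and this is not the code any particular viewer uses:

```python
import numpy as np

def window_level(img, low, high):
    """Linearly stretch the [low, high] data window to 0-255,
    pegging values below the window to 0 and above it to 255."""
    img = np.asarray(img, dtype=np.float64)
    scaled = (img - low) / (high - low) * 255.0
    return np.clip(np.round(scaled), 0, 255).astype(np.uint8)

# 12-bit data with an interesting region between 1000 and 2000:
data = np.array([0, 900, 1000, 1500, 2000, 4095])
print(window_level(data, 1000, 2000))     # [  0   0   0 128 255 255]
```

Everything outside the window is sacrificed, but the 1000-count slice of interest now occupies all 256 display levels instead of ~62.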
In reply to this post by John Oreopoulos
I'm not an authority on human vision, but I read somewhere that the human eye can distinguish approximately 16 different gray levels. This is far fewer than the 256 levels of 8-bit; in fact it is only 4 bits.
I tested this in a small demonstration with a group of colleagues who spend a lot of time reading x-ray films of the hands and feet, and 16 levels appeared to be about what the best could do.

Try it yourself. You can compose images with gray scales of anything between 0 and 255. Code them, mix them up in random fashion, and ask your colleagues to tell you which is whiter (or blacker), comparing A to B, A to C, A to D, etc.

JTS

-------------- Original message ----------------------
From: Jeff Brandenburg <[hidden email]>
[...]
Hi there,
JTS wrote: [...]

There seems to be some confusion here between the smallest difference in brightness one can distinguish (say, a change of x percent) and the total number of such small differences that can appear in ONE scene (any dark adaptation would NOT count here, because then we are comparing different scenes), bounded at the ends by brightness too low to discern or so bright that it causes glare for the eye.

I do not know what brightness differences you used, but only 16 levels within the one-scene dynamic range is definitely too low! Take any BW picture in an image processing program and reduce the number of gray levels to 64 and later to only 16; you will start to see a difference at 64 levels and definitely at 16 levels. (HOWEVER, trying to reliably identify a certain level may be a different thing!)

The gamma FAQ http://www.poynton.com/notes/colour_and_gamma/GammaFAQ.htm has a good discussion of this (I hope I will not get sued by some underemployed advocate for citing from it):

<cite>
15. How many bits do I need to smoothly shade from black to white?

At a particular level of adaptation, human vision responds to about a hundred-to-one contrast ratio of luminance from white to black. Call these luminance values 100 and 1. Within this range, vision can detect that two luminance values are different if the ratio between them exceeds about 1.01, corresponding to a contrast sensitivity of one percent.

To shade smoothly over this range, so as to produce no perceptible steps, at the black end of the scale it is necessary to have coding that represents different luminance levels 1.00, 1.01, 1.02, and so on. If linear light coding is used, the "delta" of 0.01 must be maintained all the way up the scale to white. This requires about 9,900 codes, or about fourteen bits per component.

If you use nonlinear coding, then the 1.01 "delta" required at the black end of the scale applies as a ratio, not an absolute increment, and progresses like compound interest up to white. This results in about 460 codes, or about nine bits per component. Eight bits, nonlinearly coded according to Rec. 709, is sufficient for broadcast-quality digital television at a contrast ratio of about 50:1.

If poor viewing conditions or poor display quality restrict the contrast ratio of the display, then fewer bits can be employed.

If a linear light system is quantized to a small number of bits, with black at code zero, then the ability of human vision to discern a 1.01 ratio between adjacent luminance levels takes effect below code 100. If a linear light system has only eight bits, then the top end of the scale is only 255, and contouring in dark areas will be perceptible even in very poor viewing conditions.
</cite>

JW
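Poynton's code counts can be reproduced with a few lines of arithmetic. The sketch below just redoes his numbers (my "463" rounds his "about 460"; the 1% JND and 100:1 contrast ratio are his stated assumptions):

```python
import math

# Poynton's assumptions: 100:1 contrast ratio, ~1.01 JND ratio (1%).
contrast, jnd = 100.0, 1.01

# Linear-light coding: the 0.01 step needed at black must be kept
# all the way to white -> (100 - 1) / 0.01 codes.
linear_codes = (contrast - 1.0) / 0.01
print(linear_codes, math.ceil(math.log2(linear_codes)))       # 9900.0 14

# Nonlinear (ratio) coding: steps compound like interest, so the
# count is the log base 1.01 of the contrast ratio.
nonlinear_codes = math.log(contrast) / math.log(jnd)
print(round(nonlinear_codes), math.ceil(math.log2(nonlinear_codes)))  # 463 9
```

So linear coding needs ~14 bits to avoid visible steps, while ratio (gamma) coding needs only ~9 — which is why 8-bit nonlinear coding comes so close to sufficient.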
In reply to this post by John T. Sharp
On Jul 10, 2006, at 10:03 AM, John T. Sharp wrote:
> [...]

Well, sort of. Being able to identify unique shades *in isolation*, or in non-contiguous comparisons, is a very different problem from distinguishing contrast within a scene. In reality, I'd say that 16 is way too high, because our perception of gray levels is so tied up with foreground illumination, background illumination, and so forth. We've all probably seen the optical illusions where a 50% gray square against a white background looks darker than a 50% gray square against a black background.

As Joachim Wesner pointed out before I could finish this message :-), gray-level resolution is tested most severely in areas of smooth gradation. There's even a well-known term, "posterization", for the visual effect that you get when you reduce the number of steps too far.

If you're curious, load your favorite grayscale image into ImageJ, and try Process>Math>AND... with a value of 11110000 to reduce the image to 16 levels of gray. (The boats.gif sample image is a good one.) It's *very* easy to see the difference!

--
-jeffB (Jeff Brandenburg, Duke Center for In-Vivo Microscopy)
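The AND trick Jeff describes works by zeroing the low four bits of each pixel, so only the top four bits (16 distinct values) survive. A small NumPy sketch of the same operation (the sample pixel values are arbitrary):

```python
import numpy as np

def posterize_16(img8):
    """AND each 8-bit pixel with 0b11110000, keeping only the top four
    bits -- the same effect as ImageJ's Process>Math>AND with 11110000,
    which collapses 256 gray levels into 16."""
    return np.asarray(img8, dtype=np.uint8) & 0b11110000

pixels = np.array([0, 15, 16, 100, 200, 255], dtype=np.uint8)
print(posterize_16(pixels))               # [  0   0  16  96 192 240]

# The whole 0-255 range collapses to exactly 16 distinct levels:
print(len(np.unique(posterize_16(np.arange(256, dtype=np.uint8)))))  # 16
```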
In reply to this post by John T. Sharp
Hi,
On Mon, 10 Jul 2006, John T. Sharp wrote: [...]

That is not a fair test. Humans are really bad at estimating _absolute_ values, especially when there is a high variation in colour, but they are wonderful with _relative_ values, especially when there is next to no variation. Try this macro:

-- snip --
r1=192; r2=197;
w=400; h=400;
newImage("Cascade", "RGB White", w, h, 1);
makeRectangle(0, 0, w/2, h);
setForegroundColor(r1, r1, r1);
run("Fill");
makeRectangle(w/2, 0, w/2, h); // width w/2, so the right half exactly fills the image
setForegroundColor(r2, r2, r2);
run("Fill");
run("Select None");
-- snap --

Depending on the monitor, and your eyesight, you can make the distance between r1 and r2 even smaller. A colleague of mine could easily discern 192 from 194.

Ciao,
Dscho
In reply to this post by John Oreopoulos
Dear all,
following this thread, I just wrote a macro tool that might help in exploring images with a high dynamic range. It adjusts the display range to the local minimum and maximum. You can try it with the CT example image. It can help you find details in low-contrast regions.

http://rsb.info.nih.gov/ij/macros/tools/HDRexplorerTool.txt

Jerome

Quoting Johannes Schindelin <[hidden email]>: [...]
In reply to this post by Jeff Brandenburg
Up until a few years ago, after many Photoshop users complained that they could actually see banding due to Photoshop compensating for the monitor's factory gamma (~2.5) relative to their preferred working space (e.g., ~1.8), some expensive monitors together with display cards began offering 10-bit precision. I don't remember which monitors or display cards, but you can probably google it.

genuinely :o)
michael shaffer
SEM/MLA Research Coordinator <http://www.mun.ca/creait/maf/>
Inco Innovation Centre c/o Memorial University
St. John's, NL A1C 5S7

> On Jul 6, 2006, at 2:01 PM, John Oreopoulos wrote: [...]
> --
> -jeffB (Jeff Brandenburg, Duke Center for In-Vivo Microscopy)
In reply to this post by Jerome Mutterer
FYI
Here is a good starting point:

Digital Imaging and Communications in Medicine (DICOM) Part 14: Grayscale Standard Display Function

The document is available via the NEMA website (or google DICOM).

Cheers

-----Original Message-----
From: Jerome Mutterer <[hidden email]>
Sent: Mon, 10 Jul 2006 18:52:38 +0200
Subject: Re: grayscale displays and human vision

[...]
Does anyone know of a plugin for calculating the slope of signal intensity
over a time series of image data?
hi there,
I tried to compile the ImageJ source code on my server. I tried two ways:

1. Download the ImageJ source code and run "ant". It compiles the source code into the ij.jar file successfully (with my new plugin class) and launches ImageJ automatically, but I can't see my new plugin in the menu bar, and when I click Plugins>Compile and Run, it gives me the error message "This JVM does not include the javac compiler. Javac is included...".

2. Download the package for Windows (without Java, 1.6MB) and install it. ImageJ launches successfully, but when I click Plugins>Compile and Run, it gives me the same error message: "This JVM does not include the javac compiler. Javac is included...".

I wonder which way is best for compiling the source code and what is wrong with my system settings. I have Java 1.5 installed on the server with ant and cygwin. The Windows environment settings are "path--java\jdk bin", "JAVA_HOME---java\jdk", "JVM---java\jdk", etc.

thanks,

Michael
Hi Michael,
You could try making an environment variable CLASSPATH and including the directory where the 'tools.jar' file resides. For my PC that would be:

CLASSPATH = C:\Program Files\Java\jdk1.5.0_06\lib

Also check the 'ImageJ.cfg' file in the ImageJ directory and see what JVM it's calling. I don't know if the settings in the 'ImageJ.cfg' file will override the JAVA_HOME environment variable (I could test this, but I am too lazy).

Greetings,
Edwin

> hi there,
> [...]
Commandeur wrote:
> You could try making an environment variable CLASSPATH and
> include the directory where the 'tools.jar' file resides.
> For my pc that would be:
>
> CLASSPATH = C:\Program Files\Java\jdk1.5.0_06\lib

Hi,

The tools.jar file itself should be directly on the classpath:

CLASSPATH = C:\Program Files\Java\jdk1.5.0_06\lib\tools.jar

and then:

java .... -cp %CLASSPATH% ij.ImageJ

Thomas

> [...]

--
/*****************************************************/
Thomas Boudier, MCU
Université Pierre et Marie Curie
Imagerie Intégrative, Institut Curie - INSERM U759.
Tel: 01 69 86 31 72  Fax: 01 69 07 53 27
/*****************************************************/
In reply to this post by David S Liebeskind
> Does anyone know of a plugin for calculating the slope of
> signal intensity over a time series of image data?

Here is how you can calculate the slope:

1. In Analyze>Set Measurements, check "Mean Gray Value" and uncheck all other options.
2. Run Image>Stacks>Plot Z-axis Profile.
3. Right-click in the Results window and select "Copy All" ("Copy" in v1.37) from the drop-down menu.
4. Open the Curve Fitter (Analyze>Tools>Curve Fitting).
5. Select the default data set, then paste (ctrl-v).
6. Click "Fit" and the slope is displayed in the Log window as the "b" parameter.

-wayne
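Outside ImageJ, the same straight-line fit (the "b" parameter of y = a + b*x) can be sketched with NumPy. The intensity values below are invented illustration data, not from any real measurement:

```python
import numpy as np

# Mean intensity per frame, e.g. copied from ImageJ's Plot Z-axis
# Profile results (these particular numbers are made up).
frames = np.arange(6)                      # time points 0..5
means  = np.array([10.0, 12.1, 13.9, 16.2, 18.0, 20.1])

# Degree-1 polyfit is an ordinary least-squares line y = slope*x + intercept,
# the same model the ImageJ Curve Fitter's "Straight Line" option uses.
slope, intercept = np.polyfit(frames, means, 1)
print(slope)                               # ~2.01 intensity units per frame
```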