Dear all,
I want to calculate the fluorescence intensity from an RGB image that I have thresholded. To find the total fluorescence intensity in a ROI, am I right in thinking I can take the 'integrated density' value in the measurements section? Also, should I calculate the brightness value with or without weighted RGBs? What is the weighting for?

Thank you very much for your help; I am afraid I know very little about the principles of image analysis and processing.

Sinead
Hi Sinead,
You should integrate (add up) the color or intensity in the thresholded area and divide it by the area. It is normal to weight the RGB colors differently:

Intensity = 0.3*R + 0.59*G + 0.11*B

The reason for this is that we have fewer blue cone receptors in our eyes than red and green cones.

The ShapeLogic plugin has a color particle analyzer that calculates both the average intensity and the average color of colored spots.

-Sami Badawi
http://www.shapelogic.org
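P.S. If you want to do the same calculation outside ImageJ, here is a rough sketch in Python/numpy (this is not ShapeLogic code; the arrays `rgb` and `mask` are synthetic placeholders standing in for your own image and threshold):

import numpy as np

# Rough sketch only: weighted brightness, ROI area, and integrated
# density from an RGB image plus a threshold mask.
# `rgb` and `mask` are synthetic placeholders for your own data.
rng = np.random.default_rng(0)
rgb = rng.integers(0, 256, size=(64, 64, 3)).astype(np.float64)
mask = rgb.sum(axis=-1) > 400            # stand-in for your real threshold

# Perceptual weighting, same coefficients as in the formula above
brightness = 0.3 * rgb[..., 0] + 0.59 * rgb[..., 1] + 0.11 * rgb[..., 2]

area = mask.sum()                             # ROI area in pixels
integrated_density = brightness[mask].sum()   # total intensity in the ROI
mean_intensity = integrated_density / area    # average intensity per pixel

print(area, integrated_density, mean_intensity)

Note that if your signal comes from a single fluorophore, it is also common to skip the weighting entirely and just sum the one channel that carries your signal; the weighting matters when you want a perceptual brightness of the full color image.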
On Jul 22, 2009, at 10:50 PM, Sami Badawi wrote:
> It is normal to weight the RGB colors differently:
> Intensity = 0.3*R + 0.59*G + 0.11*B
> The reason for this is that we have fewer blue cone receptors in our
> eyes than red and green cones.

Well... not exactly. Number is important - but so is the wiring diagram (photoreceptors to ganglion cells) and any number of other factors. There are, indeed, fewer "blue" cones - but the ratio of "red" to "green" cones varies wildly from individual to individual (without markedly changing the relative weights of R and G).

The numbers above come from psychophysical experiments; at the time they were done, no one *knew* how many "red", "green", and "blue" photoreceptors were in the human eye. They are the result of measuring the response of the total system - how they relate to individual components is still an open question. And, of course, they only work for the "average human". It's physiology, not physics!

Actually, there's a nice story you can tell that explains the 2:1 ratio of "green" to "red" cones based on how they are used to convert from [R, G, B] to [Red-Green, Blue-Yellow, White-Black]. Alas, this nice story does not explain the low weight for "blue" cones. I like to think that blue has a low weight because "the sky is blue". It's as good a reason as any.

--
Kenneth Sloan
[hidden email]
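P.S. For the curious - one textbook-style version of that opponent conversion looks something like the Python sketch below. The coefficients vary from model to model, so treat them as illustrative, not as *the* human values:

# Illustrative opponent-color transform:
# [R, G, B] -> [Red-Green, Blue-Yellow, White-Black].
# Coefficients are one common textbook choice, not measured human values.
def to_opponent(r, g, b):
    red_green = r - g                 # red vs. green channel
    blue_yellow = b - (r + g) / 2.0   # blue vs. yellow channel
    white_black = (r + g + b) / 3.0   # achromatic (white-black) channel
    return red_green, blue_yellow, white_black

print(to_opponent(0.3, 0.59, 0.11))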
That is a great help, thank you very much :).
I have one question now, following on from the tutorial you posted, about watershed algorithms to recognise touching objects. If I have stained cells with DAPI and a plasma membrane stain, where the cells are touching at the plasma membrane, can I apply some sort of watershed that will recognise and outline as cells those regions that contain the nuclei? Basically, I mean a way for the program to recognise as objects the real cells, as opposed to the background regions that theoretically are also 'enclosed' by overlapping membrane?

And moreover, if I have a z-stack of images, can I go about linking the corresponding cell outlines in each slice as being from one 'cell'? Or will this be something I have to do manually?

Thanks again for your help,
Sinead
Hi Sinead,
On Jul 24, 2009, at 10:47 AM, Sinead Roberts wrote:

> I have one question now, following on from the tutorial you posted,
> about watershed algorithms to recognise touching objects. If I have
> stained cells with DAPI and a plasma membrane stain, where the cells
> are touching at the plasma membrane, can I apply some sort of
> watershed that will recognise and outline as cells those regions
> that contain the nuclei?

I'm not quite sure what you mean. Do you mean use information from the plasma membrane stain to help determine the outline of single nuclei?

> Basically, I mean a way for the program to recognise as objects the
> real cells, as opposed to the background regions that theoretically
> are also 'enclosed' by overlapping membrane?

Maybe you can explain better by sending me a sample image and an explanation of what you really want to measure.

> And moreover, if I have a z-stack of images, can I go about linking
> the corresponding cell outlines in each slice as being from one
> 'cell'? Or will this be something I have to do manually?

You can use ImageJ in 3D and count objects in 3D in the Fiji ImageJ distribution; see Analyse > 3D Objects Counter.

First you need to segment the nuclei in 3D (in fact you can do it per z slice, with a global or local threshold), then use the 3D Objects Counter plugin to measure the objects in 3D.

If your nuclei seem to touch each other and you need to separate them, you can use the watershed separation method in ImageJ to separate them before counting.

cheers
dan

Dr. Daniel James White BSc. (Hons.) PhD
Senior Microscopist / Image Visualisation, Processing and Analysis
Light Microscopy and Image Processing Facilities
Max Planck Institute of Molecular Cell Biology and Genetics
Pfotenhauerstrasse 108
01307 DRESDEN
Germany

+49 (0)15114966933 (German Mobile)
+49 (0)351 210 2627 (Work phone at MPI-CBG)
+49 (0)351 210 1078 (Fax MPI-CBG LMF)

http://www.bioimagexd.net  BioImageXD
http://pacific.mpi-cbg.de  FIJI - is just ImageJ (Batteries Included)
http://www.chalkie.org.uk  Dan's Homepages
https://info.med.tu-dresden.de/MTZimaging/index.php/Main_Page  Dresden Imaging Facility Network

dan (at) chalkie.org.uk ( white (at) mpi-cbg.de )
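P.S. If it helps to see the shape of that pipeline outside Fiji, here is a rough Python sketch of the same idea using scipy and scikit-image. It is only a sketch: `stack` is a synthetic placeholder for your DAPI z-stack, and the Fiji plugins do considerably more than this.

import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

# Rough sketch of the recipe above: threshold each z slice, split
# touching nuclei with a watershed, then label and measure in 3D.
# `stack` (z, y, x) is a synthetic placeholder, not real data.
rng = np.random.default_rng(1)
stack = ndi.gaussian_filter(rng.random((20, 128, 128)), sigma=4)

# 1. Segment per z slice with a global threshold (Otsu here)
binary = np.zeros(stack.shape, dtype=bool)
for z in range(stack.shape[0]):
    binary[z] = stack[z] > threshold_otsu(stack[z])

# 2. Watershed on the 3D distance transform to separate touching nuclei
distance = ndi.distance_transform_edt(binary)
coords = peak_local_max(distance, min_distance=10, labels=binary)
markers = np.zeros(stack.shape, dtype=int)
markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
labels = watershed(-distance, markers, mask=binary)

# 3. Count and measure the separated 3D objects
n_objects = int(labels.max())
voxels_per_object = ndi.sum(binary, labels, index=range(1, n_objects + 1))
print(n_objects, voxels_per_object)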