Dear All,
I am looking for a camera compatible with ImageJ. It should have the following specs: at least 2-3 megapixels, colour, and C-mount. Surfing the internet I see only expensive solutions. Does anyone know of affordable options (2000-3000 euro max)?

Thanks in advance!

Peter van Loon
If your requirement is for still images, even a medium-price digital SLR camera such as the Nikon D90 will give superb results at higher resolution (14 megapixels). You will also have to buy an adaptor to attach such a camera to a C-mount. These cost around 400 euros from a company in Austria, see http://www.lmscope.com/produkt22/LM_Mikroskop_Adapter_Mikroskope_en.shtml but also check Meiji Techno as a possible source, or Martin Microscopes if you are in the USA. There is advice on the Austrian website on which DSLR is best. High-end Canon EOS models seem to have the best computer control software, but may exceed your budget.
If you are after video, the Nikon D90 gives high-definition streaming video (so-called 'Live View') which is compressed but looks superb on an HDMI monitor. This is very useful if you need to focus a microscope or other device. High-end DSLRs tend to provide the best high-resolution video outputs. Very cheap USB cameras (around 400 euros) with a C-mount (e.g. GXCam) are available which can give up to 5 megapixel resolution in colour, but the refresh rate is only 5 frames per second, which is a pain for focussing and following movement. These come with everything necessary for attaching to a microscope.
On Saturday 12 Feb 2011, bradscopegems wrote:
> If your requirement is for still images, even a medium-price digital SLR
> camera such as the Nikon D90 will give superb results at higher resolution
> (14 Megapixels).

The resolution is mainly driven by the microscope optics. With a 14 MP camera one gets what is called "empty magnification". I would suggest finding out what the resolution values of the microscope objectives are (from the manufacturer manuals) and starting to think from there. Depending on the field width of the camera and the optics, I think anything over 4 MP or thereabouts will not capture much more detail than the optical system of a relatively good brightfield microscope can provide.

Of course one can subsample large images with empty magnification and reduce file size while keeping the useful amount of detail, but there are other disadvantages. With an SLR one will struggle to make illumination modifications on the fly to correct the brightfield background using the transmittance method. You can do it, but it will be more time consuming, as one cannot check histogram saturation on the fly, and must instead transfer the images to the computer, reload them, and so on. Whether this is crucial depends on how many images one expects to acquire.

Also remember that colour cameras using Bayer masks interpolate colours, and that further reduces the resolution of the image data. A good alternative is a greyscale camera with an R-G-B filter wheel or tunable filter, so each pixel is exposed three times (R, G and B). This avoids the interpolation. Such things are, sadly, more expensive.

If I had to buy a new camera, I would try to find out what is supported right now (some camera manufacturers produce IJ plugins that can drive their cameras, but be aware that some of these only support manual acquisition, not acquisition driven from a plugin or macro, which is really useful). Check the Micro-Manager pages to see what is supported too. If you decide on a camera supported by an ImageJ plugin but without macro support, you could still use the IJ_Robot plugin to try to automate it. It is an ugly way of doing it, but it works.

I hope this is useful.
Cheers

Gabriel
Dear Gabriel,
You will not find resolution values in the scant literature supplied by microscope manufacturers, but you can calculate how many pixels you will need from the N.A. and magnification written on the objective. Assuming Nyquist sampling, the pixel separation (assuming no extra magnification) is 21, 14 and 16 microns for three commonly-used objectives (60x N.A. 1.4, 20x N.A. 0.7 and 10x N.A. 0.3). So, if we wish to capture a 20 x 20 mm square area of the intermediate image (a reasonably large fraction of what we see in an eyepiece), we need 0.9, 2 and 1.5 megapixels minimum (i.e. to record full detail without any empty magnification). In practice, microscopists usually choose to work at about 3x the linear magnification of the Nyquist minimum, so the preferred number of pixels would be nine times this, unless the camera was recording a reduced area of the intermediate image. This means that the high pixel counts of modern DSLRs are not overkill. It is also why standard PAL video resolution (about 0.4 megapixels maximum) is so hopeless for microscopy, forcing users to capture only a tiny fraction of the eyepiece field.
Brad Amos

On 12 February 2011 12:20, Gabriel Landini <[hidden email]> wrote:
> The resolution is mainly driven by the microscope optics. With a 14 MP
> camera one gets what is called "empty magnification". [...]

--
Dr W. B. Amos FRS
MRC Laboratory of Molecular Biology
Hills Road, Cambridge CB2 0QH
telephone 44 (0)1223 411640 (lab)
fax 44 (0)1223 213556
Emails [hidden email] or [hidden email]
Websites: (Lab) http://www2.mrc-lmb.cam.ac.uk/SS/Amos_B/
(Personal) http://homepage.ntlworld.com/w.amos2/
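A minimal sketch of the arithmetic behind Brad's figures. His quoted pixel separations are reproduced by pitch = M * lambda / NA with an assumed wavelength of about 0.5 um; the exact prefactor depends on which resolution criterion one adopts, so treat the constant as an assumption rather than his exact formula. Only the quoted pitches and megapixel counts come from the post itself.

```python
# Hedged sketch: reproduces the pixel pitches and megapixel counts quoted
# above, assuming pitch = M * lambda / NA with lambda = 0.5 um. The
# prefactor is an assumption; only the quoted numbers are from the post.
objectives = [(60, 1.4), (20, 0.7), (10, 0.3)]  # (magnification, N.A.)
lam_um = 0.5                                    # assumed wavelength, um
field_mm = 20.0                                 # side of captured square

for mag, na in objectives:
    pitch_um = mag * lam_um / na            # pixel pitch at the intermediate image
    n_side = field_mm * 1000.0 / pitch_um   # pixels along one side
    mp_min = n_side ** 2 / 1e6              # Nyquist-minimum megapixels
    print(f"{mag}x N.A. {na}: pitch ~{pitch_um:.0f} um, "
          f"{mp_min:.1f} MP minimum, {9 * mp_min:.0f} MP at 3x linear")
```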
Dear Brad,
I've been reading about the Nyquist theorem and so on, and I don't get the math. It seems 3x is at the maximum limit of the linear region, but how do you calculate the final total resolution required? You say for 60x N.A. 1.4 we need 0.9 megapixels, and that to work at 3x linear magnification the preferred number is nine times this. Can you explain or reference the math?

Thanks indeed,
Ramon

On 12 February 2011 14:22, Brad Amos <[hidden email]> wrote:
> Assuming Nyquist sampling, the pixel separation (assuming no extra
> magnification) is 21, 14 and 16 microns for three commonly-used
> objectives (60x N.A. 1.4, 20x N.A. 0.7 and 10x N.A. 0.3). [...]
In reply to this post by bradscopegems-2
Thank you for your reply.
I am now seriously considering a DSLR option. Canon has the EOS Utility and Nikon the Camera Control software to capture images under PC control, but I don't have experience using them. So which one would be best? Of course this software can't be controlled by ImageJ, but it is still an option.

Normally people use a professional camera with a C-mount (that's why I mentioned C-mount as a spec), but this is not necessary for me. I won't use it for microscopy but for image analysis on plants. Sorry, I was not clear on that point.

DSLR cameras have colour interpolation (Bayer), but most professional cameras for image analysis use it as well (except 3-CCD cameras or cameras with a colour wheel). So what will be the biggest difference? Noise?

So if, for budget reasons, I have to choose between a low-resolution professional camera and a high-resolution DSLR camera, maybe the latter is best? Does anyone have experience using a DSLR camera for image analysis?

Thanks in advance!

Peter van Loon
In reply to this post by bradscopegems-2
On Saturday 12 Feb 2011, you wrote:
> You will not find resolution values in the scant
> literature supplied by microscope manufacturers,

Hi Brad,
The Olympus objectives we use came with some blurb where it was stated. However, I just checked and, strangely, they do not provide this information on their co.uk website. The objectives have their specifications listed, but not resolving power.

> So, if we wish to capture a 20 x 20 mm square area of the intermediate
> image (a reasonably large fraction of what we see in an eyepiece) we need
> 0.9, 2 and 1.5 megapixels minimum (i.e. to record full detail without any
> empty magnification).
> In practice, microscopists usually choose to work with about
> 3x the linear magnification at the Nyquist minimum, so this means that the
> preferred number of pixels would be nine times this, unless the camera was
> recording a reduced area of the intermediate image. This means that the
> high pixel numbers of the modern DSLRs are not overkill.

My point (maybe I did not articulate it well) is that the data stored in such a large number of pixels would not add anything in terms of image detail, and yet it would require more storage and more processing. If this is not taken into consideration, one risks processing and reporting morphological detail which could not be resolved in the first place. This is of course obvious to experienced microscopists, but perhaps not to those who have not thought about it.

Let's use the example of fractal objects imaged with such a level of empty magnification. Applying the yardstick method, their perimeters measured with small yardstick sizes will appear smoother than they really are, because detail of sizes close to the image pixels cannot be resolved.

I must confess that I wasn't aware of the 3x preference by microscopists. Is there a reason for this number?
Regards,

Gabriel
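For readers unfamiliar with the yardstick (divider) method Gabriel mentions, here is a minimal illustrative sketch; the function and test curve are hypothetical, not taken from any specific plugin. It shows his point: on a coarsely sampled ("pixel-limited") outline, small yardsticks stop revealing extra roughness, so the perimeter reads smoother than the real object.

```python
import numpy as np

# Minimal divider/yardstick sketch: walk the digitised outline in strides
# of fixed length `step` and sum the strides. On a pixel-limited curve,
# tiny yardsticks cannot reveal roughness finer than the sampling.
def yardstick_length(points, step):
    total, i = 0.0, 0
    pos = points[0]
    while True:
        j = i + 1
        while j < len(points) and np.linalg.norm(points[j] - pos) < step:
            j += 1
        if j == len(points):
            return total + np.linalg.norm(points[-1] - pos)  # leftover tail
        total += step
        pos, i = points[j], j

def wiggle(t):  # a rough test outline
    return np.column_stack([t, 0.02 * np.sin(60 * np.pi * t)])

t_fine = np.linspace(0.0, 1.0, 4000)   # well-resolved curve
t_coarse = np.linspace(0.0, 1.0, 80)   # "pixelated" version of the same curve
for step in (0.1, 0.03, 0.01):
    print(f"step {step}: fine {yardstick_length(wiggle(t_fine), step):.2f}, "
          f"coarse {yardstick_length(wiggle(t_coarse), step):.2f}")
```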
Dear Gabriel,
The 3x is anecdotal and subjective, but I can account for it by reference to my previous message about the intensity profile of the image of two point objects separated by the Rayleigh distance. This curve is shown in practically every textbook dealing with microscope optics. At the minimum Nyquist sampling frequency, the intensity profile across the resolved region will consist of just three points: two high points corresponding to the peaks and a low point in between. Only at about 3x this frequency will one begin to see something of the shape of the peaks and of the valley in between. And, of course, a high signal-to-noise ratio, requiring many hundreds of photons coming from each resolved point, is necessary to see the dip at all.
Brad

On 12 February 2011 17:06, Gabriel Landini <[hidden email]> wrote:
> I must confess that I wasn't aware of the 3x preference by microscopists.
> Is there a reason for this number? [...]
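A small numerical sketch of Brad's picture, approximating each Airy peak by a Gaussian (an assumption made for brevity; the sigma is chosen so that two peaks one Rayleigh distance apart show the classical ~26% dip). At the Nyquist minimum the two-point profile really is just three samples across the resolved region; at 3x the sampling traces the peaks and the dip.

```python
import numpy as np

# Two point-images one Rayleigh distance apart, sampled at the Nyquist
# minimum and at 3x that rate. The Gaussian stand-in for the Airy core
# is an assumption; sigma gives a Rayleigh-like ~26% central dip.
rayleigh = 1.0
sigma = rayleigh / 2.8

def profile(x):
    return (np.exp(-(x + rayleigh / 2) ** 2 / (2 * sigma ** 2)) +
            np.exp(-(x - rayleigh / 2) ** 2 / (2 * sigma ** 2)))

for oversample, label in [(1, "Nyquist minimum"), (3, "3x Nyquist")]:
    pitch = (rayleigh / 2) / oversample  # Nyquist pitch = half the Rayleigh distance
    xs = np.arange(-rayleigh, rayleigh + 1e-9, pitch)
    print(f"{label:15s}", np.round(profile(xs), 2))
```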
In reply to this post by Peter van Loon
On Saturday 12 Feb 2011, <[hidden email]> wrote:
> Normally people are using a professional camera normally with C-mount
> (that's why I mentioned C mount as a spec).

C-mount is a common thread in cameras and microscope adaptors. To attach a Nikon SLR camera to a microscope you need an F-mount adaptor.

> DLSR camera's have color interpretation (Bayer) but most professional
> camera's for image analysis use it also

Some do, some don't.

> So what will be the biggest difference? Noise?

I already mentioned this in a previous email. The Bayer mask over the sensor has green, red and blue pixels. To assign the 3 colours to a final "image pixel", the Bayer mask is interpolated. So in reality, the resolution the manufacturer reports is not accurately the one in the acquired image. Have a look at the Wikipedia article on the Bayer mask:
http://en.wikipedia.org/wiki/Bayer_mask

Cameras with the wheel can capture R, G and B for each pixel. The colour is more accurate as it does not need to be interpolated, and so the resolution is as specified by the number of pixels in the sensor. The problem is that, as the images are taken in sequence, the objects must not move between shots. The tunable filter cameras are quicker than the filter wheel ones (or at least the one I have is quicker than the wheel ones I saw).

> So if I have to choose because of budget reasons between a low resolution
> professional camera or a high resolution DSLR camera, maybe the last is the
> best?

Which low-resolution professional camera and which DSLR? The question does not make much sense yet.

> Does anyone has experiences with using DSLR camera with image analysis?

Since you don't say what you are trying to achieve, here are some generic suggestions. Do not use JPEGs; use uncompressed (non-lossy) formats for your images (TIFF, or RAW; there is a DC-RAW plugin somewhere to read these). Use a standardised illumination source so the images are comparable. Use a fixed focal length in your shots so the magnification is also comparable. Add a colour calibration chart to calibrate the colours and the magnification, see:
http://imagejdocu.tudor.lu/doku.php?id=plugin:color:chart_white_balance:start
Use the camera in Manual mode so you are in control of all the settings. Use a tripod.
Cheers
Gabriel
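To make the Bayer interpolation point concrete, here is a minimal, hypothetical demosaicing sketch (bilinear, RGGB; real cameras use more sophisticated pipelines, so this is only an illustration of the principle). Each output pixel's missing colours are averages of neighbours, which is exactly where the resolution loss comes from.

```python
import numpy as np
from scipy.ndimage import convolve

# Minimal bilinear demosaic of an RGGB Bayer mosaic (an illustrative toy,
# not any specific camera's pipeline). Missing colour values are averaged
# from neighbouring sites, which is the resolution loss in question.
def demosaic_bilinear(img):
    h, w, _ = img.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True                          # R sites
    masks[0::2, 1::2, 1] = masks[1::2, 0::2, 1] = True   # G sites
    masks[1::2, 1::2, 2] = True                          # B sites

    k_g = np.array([[0, 1, 0], [1, 4, 1], [0, 1, 0]]) / 4.0   # G interpolation
    k_rb = np.array([[1, 2, 1], [2, 4, 2], [1, 2, 1]]) / 4.0  # R/B interpolation
    out = np.empty_like(img)
    for c, k in ((0, k_rb), (1, k_g), (2, k_rb)):
        plane = np.where(masks[..., c], img[..., c], 0.0)  # keep sampled sites only
        out[..., c] = convolve(plane, k, mode='mirror')
    return out

# One-pixel black/white stripes sit at the sensor's Nyquist limit; after
# demosaicing they are visibly softened (0.5 where the truth is 1.0).
stripes = np.zeros((8, 8, 3))
stripes[:, 0::2, :] = 1.0
print(np.round(demosaic_bilinear(stripes)[2:4, 2:6, 1], 2))  # green channel
```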
In reply to this post by bradscopegems-2
Brad,
3x is my experience from developing acoustical imaging systems. In this case the pixels are computed, so it is important to get the spacing right to make the computation efficient. I've been experimenting with the factor for years, and 3 seems to be the optimum for simple beamforming methods that are limited in resolution by the Rayleigh expression. Thanks for the Nyquist interpretation.
Bob

Robert P. Dougherty
President, OptiNav, Inc.
1414 127th Pl NE #106
Bellevue, WA 98005
(425) 891-4883
FAX (425) 467-1119
[hidden email]
www.optinav.com

On Feb 12, 2011, at 12:37 PM, Brad Amos <[hidden email]> wrote:
> The 3x is anecdotal and subjective, but I can account for it by reference
> to my previous message about the intensity profile of the image of two
> point objects separated by the Rayleigh distance. [...]
In reply to this post by bradscopegems-2
Hi,
I'm responding to the question of whether there is a reason for the 3x sampling reported to be commonly used by microscopists. Since no-one else has brought the following up, I decided it might be useful here (not in terms of buying a camera, but as background to the discussion). I'll apologize in advance if this seems too obvious and/or pedantic.

To start, forget about the optics, the sensor and everything else about producing a digital image, and only think about the image as a digital signal. For this purpose, you can even assume that everything leading up to the image itself is "perfect" and that the only loss of resolution/signal when going from the original object to the image is the fact that the final image is digital (not sampled using infinitely small steps). When one does this, the "3x" sampling referred to in previous postings becomes relatively easy to explain, and not so anecdotal and subjective.

People have mentioned the Nyquist sampling frequency but no-one has gone into any detail about it. This concept comes from information theory, and the microscopy community I am familiar with (electron microscopy) lumps a lot of not terribly rigorous ideas about it into phrases such as the Nyquist limit, Shannon sampling (named after Claude Shannon, one of the fathers of information theory), etc. There is a lot of information about these ideas in various places on Wikipedia, and to quote from there, the Nyquist-Shannon sampling theorem says:

If a function x(t) contains no frequencies higher than B hertz, it is completely determined by giving its ordinates at a series of points spaced 1/(2B) seconds apart.

The inverse of this is also true: if the sampling is 1/(2B), the maximum possible frequency contained in a (sampled) function y(t) is B hertz. The word "possible" is important in that last sentence: the original function x(t) may or may not have frequencies higher than B hertz, but the function y(t), sampled at 1/(2B), CANNOT have any frequencies higher than B (that's the point of the theorem). Any frequencies originally present in x(t) below B will be preserved in the sampled signal, and if there were no such frequencies (a totally aperiodic signal, for example), the sampled function will not have any lower frequencies either...

For a digital image, the "function" is the image itself (a 2d function, but that doesn't matter), the sampling is the pixel size (how much of the original object is represented by a pixel in the digital image, a spatial instead of a temporal sampling), and so the maximum resolution that can be contained in a digital image with pixel size N is 1/(2N). This has nothing to do with the number of pixels in an image, and everything to do with the pixel size itself (and note that this is not the pixel size of the sensor, but the apparent pixel size in the image).
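A quick one-dimensional sketch of the theorem's consequence (the frequencies and sampling rate are illustrative): a tone below the Nyquist frequency survives sampling, while one above it masquerades as a lower frequency.

```python
import numpy as np

# Sampling-theorem sketch in 1D: at fs = 100 Hz the Nyquist frequency is
# 50 Hz. A 20 Hz tone survives; an 80 Hz tone aliases to |100 - 80| = 20 Hz.
fs = 100.0
t = np.arange(0, 1, 1 / fs)
for f in (20.0, 80.0):
    x = np.sin(2 * np.pi * f * t)
    spec = np.abs(np.fft.rfft(x))
    f_apparent = np.fft.rfftfreq(len(t), 1 / fs)[spec.argmax()]
    print(f"true {f:.0f} Hz -> apparent {f_apparent:.0f} Hz")
```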
One way to make this make sense is to think about resolution as meaning "the ability to distinguish that two features are indeed two features and not one." This is related to the Rayleigh criterion, and brings into the discussion things like numerical aperture when dealing with the resolution of lenses and cameras. But since we are assuming all of that is perfect for the moment, all we care about is whether we can tell two objects from one in the final digital image.

If the features are close enough, a digital image will show them as a single large feature (i.e., it can't resolve the large feature into two separate, smaller things). In order to tell that there are really two features beside one another in an image, one needs at minimum a single pixel between the two that clearly belongs to neither. If the pixel size is small enough, there will be a pixel between the two that distinguishes one from the other, and it is this "second pixel", as one travels from digital point to digital point, that says the resolution is sufficient to tell the two features are not a single large "thing."

At one extreme, if the "features" are only a pixel wide (or better said, if the signal from a feature can be contained in a single pixel), this all means that one needs 2 pixels (in all directions, but think about a 1d case in this context) to determine that one is seeing a single, isolated feature and not two adjacent ones: the first pixel has the signal, the second pixel says that signal isn't there, and the third pixel either says a signal is still not there or that there is a new signal present (the second feature). To put this back into the framework of Nyquist-Shannon sampling, the point-like features we want to see have size x (equal to a pixel in the digital image), and the resolution that defines the ability to tell that there are two such features side by side is 1/(2x). This is all a TERRIBLY non-rigorous description of the Nyquist-Shannon sampling theorem, but it makes the point reasonably well and fits with some of what has been said before.

This all might seem to indicate that the "3x" value noted in previous postings should be "2x" instead, and indeed, the maximum possible resolution in a digital image is 1/(2x) (the Nyquist-Shannon sampling theorem again). However, if a digital image is manipulated in just about any way that affects its sampling (pixellation), the image loses information. For example, transformations such as rotation by amounts that aren't multiples of 90 degrees, or shifts in x and/or y by fractions of a pixel, require an interpolation of the image, and that interpolation leads to information loss. Another way to describe this is to say that the original maximum resolution of 1/(2x) is degraded after such image transformations even though the sampling is still x. This in turn means that when the resolution of a digital image is at the limit of what the user needs in order to see a feature, after relatively common image transformations the resolution will no longer be sufficient and the feature(s) will vanish.

A reasonable solution is to over-sample so that the desired resolution is well within the 1/(2x) limit imposed by sampling in steps of x. To give a concrete example, if the pixels in a digital image represent 500 nm, the maximum resolution is 1/(2*500) and one can just distinguish 1000 nm features. If the goal is to see such features, it would be better NOT to live on the edge of detectability, and to sample at ~330 nm/pixel (so that a resolution limit of 1/(3x) allows the 1000 nm features to be seen), or even 250 nm (where 1/(4x) gives the desired resolution to see a 1000 nm feature). In this case, we often say that we want 1000 nm resolution, and in order to obtain that, we need to choose our sampling distance x so that 3x (or even 4x) equals 1000 nm (instead of the absolute minimum of 2x).
One could obviously over-sample by a factor of (say) 5 and not even think about sampling issues having an effect on feature resolution. However, the 5x over-sampled image has 25x more pixels, and takes 25x more computer memory, storage space, etc. This massive over-sampling can be the "empty resolution" that people sometimes mention (though other types of empty resolution are also possible). Over-sampling at 1.5x only increases things by a small factor (2.25x) and still results in virtually no loss of feature resolution (and no empty resolution).

The above discussion is certainly the root of the "3x sampling" commonly mentioned in the cryo-electron microscopy field, and I suspect it plays a major part in the sampling schemes of most other digital imaging processes. LOTS of other things affect the resolution in a digital image, and these can be as important as what is discussed here. But since it is theoretically impossible to get resolution better than 1/(2x) from an image sampled in steps of x, one is most often better served by sampling a bit more finely: if one wants a resolution of 1/Q, sample at Q/3 instead of Q/2!

Brad Amos wrote:
> The 3x is anecdotal and subjective, but I can account for it by reference
> to my previous message about the intensity profile of the image of two
> point objects separated by the Rayleigh distance. [...]

--
David Gene Morgan
cryoEM Facility
320C Simon Hall
Indiana University Bloomington
812 856 1457 (office)
812 856 3221 (EM lab)
http://bio.indiana.edu/~cryo
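A short sketch of the interpolation-loss argument above: rotating a sampled image by a non-multiple of 90 degrees and back involves two interpolations, and the round trip measurably degrades a broadband test pattern that is full of near-Nyquist detail (the angle and image size are arbitrary choices).

```python
import numpy as np
from scipy.ndimage import rotate

# Rotate a broadband test image by 7.3 degrees and back (bilinear
# interpolation both ways); detail near the Nyquist limit does not
# survive, which is the reason for sampling more finely than 2x.
rng = np.random.default_rng(0)
img = rng.normal(size=(256, 256))   # white noise: full of Nyquist-level detail

back = rotate(rotate(img, 7.3, reshape=False, order=1),
              -7.3, reshape=False, order=1)

core = (slice(32, -32), slice(32, -32))   # ignore edge effects
rms_err = np.sqrt(np.mean((img[core] - back[core]) ** 2))
print(f"RMS change after rotate/unrotate: {rms_err:.2f} "
      f"(original RMS {img[core].std():.2f})")
```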
Our lab bought a new Tucsen cooled CCD camera for something like $700 USD through eBay.
Be wary of purchasing lab equipment through eBay, though, because it is most likely stolen. We still haven't tested ours out yet. The cooling element is a fan; I prefer thermoelectrics, especially for extremely high resolution images.

________________________________________
From: ImageJ Interest Group [[hidden email]] On Behalf Of David Gene Morgan [[hidden email]]
Sent: Monday, February 14, 2011 5:41 PM
Subject: Re: affordable camera suggestions

[full quoted message trimmed]
In reply to this post by David Gene Morgan
Hi all,
On Mon, 14 Feb 2011 23:41:38 +0100, David Gene Morgan <[hidden email]> wrote:

> One could obviously over-sample by a factor of (say) 5 and not even
> think about sampling issues having an effect on feature resolution.
> However, the 5x over-sampled image has 25x more pixels, and takes 25x
> more computer memory, storage space, etc.

With today's computers the memory and storage issues are probably not so important anymore. I think the most critical issue is that with more pixels (i.e. smaller pixel size) the number of photons per pixel decreases, and this decreases the signal-to-noise ratio.

> But since it is theoretically impossible to get resolution that is
> better than 1/(2x) from an image sampled in steps of x, one is most
> often better served by sampling a bit more finely: If one wants a
> resolution of 1/Q, sample at Q/3 instead of Q/2!

You can also see a geometrical derivation that leads to a minimal sampling of 2.8x the optical resolution in Pawley's Handbook of Biological Confocal Microscopy:
<http://books.google.com/books?id=IKcPnaNPrhoC&lpg=PA64&ots=r8SqLx2avr&dq=nyquist%202d%20sampling&pg=PA65#v=onepage&q&f=false>

I have also seen 2.3x mentioned, apparently also somehow derived from Nyquist for 2D signals, but without the actual derivation. So while 3x might be "anecdotal", it is not far away from mathematically derived values.

Best,
Janne
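A minimal sketch of Janne's photon-budget point, assuming pure shot noise and a fixed photon flux per unit area (the counts are illustrative): finer sampling divides the same photons among more pixels, and per-pixel SNR falls as the square root of the count.

```python
import numpy as np

# Shot-noise sketch: a fixed photon flux shared among ever-finer pixels.
# Poisson statistics give per-pixel SNR = sqrt(photons per pixel), so
# sampling k times finer (linearly) costs a factor of k in SNR.
rng = np.random.default_rng(1)
flux = 10000                        # photons landing on one coarse pixel
for k in (1, 2, 3):                 # 1x, 2x, 3x finer linear sampling
    per_pixel = flux / k**2
    counts = rng.poisson(per_pixel, size=100000)
    snr = counts.mean() / counts.std()
    print(f"{k}x finer: {per_pixel:6.0f} photons/pixel, "
          f"SNR ~ {snr:5.1f} (theory {np.sqrt(per_pixel):5.1f})")
```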
In reply to this post by David Gene Morgan
On Monday 14 Feb 2011 22:41:38 David Gene Morgan wrote:
> I'm responding to the question of whether there is a reason for the 3x
> sampling reported to be commonly used by microscopists.

I think both Brad's and your explanations were very useful, thank you.

There seem to be a variety of possible combinations of extra optics in the extension tubes that one could add: x0.25, x0.5, x0.75, x1, x1.2, x1.5, x2.5, x3.3, x4, x5. Presumably these also have an impact on the resolution of the image projected on the sensor. Is that just a multiplicative effect, or is there further deterioration of the image resolution due to these?

On a related note, I found the information on the objectives' resolution (in the Olympus BX50 manual, page 24). Given that this information is, curiously, not on the Olympus website, if anybody needs it, just send me a private mail.
Regards,
Gabriel
In reply to this post by Peter van Loon
Dear Peter,
My experience is the following. For full compatibility with ImageJ you have two solutions:
- cameras with device adapters through Micro-Manager, see http://valelab.ucsf.edu/~nico/MMwiki/index.php/Device_Support; Micro-Manager can be seen as an ImageJ plugin driving hardware,
- the FireCam plugin, see http://www.phase-hl.com/cgi-bin/querypage.cgi?FireCamIJ_uk

Hoping this helps you.
Philippe

2011/2/11 Peter van Loon <[hidden email]>
> I am looking for a camera compatible with ImageJ.
> It should have the next specs: at least 2-3 Mega Pixel, colour and C-mount. [...]
In reply to this post by Gabriel Landini
2011/2/15 Gabriel Landini <[hidden email]>:
> There seem to be a variety of possible combinations of extra optics in the
> extension tubes that one could add: x0.25, x0.5, x0.75, x1, x1.2, x1.5,
> x2.5, x3.3, x4, x5. Presumably these also have an impact on the resolution
> of the image projected on the sensor. Is that just a multiplicative effect,
> or is there further deterioration of the image resolution due to these?

In a microscope the tube length (the distance between the principal planes of the objective and the tube lens) should be the sum of the focal length of the tube lens (f_Tl = 164.5 mm for most Zeiss systems) and the focal length of the objective (f_obj = 164.5/MAG = 2.61 mm for a magnification of 63). If this condition is fulfilled the system is called 'telecentric' and has the property that features in different z-slices don't change their size when you focus through the sample.

In our Zeiss Axiovert 200M the tube lenses (Zeiss calls them Optovar) 1.0x, 1.8x and 2.5x are all in one turret at the same distance from the objective (presumably 164.5 mm). These lenses look like singlets with small curvature, so I think the principal planes are close to the center of the lens. This, however, means that the system is telecentric only for the Optovar 1.0x. The Optovar 2.5x should actually be at a distance of 2.5 * 164.5 mm = 41 cm from the objective. I have never figured out a way to quantify the effect of non-telecentricity on the images.

Regards,
Martin

--
Martin Kielhorn
Randall Division of Cell & Molecular Biophysics
King's College London, New Hunt's House
Guy's Campus, London SE1 1UL, U.K.
tel: +44 (0) 207 848 6519, fax: +44 (0) 207 848 6435
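The arithmetic in Martin's post, collected in a short sketch. The 164.5 mm tube-lens focal length is his figure; the list of magnifications is illustrative.

```python
# Telecentric-spacing arithmetic from the post above. f_tl = 164.5 mm is
# the Zeiss tube-lens focal length quoted by Martin; the magnifications
# are just illustrative examples.
f_tl = 164.5  # mm

for mag in (10, 20, 63, 100):
    f_obj = f_tl / mag          # objective focal length, mm
    spacing = f_tl + f_obj      # telecentric objective-to-tube-lens distance
    print(f"{mag:3d}x: f_obj = {f_obj:5.2f} mm, telecentric spacing = {spacing:.1f} mm")

# A 2.5x Optovar has f = 2.5 * 164.5 mm, so for telecentricity it would
# need to sit roughly that far from the objective:
print(f"2.5x Optovar: ~{2.5 * f_tl:.0f} mm (~41 cm)")
```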
In reply to this post by David Gene Morgan
Hi,
1) I would like to give an argument for why the optimum sampling rate is slightly higher than the Nyquist rate, but not at three times the bandwidth of the signal. 2) I will also describe how to check whether you have adjusted your magnification correctly. 3) I give some more information regarding finite sensor pixel size.

In every system that measures light intensity you have noise. Even with an ideal noise-free detector, light consists of photons, and a measurement of photons is always plagued by shot noise. Your signal-to-noise ratio increases when you detect more photons. Brad and Dave suggest sampling at three times the bandwidth of the signal. But that means they have to use a smaller detector size, thus increasing the noise in the signal. They have not increased the information content, but they have increased the noise. That is certainly not optimal. Other explanations in this thread look at features in real space, e.g. determining the minimum between two spots which are just resolved. Instead, you can digitally resample the captured image into a bigger version by zero-padding the Fourier transform. This interpolation doesn't add noise, and you can see all features magnified, or rotate your image. These are most of my arguments for 1).

Now I describe 2). (I'm sorry that I'm not proficient enough in ImageJ to give a step-by-step instruction.)
a) Start with a square image you have captured with the microscope (I suggest 512x512 or 1024x1024).
b) Make sure the edges all have the same brightness. The discrete Fourier transform is circular, and jumps at the edges will lead to artifacts. Alternative ways to achieve this:
b1) If you use a fluorescence microscope, move a few sub-resolution beads (170 nm diameter, yellow-green, for a 63x oil immersion objective) into the center of the field of view. Capture a dark image (without illumination) as well and subtract it from the bead image.
b2) If you use a bright field microscope, capture a bright image A without the sample, an image B of a sample that only fills the center of the field of view, and a dark image C. Calculate (A-C)/(B-C). This calculation will correct for illumination non-uniformity.
b3) Multiply with a window that is one in the center and gradually reaches zero at the border of the image.
b4) Just try without correction and live with the artifacts.
c) Do a Fourier transform of the image and look at the log of the magnitude. If you oversampled the band-limited signal you should see a disk, with a diameter smaller than the Fourier image, around the center. If you undersampled the signal, the disk will either hit the edges and fold back into the Fourier image (aliasing), or information is everywhere and the Fourier image is bright everywhere (in this case you could try to defocus the image a little). For sampling at the Nyquist limit, the disk fits perfectly into the image.

If you happen to use bright field, or you have an objective with an adjustable aperture, you can try decreasing the diameter of the aperture. Your image should turn darker, lines should blur, and in the Fourier transform of the image the disk should become smaller. A sketch of this check in code follows below.
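Here is a minimal sketch of step c) run on a synthetic frame; the band limit and image size are assumptions chosen to mimic an oversampled system. With a real capture you would display the log spectrum in ImageJ and look for the dark annulus by eye.

```python
import numpy as np

# Fourier check of sampling (step c above), run on a synthetic stand-in:
# white noise low-passed to 0.3x the Nyquist frequency, mimicking an
# oversampled, band-limited capture. The cutoff and size are assumptions.
rng = np.random.default_rng(0)
noise = rng.normal(size=(512, 512))
fy = np.fft.fftfreq(512)[:, None]          # cycles/pixel, unshifted
fx = np.fft.fftfreq(512)[None, :]
bandlimit = np.hypot(fx, fy) < 0.3 * 0.5   # 0.5 cycles/pixel = Nyquist
frame = np.fft.ifft2(np.fft.fft2(noise) * bandlimit).real

# Log-magnitude spectrum, as one would display it in ImageJ:
spec = np.log1p(np.abs(np.fft.fftshift(np.fft.fft2(frame - frame.mean()))))

# An oversampled image shows a bright central disk and darkness outside:
yy, xx = np.mgrid[0:512, 0:512]
r = np.hypot(xx - 256, yy - 256)
print("inside the disk :", spec[r < 0.12 * 512].mean())
print("outside the disk:", spec[r > 0.20 * 512].mean())
```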
3) The other posters haven't referred to the fact that the camera pixel has a finite size. When speaking about sampling distance, one should refer to the pixel pitch P, the distance between the mid-points of two sensor pixels. The pixel size S is the light-sensitive area, and is related to the fill factor G of the sensor (or of the sensor plus microlenses). In general the pixel size is smaller than the pixel pitch.

First think of the problem in 1D. Sound cards usually contain a 'sample and hold circuit' (http://en.wikipedia.org/wiki/Sample_and_hold) that freezes the voltage u(t) at a certain point t_0 in time so that the ADC can digitize the value. In a camera this corresponds to a pixel that is infinitesimally small and therefore wouldn't collect much light. The question I will now try to answer is: "What happens to the sampled signal due to a finite sensor size?"

The effect can be understood by looking into Fourier space. The image is sampled by a 2D grid with periodicity P. The Fourier transform of a grid is another grid (http://en.wikipedia.org/wiki/Dirac_comb). One can introduce finite sensor size by convolving the sensor grid of period P with a rectangle of size S. In the Fourier transform this results (by the convolution theorem) in a multiplication by a sinc function (the Fourier transform of a rectangle). The sinc function is 1 in the center, i.e. the average pixel intensity of the whole image is the same, independent of the pixel size S. At all other points of the Fourier plane the sinc function is smaller than 1. That means the integrating effect of the finite pixel size reduces the camera's ability to capture high frequency content. Fortunately the reduction is moderate, even for a fill factor of 100% with pixel size equal to pixel pitch (S = P). If you know the geometry of your sensor you could correct for this effect, but I haven't seen anyone doing this for imaging. Maybe it is used when people capture holograms with cameras; that is the problem where I first thought about the effect of finite pixel size.

Regards,
Martin

--
Martin Kielhorn
Randall Division of Cell & Molecular Biophysics
King's College London, New Hunt's House
Guy's Campus, London SE1 1UL, U.K.
tel: +44 (0) 207 848 6519, fax: +44 (0) 207 848 6435
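To put a number on "moderate", here is a sketch of the pixel-aperture sinc factor (the fill-factor values are illustrative): at the Nyquist frequency a 100% fill factor attenuates the signal to 2/pi, about 0.64.

```python
import numpy as np

# Pixel-aperture MTF: integrating over an aperture of width S multiplies
# the spectrum by sinc(f * S). np.sinc(x) = sin(pi x)/(pi x), so with f
# in cycles per pixel pitch and S in pitch units, the argument is f * S.
def pixel_aperture_mtf(f_cycles_per_pitch, fill_linear=1.0):
    return np.abs(np.sinc(f_cycles_per_pitch * fill_linear))

nyquist = 0.5  # cycles per pixel pitch
for fill in (1.0, 0.5):  # illustrative linear fill factors
    print(f"linear fill {fill:.0%}: attenuation at Nyquist = "
          f"{pixel_aperture_mtf(nyquist, fill):.3f}")
# 100% fill gives sin(pi/2)/(pi/2) = 2/pi ~ 0.637 -- moderate, as stated.
```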