Posted by Chu, Calvin-2 on Feb 15, 2011; 12:00am
URL: http://imagej.273.s1.nabble.com/affordable-camera-suggestions-tp3685628p3685639.html
Our lab bought a new Tucsen cooled CCD camera for about $700 USD through
eBay. Be wary of purchasing lab equipment through eBay, though, because it
may well be stolen. We haven't tested it out yet. The cooling element is a
fan; I prefer thermoelectric cooling, especially for extremely high
resolution images.
________________________________________
From: ImageJ Interest Group [[hidden email]] On Behalf Of David Gene Morgan [[hidden email]]
Sent: Monday, February 14, 2011 5:41 PM
To: [hidden email]
Subject: Re: affordable camera suggestions
Hi,
I'm responding to the question of whether there is a reason for the 3x
sampling reported to be commonly used by microscopists. Since no-one
else has brought the following up, I decided it might be useful here
(not in terms of buying a camera, but as background for the
discussion). And I'll apologize in advance if this seems too obvious
and/or pedantic for anyone/everyone.
To start, forget about the optics, the sensor and everything else about
producing a digital image, and only think about the image as a digital
signal. For this purpose, you can even assume that everything leading
up to the image itself is "perfect" and that the only loss of
resolution/signal when going from the original object to the image is
the fact that the final image is digital (not sampled using infinitely
small steps).
When one does this, the "3x" sampling referred to in previous postings
becomes relatively easy to explain, and not so anecdotal and subjective.
People have mentioned the Nyquist sampling frequency but no-one has
gone into any detail about it. This concept comes from information
theory, and the microscopy community that I am familiar with (electron
microscopy) lumps a lot of not terribly rigorous ideas about it into
phrases such as the Nyquist limit, Shannon sampling (named after Claude
Shannon, one of the fathers of information theory), and so on.
There is a lot of information about these ideas in various places on
Wikipedia, and to quote from there, the Nyquist-Shannon sampling theorem
says:
If a function x(t) contains no frequencies higher than B hertz, it is
completely determined by giving its ordinates at a series of points
spaced 1/(2B) seconds apart.
The converse is also true: if the sampling interval is 1/(2B) seconds,
the maximum possible frequency contained in a (sampled) function y(t)
is B hertz.
The word "possible" is important in that last sentence: the original
function x(t) may or may not have frequencies higher than B hertz, but
the function y(t), sampled at 1/(2B), CANNOT have any frequencies higher
than B (that's the point of the theorem). Any frequencies originally
present in x(t) below B will be preserved in the sampled signal, and if
there were no frequencies below B present (a totally aperiodic signal,
for example), the sampled function will also not have any lower
frequencies...
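To make this concrete, here is a tiny numerical sketch in Python/NumPy
(my own toy example, with arbitrary frequencies and duration, not
anything from this thread): a tone below the band limit B survives
sampling at intervals of 1/(2B), while a tone above B shows up aliased
at a lower frequency.

    import numpy as np

    # Sample a pure tone at interval dt and recover its apparent frequency
    # from the peak of the FFT. All values below are purely illustrative.
    def apparent_frequency(true_freq_hz, dt, duration_s=1.0):
        t = np.arange(0.0, duration_s, dt)
        spectrum = np.abs(np.fft.rfft(np.sin(2 * np.pi * true_freq_hz * t)))
        freqs = np.fft.rfftfreq(len(t), d=dt)
        return freqs[np.argmax(spectrum)]

    B = 10.0                             # band limit, hertz
    dt = 1.0 / (2 * B)                   # the Nyquist interval, 1/(2B) seconds
    print(apparent_frequency(8.0, dt))   # 8.0 -- below B, recovered correctly
    print(apparent_frequency(14.0, dt))  # 6.0 -- above B, aliased downward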
For a digital image, the "function" is the image itself (a 2d function,
but that doesn't matter), the sampling is the pixel size (how much of
the original object is represented by a pixel in the digital image, a
spatial instead of a temporal sampling) and so the maximum resolution
that can be contained in a digital image with pixel size N is 1/(2N).
This has nothing to do with the number of pixels in an image, and
everything to do with the pixel size itself (and note that this is not
the pixel size of the sensor, but the apparent pixel size in the image).
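In other words (a trivial bit of arithmetic of my own, using the nm
numbers that appear again below):

    # Finest resolvable feature spacing for an image sampled with pixels
    # of (apparent) size n; units in = units out. Purely illustrative.
    def min_resolvable_spacing(n):
        return 2 * n                     # the frequency limit is 1/(2n)

    print(min_resolvable_spacing(500))   # 500 nm pixels -> 1000 nm features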
One way to have this make sense is to think about resolution as meaning
"the ability to distinguish that two features are indeed two features
and not one." This is related to the Rayleigh criterion, and brings
into the discussion things like numerical aperture when dealing with the
resolution of lenses and cameras. But since we are assuming that all
this is perfect at the moment, all we care about is whether we can tell
two objects from one in the final digital image.
If the features are close enough, a digital image will show them as a
single large feature (i.e., it can't resolve the large feature into two
separate, smaller things). In order to tell that there are really two
features beside one another in an image, one needs at a minimum a single
pixel between the two features that clearly belongs to neither. If the
pixel size is small enough, there will be a pixel between the two that
distinguishes one from the other, and it is this "second pixel" as one
travels from digital point to digital point that says the resolution is
sufficient to tell the two features are not a single large "thing."
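A one-dimensional toy example (entirely my own construction, in Python)
shows that "second pixel" doing its job:

    import numpy as np

    # Two point features in a 1d pixel array. With a dark pixel between
    # them they read as two features; without one they merge into a blob.
    resolved   = np.array([0, 1, 0, 1, 0])
    unresolved = np.array([0, 1, 1, 0, 0])

    def count_features(signal, threshold=0.5):
        above = signal > threshold
        # A feature starts wherever the signal rises through the threshold.
        return int(above[0]) + int(np.sum(above[1:] & ~above[:-1]))

    print(count_features(resolved))      # 2
    print(count_features(unresolved))    # 1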
At one extreme, if the "features" are only a pixel wide (or better
said, if the signal from a feature can be contained in a single pixel),
this all means that one needs 2 pixels (in all directions, but think
about a 1d case in this context) to determine that one is seeing a
single, isolated feature and not two adjacent ones: the first pixel has
the signal, the second pixel says that signal isn't there and the third
pixel could either say a signal is still not there or that there is a
new signal present (the second feature). To put this back into the
framework of Nyquist-Shannon sampling, the point-like features we want
to see have size x (equal to a pixel in the digital image) and the
resolution that defines the ability to tell that there are two such
features side by side is 1/(2x). This is all a TERRIBLY non-rigorous
description of the Nyquist-Shannon sampling theorem but it makes the
point reasonably well and fits with some of what has been said before.
This all might seem to indicate that the "3x" value noted in previous
postings should be "2x" instead and indeed, the maximum possible
resolution in a digital image is 1/(2x) (the Nyquist-Shannon sampling
theorem again). However, if a digital image is manipulated in just
about any way that affects its sampling (pixellation), the image loses
information. For example, transformations such as rotation by amounts
that aren't multiples of 90 degrees, or shifts in x and/or y by
fractions of a pixel, require an interpolation of the image, and that
interpolation leads to information loss. Another way to describe this
is to say that the original maximum resolution of 1/(2x) is degraded
after such image transformations even though the sampling is still x.
This in turn means that if the resolution of a digital image is right
at the limit of what the user needs in order to see a feature, then
after relatively common image transformations the resolution will no
longer be sufficient and the feature(s) will vanish.
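One can watch this happen with a few lines of Python (my own sketch
using scipy.ndimage; the test image and angle are arbitrary choices):
rotating an image by a non-multiple of 90 degrees and then rotating it
back does not restore the original pixel values, because each rotation
interpolates.

    import numpy as np
    from scipy import ndimage

    rng = np.random.default_rng(0)
    img = rng.random((128, 128))      # stand-in for a finely sampled image

    # Rotate by 17 degrees and back again. Both steps interpolate, so
    # content near the 1/(2x) limit is irreversibly smoothed away.
    rotated  = ndimage.rotate(img, 17, reshape=False, mode='nearest')
    restored = ndimage.rotate(rotated, -17, reshape=False, mode='nearest')

    print(np.abs(restored - img).mean())  # clearly nonzero: information lost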
A reasonable solution to this is to over-sample so that the desired
resolution is well within the 1/(2x) limit imposed by sampling using
steps of x. To give a concrete example, if the pixels in a digital
image represent 500 nm, the maximum resolution is 1/(2*500) and one can
just distinguish 1000 nm features. If the goal is to see such features,
it would be better NOT to live on the edge of detectability and to
sample at ~330 nm/pixel (so that a resolution limit of 1/(3x) allows
the 1000 nm features to be seen), or even 250 nm (where 1/(4x) gives the
desired resolution to see a 1000 nm feature). In this case, we often
say that we want 1000 nm resolution, and in order to obtain that, we
need to choose our sampling distance x so that 3x (or even 4x) equals
1000 nm (instead of the absolute minimum of 2x).
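The same numbers as code, for what it's worth (nothing new here, just
the arithmetic above restated):

    # Pixel size needed to see features of a given size at a chosen
    # over-sampling factor (2 = the bare Nyquist minimum).
    def pixel_size_for(feature_size, oversample):
        return feature_size / oversample

    print(pixel_size_for(1000, 2))   # 500 nm/pixel: living on the edge
    print(pixel_size_for(1000, 3))   # ~333 nm/pixel: the common "3x" choice
    print(pixel_size_for(1000, 4))   # 250 nm/pixel: safer still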
One could obviously over-sample by a factor of (say) 5 and not even
think about sampling issues having an effect on feature resolution.
However, the 5x over-sampled image has 25x more pixels, and takes 25x
more computer memory, storage space, etc. This massive over-sampling
can be the "empty resolution" that people sometimes mention (though
other types of empty resolution are also possible). Over-sampling at
1.5x only increases things by a small factor (2.25x) and still results
in virtually no loss of feature resolution (and no empty resolution).
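(The cost arithmetic, restated: pixel count grows as the square of the
linear over-sampling factor.)

    for factor in (1.5, 2.0, 3.0, 5.0):
        print(factor, factor ** 2)   # e.g. 1.5x -> 2.25x pixels, 5x -> 25x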
The above discussion is certainly the root of the "3x sampling"
commonly mentioned in the cryo-electron microscopy field, and I suspect
it plays a major part in the sampling schemes of most other digital
imaging processes. LOTS of other things affect the resolution in a
digital image, and these can be as important as what is discussed here.
But since it is theoretically impossible to get resolution that is
better than 1/(2x) from an image sampled in steps of x, one is most
often better served by sampling a bit more finely: If one wants a
resolution of 1/Q, sample at Q/3 instead of Q/2!
Brad Amos wrote:
> Dear Gabriel,
> The 3x is anecdotal and subjective, but I can account
> for it by reference to my previous message about the intensity profile of
> the image of two point objects separated by the Rayleigh distance. This
> curve is shown in practically every textbook dealing with microscope optics.
> At the minimum Nyquist sampling frequency, the intensity profile across the
> resolved region will consist of just three points: two high points
> corresponding to the peaks and a low point in between. Only at about 3x this
> frequency will one begin to see something of the shape of the peaks and of
> the valley in between. And, of course, a high signal to noise ratio,
> requiring many hundreds of photons coming from each resolved point, is
> necessary to see the dip at all.
> Brad
>
> On 12 February 2011 17:06, Gabriel Landini <[hidden email]> wrote:
>
>> On Saturday 12 Feb 2011, you wrote:
>>> You will not find resolution values in the scant
>>> literature supplied by microscope manufacturers,
>> Hi Brad,
>> The Olympus objectives we use came with some blurb where it was stated.
>> However, I just checked and strangely they do not provide this information
>> in their co.uk website. The objectives have their specifications listed but
>> not resolving power.
>>
>>> So, if we wish to capture a 20 x 20 mm square area of the intermediate
>>> image (a reasonably large fraction of what we see in an eyepiece) we need
>>> 0.9, 2 and 1.5 megapixels minimum (i.e. to record full detail without any
>>> empty magnification).
>>> In practice, microscopists usually choose to work with about
>>> 3x the linear magnification at the Nyquist minimum, so this means that the
>>> preferred number of pixels would be nine times this, unless the camera was
>>> recording a reduced area of the intermediate image. This means that the
>>> high pixel numbers of the modern DSLRs are not overkill.
>> My point (maybe I did not articulate it well) is that the data being stored
>> in such a large number of pixels would not be adding anything in terms of
>> image detail, and yet it will require larger storage and more processing.
>> If this is not taken into consideration, one risks processing and reporting
>> morphological detail which could not be resolved in the first place. This is
>> of course obvious to experienced microscopists, but perhaps not to those who
>> did not think of it in the first place.
>>
>> Let's use the example of fractal objects imaged with such a level of empty
>> magnification. Applying the yardstick method, their perimeters measured with
>> small yardstick sizes will appear smoother than they really are, because
>> detail at sizes close to the image pixels cannot be resolved.
>>
>> I must confess that I wasn't aware of the 3x preference by microscopists.
>> Is there a reason for this number?
>> Regards,
>>
>> Gabriel
>>
>
>
>
--
David Gene Morgan
cryoEM Facility
320C Simon Hall
Indiana University Bloomington
812 856 1457 (office)
812 856 3221 (EM lab)
http://bio.indiana.edu/~cryo