Re: Method to get zoomed in or zoomed out AWT image from and ImagePlus or ImageWindow?

Posted by Frederic V. Hessman on
URL: http://imagej.273.s1.nabble.com/Method-to-get-zoomed-in-or-zoomed-out-AWT-image-from-and-ImagePlus-or-ImageWindow-tp3688583p3688590.html

On 15 Apr 2010, at 12:21, Johannes Schindelin wrote:

>> Some older CCD cameras used detectors that were not squares (in
>> digital spectroscopy, for instance, one doesn't really care), but
>> since imaging is now driven by digital photography, most pixels are
>> indeed squares.
>
> No.

Oh, yes!

> A pixel will in almost all cases be a point sample of a signal, and
> that point sample can be modeled mathematically as a convolution of
> the signal with a kernel, the so-called point spread function (which
> does not need to be independent of the coordinate, or for that
> matter, of the signal in general).

No, digital image sampling is NEVER a point sample: it is ALWAYS an
area sample, since there are no point detectors (well, practically
none).  The dead areas between the sensitive parts of CCD/CMOS pixels
have become extremely small, so that one can practically consider an
image to be rows and rows of perfectly square areas (not points), all
lined up.
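
The distinction is easy to demonstrate numerically.  Here is a rough
1-D sketch in NumPy (not ImageJ's Java API; the signal and pixel width
are made up for illustration): a point sample reads the signal at one
coordinate, while an area sample averages it over the pixel's extent,
and for a signal varying on scales comparable to the pixel the two
differ substantially.

```python
import numpy as np

def point_sample(signal, x):
    """Point sample: the signal's value at a single coordinate."""
    return signal(x)

def area_sample(signal, x_left, width, n=2000):
    """Area sample: the mean of the signal over the pixel's extent,
    approximated here by averaging many sub-samples."""
    xs = np.linspace(x_left, x_left + width, n)
    return np.mean(signal(xs))

# A signal varying on scales comparable to the pixel width:
signal = lambda x: np.sin(2 * np.pi * x)

p = point_sample(signal, 0.0)        # value at the pixel's left edge: 0
a = area_sample(signal, 0.0, 0.25)   # mean over a pixel of width 0.25: ~2/pi
```

For this pixel the area sample comes out near 2/pi (about 0.64), while
the point sample at the edge is exactly 0, so treating one as the
other is clearly wrong here.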

The only question is whether
        1. the area sample is uniform (variations in the sub-pixel quantum
efficiency are usually negligible, but an important contribution if you
need extreme positioning accuracy, e.g. in astrometry), and
        2. the object being imaged has a resolution poorer than (good),
equal to (marginal), or better than (not good) the sampling imposed by
the pixel area.  In the first case you can define a PSF, in the second
you may not be able to, and in the third you've lost the information
needed to know what the PSF looks like (at least at pixel-to-subpixel
scales).
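
Case 2's information loss can be illustrated with a small sketch (my
own example, not from the thread): integrate a 1-D Gaussian source
over unit-width pixels.  When the source is far narrower than a pixel,
two quite different widths produce essentially identical pixel values,
so the width (and hence the PSF shape) cannot be recovered from the
image.

```python
import numpy as np
from math import erf, sqrt

def pixel_fluxes(center, sigma, edges):
    """Flux collected in each pixel: the integral of a unit-total
    Gaussian over each pixel's area (1-D, via the error function)."""
    cdf = np.array([0.5 * (1 + erf((e - center) / (sigma * sqrt(2))))
                    for e in edges])
    return np.diff(cdf)

edges = np.arange(-3.0, 4.0)                # pixels of width 1

narrow   = pixel_fluxes(0.5, 0.01, edges)   # structure far below pixel scale
narrower = pixel_fluxes(0.5, 0.02, edges)   # half again as narrow
# Both dump essentially all flux into one pixel: the profiles are
# indistinguishable, so the sub-pixel width information is gone.
```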

> And even with modern CCDs, that point spread function can be modeled
> better by a 2D Gaussian bell than by a 2D rectangular function. The
> most important point to keep in mind is that adjacent point samples
> have an overlapping support from which they draw information.

An observed PSF is well modelled by an analytic function like a 2-D
Gaussian bell, or by an empirical 2-D distribution, only if the object
being imaged has a resolution much poorer than that of the detector
(i.e. structures much larger than the sizes of the pixels).

The simplest way to think about it is to imagine two extreme cases:
        1. big pixels imaging an infinitesimal point source, and
        2. small pixels imaging a very extended source that is
featureless on small scales.
The first case is hopeless: that's not imaging, that's 0-D photometry
(like using old-fashioned phototubes).  The second case is good for
things like deconvolution.  The realistic case is that one is
sometimes near the limit of losing the spatial information by having
pixels about the size of the things one is trying to measure: one
often doesn't want to waste pixels by imaging on scales where there's
no information (or to spend money on lots of pixels one doesn't need).

You can pretend you have point sampling if your pixels are ~5x smaller
than the structures you're measuring, so that the changes in the PSF
across a pixel are small and the 2-D rectangular sampling function
imposed by the pixel areas is close to a linear (or bilinear)
interpolation, but you're still just pretending.
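
The ~5x rule of thumb can be checked numerically.  Reusing the 1-D
sketch from before (again my own illustration, with an arbitrary test
signal): the error between an area sample and the point sample at the
pixel's center shrinks rapidly as the pixel gets small compared to the
structure.

```python
import numpy as np

signal = lambda x: np.sin(2 * np.pi * x)   # structure on scales of ~1

def area_sample(x_left, width, n=2000):
    """Mean of the signal over the pixel's extent."""
    xs = np.linspace(x_left, x_left + width, n)
    return np.mean(signal(xs))

# Error of the area sample vs. the point sample at the pixel center,
# for a pixel comparable to the structure and one ~5x smaller:
err_big   = abs(area_sample(0.0, 0.5) - signal(0.25))   # pixel width 0.5
err_small = abs(area_sample(0.2, 0.1) - signal(0.25))   # pixel width 0.1
```

The wide pixel misses the peak value by more than a third, while the
5x-smaller pixel is off by under two percent: small, but still not
zero, which is the "just pretending" part.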

Thus, the pixelized zooming in and out shown by ImageJ is EXACTLY what
is needed in general (for perfectly square pixels): any other
representation demands knowing what the PSF looks like in an amount of
detail not present in the image itself.
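
For concreteness, that pixelized zoom is just nearest-neighbour pixel
replication; a minimal NumPy sketch (not ImageJ's actual rendering
code, which lives in its Java canvas classes) would be:

```python
import numpy as np

def pixel_zoom(img, factor):
    """Nearest-neighbour zoom: replicate each pixel into a
    factor x factor block, rendering pixels as the square areas
    they really are instead of interpolating between them."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

img = np.array([[1, 2],
                [3, 4]])
zoomed = pixel_zoom(img, 2)   # 4x4 image of four 2x2 blocks
```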

Rick