ImagePlus.getImage() gives me a nice AWT image. But it always has the
dimensions of the 100%-zoomed window.

Is there an equivalent method that will give an AWT image of the zoomed-in
or zoomed-out image shown in a zoomed ImageCanvas? It looks like
ImageCanvas.getImage() gives me an ImagePlus, rather than an AWT image.

Maybe I'm missing something obvious... but I would love someone to show me
an answer.

Bill
Hi,

On Tue, 13 Apr 2010, Bill Mohler wrote:

> ImagePlus.getImage() gives me a nice AWT image. But it always has the
> dimensions of the 100%-zoomed window.
>
> Is there an equivalent method that will give an AWT image of the
> zoomed-in or zoomed-out image shown in a zoomed ImageCanvas? It looks
> like ImageCanvas.getImage() gives me an ImagePlus, rather than an AWT
> image.

As far as I can tell, ImageCanvas does not expose the image painted into
the canvas. You'll have to resize it yourself.

On the other hand, you _should_ resize it yourself in any case. ImageJ
uses AWT to resize the image, which is wrong, because it pretends that
pixels are little squares. For zooming out, use something like

  http://pacific.mpi-cbg.de/wiki/index.php/Downsample

instead (the example images on that page show why). For zooming in, I am
not aware of an appropriate upsampling plugin, but I am sure there is one.

Ciao,
Johannes
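The aliasing problem behind Johannes's advice can be sketched in a few lines of plain Java (an illustrative toy in 1-D, not the ImageJ API or the Downsample plugin itself): naively keeping every other sample aliases high frequencies, while averaging first acts as a crude low-pass filter.

```java
// Toy 1-D downsampling sketch. Naive subsampling keeps every other sample
// and aliases high frequencies; averaging neighbouring samples first acts
// as a crude low-pass (box) filter, which is the point of proper downsampling.
public class DownsampleSketch {
    // Naive: simply drop every other sample.
    static double[] subsample(double[] s) {
        double[] out = new double[s.length / 2];
        for (int i = 0; i < out.length; i++) out[i] = s[2 * i];
        return out;
    }

    // Filtered: average each pair before keeping it (box filter of width 2).
    static double[] downsample(double[] s) {
        double[] out = new double[s.length / 2];
        for (int i = 0; i < out.length; i++) out[i] = 0.5 * (s[2 * i] + s[2 * i + 1]);
        return out;
    }

    public static void main(String[] args) {
        // The highest representable frequency: +1, -1, +1, -1, ...
        double[] nyquist = {1, -1, 1, -1, 1, -1, 1, -1};
        // Naive subsampling turns it into a constant +1 "signal": pure aliasing.
        System.out.println(java.util.Arrays.toString(subsample(nyquist)));
        // Averaging first correctly reports the mean intensity, 0.
        System.out.println(java.util.Arrays.toString(downsample(nyquist)));
    }
}
```

A real downsampler would blur with a kernel matched to the zoom factor (often Gaussian) rather than a width-2 box, but the failure mode of the naive path is the same.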
On 13 Apr 2010, at 18:21, Johannes Schindelin wrote:

> For zooming in, I am not aware of an appropriate upsampling plugin, but
> I am sure there is one.

For zooming in (enlarging), simply use Image>Adjust Size
(ij.plugin.Resizer) with Bilinear (faster) or Bicubic (better)
interpolation.

Michael
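The linear interpolation behind "Bilinear" resizing can be sketched in plain Java (a 1-D illustration under the assumption that samples sit at integer positions; in ImageJ itself one would use the Resizer plugin or ImageProcessor's interpolation settings rather than code like this):

```java
// Sketch of the linear interpolation underlying "bilinear" enlarging, in 1-D:
// each new sample is a distance-weighted blend of its two nearest old samples.
public class UpsampleSketch {
    // Linearly interpolate s at fractional position x, 0 <= x <= s.length - 1.
    static double lerpAt(double[] s, double x) {
        int i = (int) Math.floor(x);
        if (i >= s.length - 1) return s[s.length - 1]; // clamp at the right edge
        double f = x - i;
        return (1 - f) * s[i] + f * s[i + 1];
    }

    // Upsample by an integer factor; new samples fall between the old ones.
    static double[] upsample(double[] s, int factor) {
        double[] out = new double[(s.length - 1) * factor + 1];
        for (int j = 0; j < out.length; j++)
            out[j] = lerpAt(s, (double) j / factor);
        return out;
    }

    public static void main(String[] args) {
        double[] s = {0, 2, 4};
        // Doubling inserts the midpoints 1 and 3: [0.0, 1.0, 2.0, 3.0, 4.0]
        System.out.println(java.util.Arrays.toString(upsample(s, 2)));
    }
}
```

Bicubic interpolation replaces the two-point weighted average with a four-point cubic fit, which is why it is slower but smoother.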
Hi,

On Tue, 13 Apr 2010, Michael Schmid wrote:

> On 13 Apr 2010, at 18:21, Johannes Schindelin wrote:
>
> > For zooming in, I am not aware of an appropriate upsampling plugin,
> > but I am sure there is one.
>
> For zooming in (enlarging), simply use Image>Adjust Size
> (ij.plugin.Resizer) with Bilinear (faster) or Bicubic (better)
> interpolation.

Only if you ignore my comment about pixels not being little squares, of
course.

Ciao,
Dscho
Dscho-

I've addressed my original problem by using ImageProcessor.resize().

But I'm stuck mentally on your argument about the squares. What's the
alternative? Circles? I thought that the pixels in our cameras were
actually squares. Are they a different shape, either physically or
functionally? Does anyone do math to account for their actual shape?
Maybe in astronomy?

Very interested in the details of what's correct, even if my own imaging
never reaches the precision needing this sort of rigor.

Thanks,
Bill

Johannes Schindelin wrote:
> Hi,
>
> On Tue, 13 Apr 2010, Michael Schmid wrote:
>
>> On 13 Apr 2010, at 18:21, Johannes Schindelin wrote:
>>
>>> For zooming in, I am not aware of an appropriate upsampling plugin,
>>> but I am sure there is one.
>>
>> For zooming in (enlarging), simply use Image>Adjust Size
>> (ij.plugin.Resizer) with Bilinear (faster) or Bicubic (better)
>> interpolation.
>
> Only if you ignore my comment about pixels not being little squares, of
> course.
>
> Ciao,
> Dscho
Bill,

the picture elements of a digital image are samples, and as such they are
numbers that have no extent. Mathematically they are delta-functions with
a certain integral value that denotes the value of a pixel.

The way e.g. a digital camera captures images can be described as
averaging the analog image in the sensor plane over the light-sensitive
area of a single photosensitive element of the sensor. This low-pass
filtered image is then sampled by a 2D array of delta-functions.

Now why is that? The description takes into account that each of the
photosensitive elements of the camera sensor delivers a single value
(photo-current or electric charge) according to the _mean_ light intensity
that is collected by its tiny area. This averaging amounts to a low-pass
filtering of the light distribution in the sensor plane of your camera.

> Dscho-
>
> I've addressed my original problem by using ImageProcessor.resize().
>
> But I'm stuck mentally on your argument about the squares. What's the
> alternative? Circles? I thought that the pixels in our cameras were
> actually squares. Are they a different shape, either physically or
> functionally? Does anyone do math to account for their actual shape?
> Maybe in astronomy?
>
> Very interested in the details of what's correct, even if my own
> imaging never reaches the precision needing this sort of rigor.
>
> Thanks,
> Bill

HTH

--
Herbie

------------------------
<http://www.gluender.de>
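The two-step model described above (area-average over the sensitive area, then record one number per pixel) can be sketched in plain Java. The continuous "scene" function, the pixel width, and the quadrature step count are all illustrative assumptions, not anything from a real sensor:

```java
// Toy model of image capture: a pixel's value is the MEAN of a continuous
// intensity over the pixel's sensitive area (a low-pass filtering step),
// and that mean is then recorded as a single, extent-less number.
public class SensorSketch {
    // Hypothetical continuous 1-D scene: intensity ramps linearly with position.
    static double scene(double x) { return 3.0 * x; }

    // Average the scene over [left, left + width] via midpoint-rule quadrature.
    static double areaAverage(double left, double width, int steps) {
        double sum = 0;
        for (int k = 0; k < steps; k++)
            sum += scene(left + (k + 0.5) * width / steps);
        return sum / steps;
    }

    public static void main(String[] args) {
        // A pixel covering [0, 1] records the mean intensity 1.5, which is
        // not the intensity at any single point (e.g. scene(0) = 0, scene(1) = 3).
        System.out.println(areaAverage(0.0, 1.0, 1000));
    }
}
```

The recorded value is a delta-function weight in the sampled image: it tells you the integral over the aperture, not the shape of the light distribution inside it.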
Hi,

On Wed, 14 Apr 2010, Bill Mohler wrote:

> I've addressed my original problem by using ImageProcessor.resize().

... which is wrong ;-)

> But I'm stuck mentally on your argument about the squares. What's the
> alternative? Circles?

Pixels are sampled data. They are the result of a convolution of the
actual signal with a point spread function. So pixels do not even have
proper shapes, as they do not have clear-cut boundaries!

> I thought that the pixels in our cameras were actually squares. Are
> they a different shape, either physically or functionally? Does anyone
> do math to account for their actual shape? Maybe in astronomy?

Everybody does, otherwise you end up with aliasing effects, as
demonstrated on the website I referred you to.

Ciao,
Johannes
Some older CCD cameras used detectors that were not squares (in digital
spectroscopy, for instance, one doesn't really care), but since imaging is
now driven by digital photography, most pixels are indeed squares.

In astronomy, where the "World Coordinate System" behind the pixels is
often of interest, there are very elaborate means for calibrating pixel
versus physical space, even to the point of all-sky pixelisation
(HEALPix). In astronomical photometry, depending upon whether one wants to
conserve surface brightness or total number of photons, the question of
how to interpolate or resample is a tricky one indeed.

Rick

On 14 Apr 2010, at 16:08, Bill Mohler wrote:

> Dscho-
>
> I've addressed my original problem by using ImageProcessor.resize().
>
> But I'm stuck mentally on your argument about the squares. What's the
> alternative? Circles? I thought that the pixels in our cameras were
> actually squares. Are they a different shape, either physically or
> functionally? Does anyone do math to account for their actual shape?
> Maybe in astronomy?
>
> Very interested in the details of what's correct, even if my own
> imaging never reaches the precision needing this sort of rigor.
>
> Thanks,
> Bill
Hi,

On Thu, 15 Apr 2010, Frederic Hessman wrote:

> Some older CCD cameras used detectors that were not squares (in digital
> spectroscopy, for instance, one doesn't really care), but since imaging
> is now driven by digital photography, most pixels are indeed squares.

No.

> In astronomy, where the "World Coordinate System" behind the pixels is
> often of interest, there are very elaborate means for calibrating pixel
> versus physical space, even to the point of all-sky pixelisation
> (HEALPix). In astronomical photometry, depending upon whether one wants
> to conserve surface brightness or total number of photons, the question
> of how to interpolate or resample is a tricky one indeed.

Our discussion is not about the arrangement of pixels. It is about their
shapes. A pixel will in almost all cases be a point sample of a signal,
and that point sample can be modeled mathematically as a convolution of
the signal with a kernel, the so-called point spread function (which need
not be independent of the coordinate, or for that matter, of the signal
in general).

And even with modern CCDs, that point spread function can be modeled
better by a 2D Gaussian bell than by a 2D rectangular function. The most
important point to keep in mind is that adjacent point samples have
overlapping support from which they draw information.

Ciao,
Johannes
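Johannes's model (each sample is the signal convolved with a Gaussian PSF, so neighbouring samples draw on overlapping regions) can be sketched in plain Java. The discrete scene, the sigma, and the sampling positions are illustrative assumptions:

```java
// Sketch of point sampling through a Gaussian PSF: each sample is a
// Gaussian-weighted mean of the underlying scene, so adjacent samples
// have overlapping support and are not statistically independent.
public class PsfSketch {
    // Value of the sample centred at c: Gaussian-weighted mean of the scene.
    static double sampleAt(double[] scene, double c, double sigma) {
        double num = 0, den = 0;
        for (int x = 0; x < scene.length; x++) {
            double w = Math.exp(-(x - c) * (x - c) / (2 * sigma * sigma));
            num += w * scene[x];
            den += w;
        }
        return num / den;
    }

    public static void main(String[] args) {
        double[] scene = {0, 0, 0, 10, 0, 0, 0};  // a single bright point at x = 3
        // The sample centred on the source is brightest, but its neighbours
        // also "see" it, because their PSF support overlaps the source.
        System.out.printf("%.3f %.3f %.3f%n",
                sampleAt(scene, 2.0, 1.0),
                sampleAt(scene, 3.0, 1.0),
                sampleAt(scene, 4.0, 1.0));
    }
}
```

A rectangular (box) kernel would be the "pixels are little squares" model; swapping the Gaussian weight for a box of width 1 here makes the neighbours see nothing at all, which is exactly the disagreement in this thread.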
On 15 Apr 2010, at 12:21, Johannes Schindelin wrote:

>> Some older CCD cameras used detectors that were not squares (in
>> digital spectroscopy, for instance, one doesn't really care), but
>> since imaging is now driven by digital photography, most pixels are
>> indeed squares.
>
> No.

Oh, yes!

> A pixel will in almost all cases be a point sample of a signal, and
> that point sample can be modeled mathematically as a convolution of the
> signal with a kernel, the so-called point spread function (which does
> not need to be independent of the coordinate, or for that matter, of
> the signal in general).

No, digital image sampling is NEVER a point sample: it's ALWAYS an area
sample: there are no point detectors (well, practically). The dead areas
between the sensitive parts of the CCD/CMOS pixels have become extremely
small, so that one can practically consider the images to be rows and rows
of perfectly square areas (not points), all lined up. The only question is
whether

1. the area sample is uniform (variations in the sub-pixel quantum
   efficiency are usually negligible, but an important contribution if
   you need extreme positioning accuracy, e.g. in astrometry);

2. the object being imaged has a resolution poorer than (good), equal to
   (marginal) or better than (not good) the pixel sampling caused by the
   pixel area.

In the first of these resolution cases you can define a PSF, in the
second you may not, and in the third you've lost the information needed
to know what the PSF looks like (at least at pixel to sub-pixel scales).

> And even with modern CCDs, that point spread function can be modeled
> better by a 2D Gaussian bell than by a 2D rectangular function. The
> most important point to keep in mind is that adjacent point samples
> have an overlapping support from which they draw information.

An observed PSF is well modelled by a mathematical function like a 2D
Gaussian bell, or by a non-mathematical empirical 2-D distribution, only
if the object being imaged has a resolution much poorer than that of the
detector (i.e. structures much larger than the sizes of the pixels).

The simplest way to think about it is to imagine two extreme cases:

1. big pixels imaging an infinitesimal point source;

2. small pixels imaging a very extended and, on small scales, featureless
   source.

The first case is hopeless: that's not imaging, that's 0-D photometry
(like using old-fashioned phototubes). The second case is good for things
like deconvolution. The realistic case is that one is sometimes near the
limit of losing the spatial information by having pixels about the size of
the things one is trying to measure: one often doesn't want to waste
pixels by imaging on scales where there's no information (or one doesn't
want to spend money on lots of pixels one doesn't need).

You can pretend you have point sampling if your pixels are ~5x smaller
than the structures you're measuring, so that the changes in the PSF are
small and the 2-D rectangular sampling function imposed by the pixel areas
is close to a linear (or bi-linear) interpolation, but you're still just
pretending.

Thus, the pixelized zooming in and out shown by ImageJ is EXACTLY what is
needed in general (for perfectly square pixels): any other representation
demands that one know what the PSF looks like in an amount of detail not
present in the image itself.

Rick
Hi,

On Thu, 15 Apr 2010, Frederic V. Hessman wrote:

> Thus, the pixelized zooming in and out shown by ImageJ is EXACTLY what
> is needed in general (for perfectly square pixels): any other
> representation demands that one know what the PSF looks like in an
> amount of detail not present in the image itself.

Sorry, that observation does not match my experience with both microscopy
and photography. (Yes, we use CCDs even with microscopes.)

I guess that stars are somehow different, and that they really have
ragged outlines, as seen when zooming in, say, into

  http://veimages.gsfc.nasa.gov/1583/sts097-354-36aurora.jpg

Thanks for the clarification!
Johannes

P.S.: Thanks to your explanation, I wonder if the default PSF should be
changed from Gaussian bells to rectangular kernels. I may not have paid
close attention in classes, but I seemed to remember that somebody
suggested Gaussian kernels to be the best bet absent further information.
I was probably wrong.