Posted by Harry Parker on Dec 12, 2007; 1:41pm
URL: http://imagej.273.s1.nabble.com/k-means-Clustering-tp3697801p3697803.html
Hi Tom,
I've been doing some work on algorithms for anti-vignetting, a.k.a. Lens Shading Correction (LSC).
I'm not sure what you are looking for. To me, the obvious answer to your question is to make the origin of your LSC factor a variable.
CMOS sensors (what I'm working with) have pixels that are like little wells. (I believe CCD sensors are the same way, but the pixel wells are not as deep.) When light comes in at an angle through the color filters at the top of each pixel well, as it does anywhere off center in the sensor, it strikes the photodiode's active area off center and may not fully illuminate the photodiode. This is an additional contributing factor to image shading, and it is worse for wide-angle lenses, which have light hitting the sensor at steeper angles toward the edges.
Because the photodiodes may be asymmetrically shaped and asymmetrically positioned at the bottom of the pixel well, the intensity falloff can also be asymmetrical.
So the total falloff may not be what is predicted from analyzing the lens alone, and even with the sensor centered on the optical axis of the lens, you can still get asymmetrical falloff.
For my purposes, I've modeled lens falloff well enough (in some cases) as the simple product of parabolic functions in x and y with adjustable centers, x0 and y0. Your needs are obviously different, so you will need to experiment with fitting different curves.
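To make that concrete, here is a rough sketch in Python/NumPy of that kind of gain map (the centers and coefficients below are made-up numbers for illustration, not values from my system):

import numpy as np

def shading_gain(shape, x0, y0, a, b):
    # Per-pixel correction gain from a product of parabolas in x and y,
    # each centered on an adjustable point (x0, y0).
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    falloff = (1.0 - a * (x - x0) ** 2) * (1.0 - b * (y - y0) ** 2)
    return 1.0 / np.clip(falloff, 1e-6, None)   # invert the modeled falloff

# Example (hypothetical values): correct an image whose falloff center
# is shifted away from the frame center.
# corrected = image * shading_gain(image.shape, x0=350.0, y0=260.0, a=1e-6, b=1.5e-6)

Letting x0 and y0 move away from the frame center is what handles an off-center vignette.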
I've found that the ImageJ plugin ExpressionNT (http://www.ulfdittmer.com/imagej/expression.html) is very helpful for exploring the impact of different variable gain functions applied to an image. For scientific least-squares fit optimization, we have in the past used Excel and MATLAB. Currently, we are using the optimization routines built into SciPy (http://www.scipy.org/SciPy) to calibrate our LSC routines.
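For example, a bare-bones SciPy fit of that parabolic-product model to a flat-field frame might look something like this (just a sketch: a synthetic frame stands in for a real flat-field image, and our actual calibration code is more involved):

import numpy as np
from scipy.optimize import leastsq

def model(params, shape):
    # Parabolic-product shading model with an adjustable center (x0, y0).
    x0, y0, a, b, scale = params
    h, w = shape
    y, x = np.mgrid[0:h, 0:w]
    return scale * (1.0 - a * (x - x0) ** 2) * (1.0 - b * (y - y0) ** 2)

def residuals(params, flat):
    # Pixel-wise difference between the measured flat field and the model.
    return (flat - model(params, flat.shape)).ravel()

# 'flat' should be a float image of a uniformly lit target; a synthetic one
# (with made-up parameters) stands in here so the sketch runs on its own.
flat = model([350.0, 260.0, 1e-6, 1.5e-6, 200.0], (480, 640))

guess = [320.0, 240.0, 1e-7, 1e-7, flat.mean()]   # start near the frame center
fitted, ier = leastsq(residuals, guess, args=(flat,))
fitted_model = model(fitted, flat.shape)
gain = fitted_model.max() / fitted_model          # per-pixel correction gain
# corrected = raw_image * gain

Fitting x0 and y0 along with the curvature terms is the "variable origin" idea I mentioned above.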
Hope this helps.
--
Harry Parker
Senior Systems Engineer
Digital Imaging Systems, Inc.
----- Original Message ----
From: Thomas Elliot <[hidden email]>
To: [hidden email]
Sent: Friday, December 7, 2007 11:18:58 AM
Subject: off-centre Vignette correction
Greetings List, here is an issue that has so far resisted our
attempts...
Executive Summary:
Can anyone suggest an approach to deal with a vignetting effect that
does not begin from the optical center of the image?
Details of the experiment
* All figures referred to here can be seen at http://www.uoguelph.ca/~gyoung02
Background
Aerial imagery from a tethered helium blimp is used to assess
nitrogen status of specific horticultural crops. Images are captured
using Kodak DCS-460 and -460c digital cameras from a height of
approximately 350 feet (107 m). The images are stored in Kodak's
proprietary format and imported using proprietary software. While the
system has been used in a number of scientific and industrial
applications, it was obviously intended for photojournalism or
portrait photography purposes. This can cause problems when
quantitative measures of radiance are required. Aside from the issue
of non-direct measures of red, green, and blue wavelengths caused by
the use of a single CCD array (with RGB filters applied in the Bayer
pattern), there is the more important (I think) issue of vignetting
effects caused by the geometry of the camera system. A vignette is where
the intensity (brightness) falls off from the image center, causing
the edges (and corners especially) to appear darker. This effect
becomes most apparent when applying a linear contrast stretch to the
imagery (Figure 1 at the webpage above).
Research Question
The differences in radiance between regions of interest within the
imagery are used to determine if a significant change exists.
Problem
The presence of vignetting effects may introduce trends that are
completely independent of the processes under study. It is therefore
important that these effects be quantified and removed if possible.
This can be achieved through a knowledge of the camera response
model and the radial distortion parameters of the lens. If this
information is unavailable (as in my case), various algorithms have
been developed which can identify the appropriate functions and use
them to remove vignetting.
One such algorithm was applied to Figure 1, the result of which can
be seen in Figure 2. As you can see, the algorithm has done a fairly
good job of removing vignetting, but if you look closely you will
notice a trend, from left to right across the image, where systematic
brightening seems to be occurring. This effect is the result of the
model's assumption that vignetting occurs radially from the optical
center of the image. However, by examining an image of a barium
sulphate test panel taken with the near infrared camera (Figure 3) it
can be seen that the vignette effect is not centered. It still appears
to be radial, but with the optical center being shifted from where it
should be. I hypothesize that this effect is due to a misalignment
between the lens apparatus and the CCD chip in the camera body.
The REAL Question
So my question is, can anyone suggest an approach to deal with a
vignetting effect that does not begin from the optical center of the
image?
Regards,
Tom Elliot