help on normalizing background across pictures

ashleydai
I am trying to use ImageJ to quantify flower color (mainly purple) of different genotypes. In the field, I put different flowers on the same background plastic board (dark purple) one at a time and took a picture of each. However, since the pictures were taken with a pocket digital camera on automatic settings, at different times of day and under different weather conditions, I found that the background has a wide range of luminosity readings across the pictures. I am concerned that the differences in sunlight and in the camera's response are adding variance to the background as well as to the flower, which needs to be controlled.

So the idea is to normalize the background of the different pictures to a uniform background, apply the same per-image correction to the flower in each image, and then quantify the flower color. I searched around but couldn't find any method for accomplishing this. I would appreciate any ideas and suggestions! Thanks in advance!

Ashley

Re: help on normalizing background across pictures

Bob Loushin
Although it seems a bit off topic for this list, I have seen other people
asking similar questions on the list as well, so I am responding to you here
rather than off-line.  You really have several issues to be concerned about
here.  For those who are interested in the main question (How do I do color
and exposure corrections?), that is dealt with at the end of my note.

Issue:  Choice of reference color.  The best way to do color correction for
a digital imaging system is to use a reference in the frame, if possible.
Your idea with the plastic board in the background fits this admirably.
Unfortunately, the color is not a good choice.  Ideally, the reference color
should be at least approximately equal strength in all channels (in this
case, RGB), relatively strong in all channels (strong signal in the
reference means higher signal to noise ratio giving lower error in the
correction), and in the linear part of the response curve for the detector
(most cameras saturate at the high end, meaning that increasing the amount
of light on the detector does not increase the signal.  You need to be below
the level of light that causes this).  Thus, your reference color should be
a light gray, but not pure white.  Gray reflects all colors equally, so it
accurately reflects the illuminant to the detector.  By choosing a light
gray, most of the light falling on the subject is reflected back to the
detector, keeping your signal high.  And choosing light gray instead of
white will keep you out of the saturation region of the detector.  In the
old days of color negative film, there was a standard for this, the Kodak
18% gray card.  This is not a good choice for digital cameras, since they
measure positively, not negatively, so the 18% card is too dark.
Unfortunately, no comparable standard has emerged for digital cameras.  Try
going to your local art store and see if you can find the paper they use for
making mattes.  The less gloss the better; high gloss reflects specularly,
causing hot spots in the reference.

Issue:  If the background color occupies most of the field of view of the
image, lens flare will cause it to contaminate the rest of the image as
well, distorting your measurements.  Typical values for this are a few
percent, but it varies considerably depending on how well the optics are
designed and how large the subject is compared to the background.  The flare
tends to "bleed" across edges, causing the effect to be larger near the edge
of objects than in the center.  If the object is small, it has relatively
more edge pixels and fewer interior pixels, so a larger fraction of its
measured area is contaminated by flare.

Issue:  Many cameras will attempt to do some sort of auto-gray balance.  If
yours is one of them, it will automatically detect that there is a dominant
color in the field of view (dark purple), assume that color should be near a
gray, and automatically apply a correction in that direction BEFORE SAVING
YOUR IMAGE.  Having an unknown, but different change applied to every frame
of your data without your knowledge is not good.  So if you haven't already
done so, turn off any settings that say auto gray balance, auto white
balance, auto color correction, etc.  That's exactly the job you are trying
to perform manually.  Auto exposure can probably be left on.  It won't mess
with color, just brightness.

Issue:  Purple flowers.  Purple makes me nervous.  The reason goes back to
how the human visual system perceives color.  The cone cells in the eye that
detect red also have something of a response in the blue.  Early films did
not have this--the blue sensitive layer detected only blue, and the red
sensitive layer detected only red.  Consequently, they had a devil of a time
getting purple flowers to look purple--they came out either blue or pink.
This was eventually corrected by using tricks like cross layer suppressors
in the film or altering the sensitivities of the individual layers to more
closely match those of the human eye.  Many digital cameras haven't caught
up to this yet, or have only done so partially.  If you happen to be a fan
of a sports team whose team color is purple (skol, Vikings!), you will have
seen this first hand--the color of the uniforms when you see them live in
the stadium is very different than what you see on TV.  On the field, they
are purple.  On TV, they are blue, or purplish blue, or, sometimes
(depending on the camera), purple.  Bottom line:  You really need to think
carefully about how your camera is responding to purple if you want to use
it for quantitative comparisons.  One possibility would be to set up a side
experiment--use your eyes to visually assess how similar or different a set
of flowers are (preferably spanning the whole range from 100% one genotype
to 100% the other), and take pictures of them and measure them that way.
Comparing the two measurements should allow you to calibrate the scale.

OK, so assuming I've dealt with all of the above issues, how do I do gray
balancing (or, as you said, normalize the background color on different
pictures to a uniform background and apply the change to the subject) and
exposure correction (adjusting all pictures to the same light level)?

1)  In each image, pick an area on the reference card.  Average its red
values to get Rave, its green values to get Gave, and its blue values to get
Bave.
2)  Pick one of the images as a reference.  Ideally, this would be done by
carefully setting one up with controlled lighting which was both correctly
color balanced and exposed.  However, for the purposes of your experiment, it
looks to me as if you could just pick one that "looks good"--meaning that
both the reference and the subject look as close as possible to the way
they looked visually.  Pick an area on the reference card in this image.
Average its red values to get Rref.  Average its green values to get Gref.
Average its blue values to get Bref.
3)  Create correction factors Rcorr = Rref / Rave, Gcorr = Gref / Gave, and
Bcorr = Bref / Bave for each image.  The correction factors for different
images will be different.
4)  Apply the correction to every pixel in the image:  Rcalibrated = R *
Rcorr, Gcalibrated = G * Gcorr,  and Bcalibrated = B * Bcorr.
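
To make the arithmetic concrete (these numbers are invented purely for
illustration):  Suppose the card in your chosen reference image reads
(Rref, Gref, Bref) = (200, 205, 198), and in some other image it reads
(Rave, Gave, Bave) = (160, 205, 230) because that shot was taken in bluish
shade.  Then Rcorr = 200/160 = 1.25, Gcorr = 205/205 = 1.00, and Bcorr =
198/230 = 0.86, so a flower pixel of (R, G, B) = (100, 80, 120) in that
image becomes (125, 80, 103) after calibration.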

I don't know enough about ImageJ to know if a specific plugin exists to do
it, but I doubt it.  It is relatively simple to write a plugin to do it,
though.  When you code this up, you'll need to be careful about overflow at
the high end--if Rcorr, Gcorr, or Bcorr happens to be >1.0, then pixels near
saturation will be pushed into overflow.  You'll need to decide what to do
about this, but for your application, it should be relatively rare, and may
not happen in your subjects at all.  If it does, you may end up having to
throw out that data point (pixel), or rescale your data a bit to prevent
overflow from happening.
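
For what it's worth, here is a minimal sketch of what such a plugin might
look like, written against ImageJ's PlugInFilter interface.  All the naming
is my own, and the R_REF/G_REF/B_REF values are placeholders--you would
substitute the averages measured on your chosen reference image.  It treats
the current rectangular selection as the patch on the reference card and
handles overflow by simply clamping to 255:

import java.awt.Rectangle;
import ij.ImagePlus;
import ij.plugin.filter.PlugInFilter;
import ij.process.ImageProcessor;

public class Gray_Balance implements PlugInFilter {

    // Reference card averages measured in the "looks good" image
    // (placeholder numbers -- substitute your own measurements).
    private static final double R_REF = 200.0;
    private static final double G_REF = 205.0;
    private static final double B_REF = 198.0;

    public int setup(String arg, ImagePlus imp) {
        return DOES_RGB;   // 24-bit RGB images only
    }

    public void run(ImageProcessor ip) {
        // The current rectangular selection marks the reference
        // card; with no selection this is the whole image.
        Rectangle roi = ip.getRoi();
        int[] pixels = (int[]) ip.getPixels();
        int width = ip.getWidth();

        // Step 1: average each channel over the reference patch.
        double rSum = 0, gSum = 0, bSum = 0;
        long n = 0;
        for (int y = roi.y; y < roi.y + roi.height; y++) {
            for (int x = roi.x; x < roi.x + roi.width; x++) {
                int c = pixels[y * width + x];
                rSum += (c >> 16) & 0xff;
                gSum += (c >> 8) & 0xff;
                bSum += c & 0xff;
                n++;
            }
        }

        // Step 3: per-image correction factors.
        double rCorr = R_REF / (rSum / n);
        double gCorr = G_REF / (gSum / n);
        double bCorr = B_REF / (bSum / n);

        // Step 4: apply to every pixel, clamping to avoid overflow.
        for (int i = 0; i < pixels.length; i++) {
            int c = pixels[i];
            int r = clamp(((c >> 16) & 0xff) * rCorr);
            int g = clamp(((c >> 8) & 0xff) * gCorr);
            int b = clamp((c & 0xff) * bCorr);
            pixels[i] = (c & 0xff000000) | (r << 16) | (g << 8) | b;
        }
    }

    private static int clamp(double v) {
        return (int) Math.max(0, Math.min(255, Math.round(v)));
    }
}

To use it you would drop Gray_Balance.java into ImageJ's plugins folder,
compile it (Plugins > Compile and Run), draw a rectangle on the card, and
run it on each image--or wrap it in a macro to batch over all 200-odd
pictures.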

This sort of thing is done in many consumer image processing programs like
Photoshop.  It is usually called "Click balance".  The usual way it works is
that the user clicks on something white (or light gray), and the program
automatically averages in a region around where they pick to get the ave
values. The ref values are generic ones calculated to work well for the
color space used by the program for its color calculations.  Various
combinations of clipping and reshaping the tone curve are used to prevent
the overflow problem.  So an alternative might be for you to bring all your
images into Photoshop, click balance, and then pull them back into ImageJ
for quantitative analysis.

Sorry this got so long, but some questions are easier to ask than to answer
(Define the universe.  Give three examples...).


Re: help on normalizing background across pictures

Bob Loushin
Just to be clear, the math I presented in my previous message does both
exposure correction and color balance at the same time.  If you only want to
do the color balance, instead use the following:

1)  Same as in my previous message.
2)  Same as in my previous message.
3)  Calculate RGref = Gref / Rref and BGref = Gref / Bref
4)  For each of the other images, calculate RGbal = (Gave / Rave) / RGref
and BGbal = (Gave / Bave) / BGref
5)  Apply the correction to each pixel in the image:  Rbalanced = RGbal * R,
Gbalanced = G, and Bbalanced = BGbal * B.  Again, you need to be careful of
overflow.
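
As a sketch of the inner loop in Java (the names are mine, not from any
library), the balance-only version of the per-pixel step might look like
this, again clamping to avoid overflow:

// Balance-only correction for one packed RGB pixel.
// rgBal = (Gave / Rave) / (Gref / Rref) and
// bgBal = (Gave / Bave) / (Gref / Bref) are the per-image
// factors from steps 3 and 4; green is left untouched, so the
// overall exposure is unchanged.
static int balancePixel(int c, double rgBal, double bgBal) {
    int r = clamp(((c >> 16) & 0xff) * rgBal);
    int g = (c >> 8) & 0xff;                    // Gbalanced = G
    int b = clamp((c & 0xff) * bgBal);
    return (c & 0xff000000) | (r << 16) | (g << 8) | b;
}

static int clamp(double v) {
    return (int) Math.max(0, Math.min(255, Math.round(v)));
}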

Finally, if you only want to exposure correct:

1)  Same as in my previous message.
2)  Same as in my previous message.
3)  Calculate Exp = Gref / Gave
4)  Apply the correction to each pixel in the image:  Rcalibrated = Exp * R,
Gcalibrated = Exp * G, and Bcalibrated = Exp * B.
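
And a corresponding sketch for the exposure-only version (same caveats,
same clamp() helper as above):

// Exposure-only correction: scale all three channels by the
// same per-image factor exp = Gref / Gave.
static int exposePixel(int c, double exp) {
    int r = clamp(((c >> 16) & 0xff) * exp);
    int g = clamp(((c >> 8) & 0xff) * exp);
    int b = clamp((c & 0xff) * exp);
    return (c & 0xff000000) | (r << 16) | (g << 8) | b;
}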

There are a few other subtleties I've glossed over here, but these equations
should be good for quantitative comparisons.


Re: help on normalizing background across pictures

ashleydai
Thanks very much! You are awesome!

The problem you mentioned does exist: the purple background turns out blue in all of the pictures. However, the flowers look pretty close to what I see. (Maybe because they have pale petals toward the edges?)
Also, the flowers typically occupy 1/5 to 1/4 of the whole image. I don't know how much distortion from the background there would be.

The calculation you provided is very clear and makes sense. However, I don't know how much time it would take to adjust more than 200 pictures without a plugin. I have also tried the white balance from the "Levels" option in Photoshop, but that is still an adjustment within one image rather than across images based on a reference. I will continue to explore; meanwhile, I would appreciate your further advice.

Thanks again!!