k-means Clustering


k-means Clustering

Richard Han-2
Dear All,

I am using k-means clustering to measure the blue and red in my
images. I use the threshold to move from one cluster to another. In
some images the k-means clustering produced very good matches
(I'd say spot on), but in others the results were less desirable. My
settings are:

Number of clusters: 4
Cluster center tolerance: 0.0001
Enable randomization seed: ticked
Randomization seed: 48
The other options were not ticked

Could I do anything more to improve the analysis? And is it
possible to pre-define the red and blue clusters (i.e., assign cluster
1 to be blue)?

Any help will be appreciated. Thank you.

Richard Han
University of Edinburgh

Re: k-means Clustering

Toby Cornish
It might be helpful to know what sort of images these are.

I haven't worked much with k-means for color discrimination, but AFAIK this is just k-means clustering based on pixel RGB value distances, so it doesn't really know "blue." You could follow up the k-means with a small algorithm that looks at the pixels in each cluster and decides which cluster is the most red and which is the most blue.
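
A post-hoc ranking of the clusters can be sketched in a few lines of Python. This is illustrative only and assumes the k-means step exposes its cluster centres as RGB triples; the function name and scoring rule are my own:

```python
import numpy as np

def most_red_and_blue(centers):
    """Rank k-means cluster centres by colour.

    centers: (k, 3) array of mean RGB values, one row per cluster.
    Returns the indices of the most-red and most-blue clusters,
    scoring "redness" as R minus the mean of G and B (and
    "blueness" symmetrically).
    """
    c = np.asarray(centers, dtype=float)
    redness = c[:, 0] - c[:, 1:3].mean(axis=1)
    blueness = c[:, 2] - c[:, 0:2].mean(axis=1)
    return int(np.argmax(redness)), int(np.argmax(blueness))
```

For example, given centres (200, 40, 40), (40, 40, 200), (120, 120, 120) and (245, 245, 245), this picks cluster 0 as red and cluster 1 as blue, regardless of which labels k-means happened to assign on a given run.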

Alternatively, you could use other means of segmenting an image into colors, such as HSV colorspace segmentation.
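
For comparison, a hue-based segmentation in the HSV direction can be sketched without any plugin. This is a minimal NumPy hue computation; the blue band thresholds and the crude saturation guard are illustrative assumptions, not calibrated values:

```python
import numpy as np

def rgb_to_hue(rgb):
    """Hue of an (..., 3) RGB array scaled to [0, 1].
    Returns hue in [0, 1): red near 0, green near 1/3, blue near 2/3.
    Greys (max == min) are left at hue 0."""
    rgb = np.asarray(rgb, dtype=float)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mx, mn = rgb.max(axis=-1), rgb.min(axis=-1)
    d = mx - mn
    hue = np.zeros(mx.shape)
    rm = (d > 0) & (mx == r)
    gm = (d > 0) & (mx == g) & ~rm
    bm = (d > 0) & (mx == b) & ~rm & ~gm
    hue[rm] = ((g[rm] - b[rm]) / d[rm]) % 6.0
    hue[gm] = (b[gm] - r[gm]) / d[gm] + 2.0
    hue[bm] = (r[bm] - g[bm]) / d[bm] + 4.0
    return hue / 6.0

def blue_area_fraction(rgb, lo=0.55, hi=0.75):
    """Fraction of pixels whose hue falls in an (illustrative) blue band,
    excluding unsaturated greys."""
    rgb = np.asarray(rgb, dtype=float)
    h = rgb_to_hue(rgb)
    sat = rgb.max(axis=-1) - rgb.min(axis=-1)  # crude saturation guard
    mask = (h >= lo) & (h <= hi) & (sat > 0)
    return mask.mean()
```

The same mask-and-count approach works for a red band (hue near 0), which is effectively what the Threshold Colour approach mentioned later in the thread does interactively.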

toby



Toby C. Cornish, M.D., Ph.D.
Pathology Resident
Johns Hopkins Medical Institutions
[hidden email]

Re: k-means Clustering

Toby Cornish
In reply to this post by Richard Han-2
Something else I forgot: you could try k-means after converting from RGB to Lab or another colorspace (convert to a 3-slice stack in the new colorspace and run k-means on the stack). Plugins are readily available for that conversion.
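
The idea is that any 3-slice stack, whether RGB or Lab, is just an (N, 3) feature matrix as far as k-means is concerned. A minimal NumPy sketch of that step (the Lab conversion itself would come from an ImageJ plugin or a library and is not shown; the function name and defaults are my own, with the defaults echoing the settings from the original post):

```python
import numpy as np

def kmeans(features, k=4, seed=48, iters=100, tol=1e-4):
    """Minimal k-means on an (N, 3) feature matrix (one row per pixel,
    one column per slice of the stack). Returns (centers, labels)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(features, dtype=float)
    # Initialise centres at k distinct pixels chosen by the seed.
    centers = x[rng.choice(len(x), size=k, replace=False)]
    labels = np.zeros(len(x), dtype=int)
    for _ in range(iters):
        # Assign each pixel to its nearest centre.
        dist = np.linalg.norm(x[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        # Recompute centres; keep an empty cluster's old centre.
        new = np.array([x[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.abs(new - centers).max() < tol:  # "cluster center tolerance"
            centers = new
            break
        centers = new
    return centers, labels
```

Running this on a Lab stack instead of raw RGB often separates colours more cleanly because Euclidean distance in Lab approximates perceptual colour difference, whereas RGB distance does not.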

toby



Toby C. Cornish, M.D., Ph.D.
Pathology Resident
Johns Hopkins Medical Institutions
[hidden email]

off-centre Vignette correction

Thomas Elliot
In reply to this post by Richard Han-2
Greetings List, here is an issue that has so far resisted our attempts...

Executive Summary:
Can anyone suggest an approach to deal with a vignetting effect that  
does not begin from the optical center of the image?


Details of the experiment

* All figures referred to here can be seen at http://www.uoguelph.ca/~gyoung02

Background
  Aerial imagery from a tethered helium blimp is used to assess  
nitrogen status of specific horticultural crops.  Images are captured  
using  Kodak DCS-460 and -460c digital cameras from a height of  
approximately  350 feet (107 m).  The images are stored in Kodak's  
proprietary format and imported using proprietary software.  While the  
system has been used in a number of scientific and industrial  
applications, it was obviously intended for photojournalism or  
portrait photography purposes.  This can cause problems when  
quantitative measures of radiance are required.  Aside from the issue  
of non-direct measures of red, green, and blue wavelengths caused by  
the use of a single CCD array (with RGB filters applied in the Bayer  
pattern), there is the more important (I think) issue of vignetting  
effects caused by the geometry of the camera system.  A vignette is where  
the intensity (brightness) falls off from the image center, causing  
the edges (and corners especially) to appear darker.  This effect  
becomes most apparent when applying a linear contrast stretch to the  
imagery (Figure 1 at the webpage above).

Research Question
   The differences in radiance between regions of interest within the  
imagery are used to determine if a significant change exists.

Problem
   The presence of vignetting effects may introduce trends that are  
completely independent of the processes under study.  It is therefore  
important that these effects be quantified and removed if possible.
   This can be achieved through a knowledge of the camera response  
model and the radial distortion parameters of the lens.  If this  
information is unavailable (as in my case), various algorithms have  
been developed which can identify the appropriate functions and use  
them to remove vignetting.
   One such algorithm was applied to Figure 1, the result of which can  
be seen in Figure 2.  As you can see, the algorithm has done a fairly  
good job of removing vignetting, but if you look closely you will  
notice a trend, from left to right across the image, where systematic  
brightening seems to be occurring.  This effect is the result of the  
model's assumption that vignetting occurs radially from the optical  
center of the image.  However, by examining an image of a barium  
sulphate test panel taken with the near infrared camera (Figure 3) it  
can be seen that the vignette effect is not centered.  It still appears  
to be radial, but with the optical center being shifted from where it  
should be.  I hypothesize that this effect is due to a misalignment  
between the lens apparatus and CCD chip in the camera body.

The REAL Question
   So my question is, can anyone suggest an approach to deal with a  
vignetting effect that does not begin from the optical center of the  
image?


Regards,

Tom Elliot

Re: k-means Clustering

Dimiter Prodanov-2
In reply to this post by Richard Han-2
Hi,

I did something similar in the past. The solution
was to use the Color Threshold plugin of Gabriel Landini
and then to apply k-means clustering.
Another option is to try the Color Deconvolution
(again Gabriel's) and then apply the k-means.

A third option is to implement a Gaussian mixture model
classifier.
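
A Gaussian mixture classifier can be sketched with a small EM loop. This 1-D version (run on a single channel, or on one deconvolved stain image) is illustrative only; the function name and the percentile-based initialisation are my own choices, not part of any plugin:

```python
import numpy as np

def gmm_fit_1d(x, k=2, iters=200):
    """Fit a k-component 1-D Gaussian mixture by EM.
    Returns (weights, means, stds). Means start at spread-out
    percentiles of the data to avoid degenerate initialisation."""
    x = np.asarray(x, dtype=float)
    means = np.percentile(x, np.linspace(10, 90, k))
    stds = np.full(k, x.std() / k + 1e-6)
    weights = np.full(k, 1.0 / k)
    for _ in range(iters):
        # E-step: responsibility of each component for each point.
        pdf = weights * np.exp(-0.5 * ((x[:, None] - means) / stds) ** 2) \
              / (stds * np.sqrt(2.0 * np.pi))
        resp = pdf / pdf.sum(axis=1, keepdims=True)
        # M-step: re-estimate parameters from the responsibilities.
        nk = resp.sum(axis=0)
        weights = nk / len(x)
        means = (resp * x[:, None]).sum(axis=0) / nk
        stds = np.sqrt((resp * (x[:, None] - means) ** 2).sum(axis=0) / nk) + 1e-9
    return weights, means, stds
```

Unlike plain thresholding, the fitted model gives each pixel a soft membership (the responsibilities), so a pixel is classified by whichever component claims the largest responsibility for it.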

Cheers

Dimiter Prodanov

PS: You can post a link to the images for us to see.

Re: k-means Clustering

Richard Han-2
In reply to this post by Richard Han-2
Hi Toby,

Thank you so much for your reply.
I would like to measure the area of red/blue from RGB images of
histological slides. I have tried the Threshold color (HSV) as
suggested to me by Jacqui Ross. It seems the Threshold color is very
accurate (better than the k-means). Oddly, if I use RGB split, the red
channel actually resembles the blue in the original image.
I guess I will stick with the Threshold color (HSV) for measurement.

Regards

Richard


Re: k-means Clustering

Gabriel Landini
On Monday 10 December 2007 10:02:41 Richard Han wrote:
> I would like to measure the area of red/blue from RGB images of
> histological slides.

Histological stains behave as "subtractive" colours and it is very likely that
they will colocalise in some places.
If that is the case, colour thresholding does not work: the hue of the
co-localised stains is neither red nor blue (it has contributions from
both), while the thresholding must assign each pixel to one of the two
classes.

Have a look at Ruifrok's paper on colour deconvolution (I wrote a plugin that
does that):

Ruifrok AC, Johnston DA. Quantification of histochemical staining by color
deconvolution. Anal Quant Cytol Histol. 2001 Aug;23(4):291-9.
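
The core of Ruifrok & Johnston's method is linear unmixing in optical-density space: OD = -log10(I/I0) is linear in stain concentration, so each pixel's OD vector can be multiplied by the inverse of the stain matrix. A minimal sketch (the stain vectors used in the example below are rough, commonly quoted H&E/DAB values, not the plugin's calibrated ones):

```python
import numpy as np

def colour_deconvolve(rgb, stains):
    """Unmix stains in the manner of Ruifrok & Johnston (2001).
    rgb: (..., 3) image with intensities in 0..255;
    stains: (3, 3) matrix whose rows are the OD vectors of the stains.
    Returns per-stain concentration maps with the same shape."""
    rgb = np.asarray(rgb, dtype=float)
    od = -np.log10((rgb + 1.0) / 256.0)        # +1 avoids log(0)
    conc = od.reshape(-1, 3) @ np.linalg.inv(stains)
    return conc.reshape(rgb.shape)
```

Each output slice can then be thresholded or measured independently, so a co-localised pixel contributes to both stain images instead of being forced into a single class.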

Regards,

G.

Re: off-centre Vignette correction

Harry Parker
In reply to this post by Thomas Elliot
Hi Tom,

I've been working on algorithms for anti-vignetting, a.k.a. lens shading correction (LSC).

I'm not sure what you are looking for. To me, the obvious answer to your question is to make the origin of your LSC factor a variable.

CMOS sensors (what I'm working with) have pixels that are like little wells.  (I believe CCD sensors are the same way, but the pixel wells are not so deep.) When light comes in from off center through the color filters at the top of each pixel well, as is the case anywhere off center in the sensor, the light hits the photodiode active area off center, and may not fully illuminate the photodiode.  This is an additional contributing factor to image shading.  This is worse for wide angle lenses which have the light hitting the sensor at wider angles on the sides.

Because the photodiodes may be asymmetrically shaped and asymmetrically positioned at the bottom of the pixel well, the intensity falloff can also be asymmetrical.

So, the total falloff may not be what is predicted from analyzing the lens alone. And the sensor may be centered on the optical axis of the lens, and yet you will still get asymmetrical falloff.

For my purposes, I've modeled lens falloff well enough (in some cases) as the simple product of parabolic functions in x and y with adjustable centers, x0 and y0.  Your needs are obviously different, so you will need to experiment with fitting different curves.
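
The separable-parabola model with an adjustable centre can be sketched directly. The function names and the clipping floor are my own; in practice (x0, y0, ax, ay) would come from a least-squares fit to a flat-field image such as the barium sulphate panel:

```python
import numpy as np

def falloff_model(shape, x0, y0, ax, ay):
    """Separable parabolic shading model with an off-centre origin:
    f(x, y) = (1 - ax*(x - x0)^2) * (1 - ay*(y - y0)^2)."""
    h, w = shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    return (1.0 - ax * (x - x0) ** 2) * (1.0 - ay * (y - y0) ** 2)

def correct_shading(img, x0, y0, ax, ay):
    """Divide out the modelled falloff (clipped to stay positive)."""
    f = falloff_model(img.shape, x0, y0, ax, ay)
    return img / np.clip(f, 1e-6, None)
```

Fitting is then a four-parameter optimisation (e.g. scipy.optimize.least_squares) minimising the residual between the corrected flat-field image and a constant; because x0 and y0 are free, the model handles a vignette whose centre is displaced from the image centre.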

I've found that the ImageJ plugin ExpressionNT (http://www.ulfdittmer.com/imagej/expression.html) is very helpful for exploring the impact of different variable gain functions applied to an image. For least-squares fit optimization, we have in the past used Excel and MATLAB. Currently, we are using the optimization routines built into SciPy (http://www.scipy.org/SciPy) to calibrate our LSC routines.
 
Hope this helps.


--  
Harry Parker  
Senior Systems Engineer  
Digital Imaging Systems, Inc.
