width at a given direction


width at a given direction

Slobodan Dražić
Hi everybody,

I am working on a scientific paper that deals with the estimation of width at a given direction. I refer to ImageJ, but I am interested in how the width of a shape in a given direction is estimated. Is it just the distance between the two farthest pixels in a given direction in a binary image, or is some other algorithm used?

Thanks in advance,
Slobodan

--
ImageJ mailing list: http://imagej.nih.gov/ij/list.html

Re: width at a given direction

Kenneth R Sloan
Do you want the width in one particular direction, or a lot of directions?

Back when dinosaurs roamed the Earth (ca. 1978), I  worked on these types of problems.  Often, the question was “in what directions are the min/max widths”.  Later, there were a series of papers by others (look for “digital calipers”).

What I did was to estimate the function w(dir) - width as a function of direction.  I simply projected all of the pixels identified as part of the object onto lines at a number of directions (depending on the precision required) and connected the dots.  It turned out that w(dir) was a useful concept - a number of interesting shape parameters fall out naturally once you have it.

To directly answer your question - yes - I would use a binary image and compute the distance between the two extremes in the direction(s) you care about.  But, there’s no need to identify the particular pixels involved!  The simplest method is to project ALL of the pixels onto a line in the selected direction and keep track of the min and max projected values.

Represent the direction as a unit vector and treat each pixel’s coordinates as a vector.  Take the dot-product, and you have the projection of that pixel onto a line in that direction.  (Hence, the title of my little contribution talked about “dot-product space”.)  I don’t think it matters - but it might help keep values in a more reasonable range if you use the center of gravity of the object as the origin of your co-ordinate system.  That way, you end up with a polygonal representation of the shape, with the origin at the center of gravity and two vertices for every selected direction.

Again - the “digital calipers” approach should be considered for extreme precision - but given that you are starting with a raster image, and given the opportunity for “embarrassing parallelism”, projecting onto a small set of directions would be my first choice.  Depending on the details, I would think that something on the order of 32 discrete, fixed directions would be more than sufficient.  Once you have a 32-point sampling of w(d), many things are easy to estimate - again, this is a simplified version of the imaged object, and is probably adequate to answer any “shape” question you have about the original object.  Perhaps some care should be taken to identify and deal with any outlier pixels due to noise.  The simplified model is museum viewable (almost as good as convex…)

Summary:

for each direction d        (&lt;dx,dy&gt;, a unit-length vector)
 {
  for every pixel p         (&lt;x,y&gt;, treated as a vector; optionally translate so &lt;0,0&gt; is the c.o.g.)
   {
    projection = p . d
    update min(d), max(d)   (running min/max for this direction)
   }
  w(d) = max(d) - min(d)
 }
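For readers who prefer running code, here is a minimal NumPy sketch of the loops above (the function name, the `mask` argument, and the default of 32 directions are illustrative choices for this post, not ImageJ's implementation):

```python
import numpy as np

def width_function(mask, n_dirs=32):
    """Estimate w(d) for a binary mask: project every object pixel
    onto unit vectors at n_dirs directions and take max - min."""
    ys, xs = np.nonzero(mask)                  # coordinates of object pixels
    pts = np.column_stack([xs, ys]).astype(float)
    pts -= pts.mean(axis=0)                    # optional: origin at the c.o.g.
    widths = np.empty(n_dirs)
    for i, theta in enumerate(np.linspace(0, np.pi, n_dirs, endpoint=False)):
        d = np.array([np.cos(theta), np.sin(theta)])  # unit direction vector
        proj = pts @ d                         # dot product = projection
        widths[i] = proj.max() - proj.min()
    return widths
```

Note that this projects pixel *centers*, so a bar that is N pixels long measures N-1 along its axis; some people add one pixel of extent back, depending on what "width" should mean for their data.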

Personally, I might interchange the order of the iterations, keep min/max as arrays indexed by direction, and add a second iteration over direction to compute widths after all min/max values were determined.  It depends on the details of your specific problem - how many pixels, how many directions, how much parallelism do you have available, etc.
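That interchanged organization maps naturally onto array code: a single matrix product produces all projections at once, and the per-direction min/max arrays fall out of one reduction each. A sketch (names again illustrative):

```python
import numpy as np

def width_function_vec(mask, n_dirs=32):
    """Same estimate as the explicit loops, with the iteration order
    flipped into one matrix product: rows = pixels, columns = directions."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    thetas = np.linspace(0, np.pi, n_dirs, endpoint=False)
    dirs = np.column_stack([np.cos(thetas), np.sin(thetas)])  # (n_dirs, 2)
    proj = pts @ dirs.T                    # (n_pixels, n_dirs) projections
    return proj.max(axis=0) - proj.min(axis=0)   # per-direction widths
```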

And finally, depending on your data and your problem, I would consider using moments to fit an ellipsoid to the object and estimate w(d).  This is attractive (when it applies!) because computation of the moments is well understood and the properties of the resulting ellipse are also well understood.  In the cases where the question is “in what direction do we observe the min/max width”, it’s particularly attractive.
If you explore this option, be aware of the fine points - in particular the difference between using ALL pixels vs using only BOUNDARY pixels.
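As a sketch of the moment-based route (under the stated assumption that the object is roughly a uniformly filled ellipse - an assumption, not something the thread verifies): for a uniform ellipse with semi-axes a and b, the covariance matrix C of the pixel coordinates has eigenvalues a²/4 and b²/4, so the full extent along a unit vector u is 4·sqrt(uᵀCu).

```python
import numpy as np

def width_from_moments(mask, theta):
    """Ellipse-fit width estimate from second central moments.
    Assumes a roughly uniform elliptical object, for which
    Cov = diag(a^2/4, b^2/4) in the principal frame, so the
    extent along unit vector u is 4 * sqrt(u^T C u)."""
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(float)
    C = np.cov(pts, rowvar=False)          # 2x2 second central moments
    u = np.array([np.cos(theta), np.sin(theta)])
    return 4.0 * np.sqrt(u @ C @ u)
```

This uses ALL pixels, per the caveat above; fitting from BOUNDARY pixels alone gives different moments and a different ellipse.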


--
Kenneth Sloan
[hidden email]<mailto:[hidden email]>
"La lutte elle-même vers les sommets suffit à remplir un coeur d'homme; il faut imaginer Sisyphe heureux."



Re: width at a given direction

Cammer, Michael
In reply to this post by Slobodan Dražić
Depending on how many images you need to measure, how about using a ruler and measuring by hand?
Some slide rules have linear inches on an outside edge.
_________________________________________
Michael Cammer, Optical Microscopy Specialist
http://ocs.med.nyu.edu/microscopy
Lab: (212) 263-3208  Cell: (914) 309-3270


Re: width at a given direction

Slobodan Dražić
In reply to this post by Kenneth R Sloan
First of all, thank you for the explanation. I am interested in the width in many directions, i.e. I am interested in estimating the width function you described. I have checked "rotating calipers", but as far as I understand there is not much difference in precision between applying them and the projections you described. The only difference is that rotating calipers are more closely tied to polygons, and the procedure is faster when only the min or max width is needed.
In all, I am interested in how the width function (or Feret's diameter) is estimated in ImageJ, since the precision of the method is what matters to me. It is not very important that I describe the shape as well as possible. I assume you suggested using an ellipsoid to describe a given shape as well as possible, but regardless of the shape, what is the maximal error of the estimation? For example, if the projections of pixels you described are used, the maximal error of estimation is \sqrt{2} pixels when the direction is 45 degrees. Is that the maximal error when the ImageJ toolbox is used to calculate the width function of an arbitrary shape?
Thanks in advance,
Slobodan
