Hi all,
I do not completely agree with these statements, especially the one stating that "the user need to decide how many of the closest neighbors he want to include in the average". Instead, I think "the image is as it is" and "the closest neighbors are as they are" ...

In the framework of Computational Geometry (the framework in which the contributions of Gabriel Landini and Karsten Rodenacker fall), the number of neighbors is perfectly defined (at least in a Euclidean framework): it is the number of Voronoï zones adjacent to the Voronoï zone associated with the current object. As a consequence (one algorithm for computing the Delaunay triangulation starts from the Voronoï partition), the number of neighbors is equal to the number of segments joining a given object to other objects. This number is thus the result of a computation rather than a choice made by the user. As such, it is ONE parameter that can serve to discriminate different distributions of objects (hexagonal close packing is certainly not the distribution we are interested in!). But many other parameters can also be used.

In the discussion generated by the question from France Girault, the mean of the distances from an object to its neighbors was mainly considered. From my experience (and from theoretical considerations), I have to mention that the variance is at least as important as the mean if one wants to discriminate different distributions of objects. In fact, the "best" solution (in this framework!) is to consider the mean and the variance simultaneously. For example, plotting each image (characterized by the mean and standard deviation of the edge lengths of the Delaunay triangulation or of the Euclidean Minimum Spanning Tree (EMST)) in a 2D parameter space (mean, standard deviation) makes it possible to recognize immediately the type of spatial distribution of the objects (periodic, random, clustered, with gradient, etc.).
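[Editor's note: a minimal sketch of the idea above, not code from the original post. It uses SciPy's Delaunay triangulation to count each object's neighbors and to get the (mean, standard deviation) pair of the edge lengths; the random test pattern and function name are my own illustrative choices.]

```python
import numpy as np
from scipy.spatial import Delaunay

def delaunay_edge_stats(points):
    """Neighbor count per point, plus mean/std of Delaunay edge lengths."""
    tri = Delaunay(points)
    # Collect unique undirected edges from the triangle list.
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    lengths = np.array([np.linalg.norm(points[a] - points[b])
                        for a, b in edges])
    counts = np.zeros(len(points), dtype=int)
    for a, b in edges:
        counts[a] += 1
        counts[b] += 1
    return counts, lengths.mean(), lengths.std()

rng = np.random.default_rng(0)
pts = rng.uniform(0, 100, size=(200, 2))
counts, mu, sigma = delaunay_edge_stats(pts)
# For a planar triangulation the average degree approaches 6 as n grows,
# which matches the "computed, not chosen" neighbor count described above.
print(counts.mean(), mu, sigma)
```

The neighbor count is entirely determined by the point configuration, which is exactly the point Noel makes: no user-chosen k is involved.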
In addition, some normalized parameters involving the mean and the standard deviation can be computed and compared. See for instance:

Dussert et al., J. Theor. Biol. (1987) 125, 317.
Marcelpoil & Usson, J. Theor. Biol. (1992) 154, 359.
Nawrocki Raby et al., Int. J. Cancer (2001) 93, 644.

Another point: with these methods of Computational Geometry, it is not necessary to reduce the objects (in binary images) to their center points. Provided the objects are not touching, the Voronoï partition, and hence the Delaunay triangulation and the EMST, can be computed (and more precisely so) while keeping the original shape of the objects instead of reducing them to their center of mass.

Now, I recognize that this methodology is not the only one that can solve this type of pattern analysis problem. Unfortunately, comparative studies involving different methodologies are lacking. See, however: Wallet & Dussert, J. Theor. Biol. (1997) 187, 437.

I hope this can help.

Noel
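[Editor's note: an illustrative sketch, not code from the cited papers. It exploits the fact that the EMST is a subgraph of the Delaunay triangulation, builds the tree with SciPy, and returns the (mean, std) signature discussed above; the grid and random test patterns are my own assumptions.]

```python
import numpy as np
from scipy.spatial import Delaunay
from scipy.sparse import csr_matrix
from scipy.sparse.csgraph import minimum_spanning_tree

def emst_signature(points):
    """(mean, std) of Euclidean Minimum Spanning Tree edge lengths."""
    tri = Delaunay(points)
    # Unique undirected Delaunay edges; the EMST is a subgraph of these.
    edges = set()
    for simplex in tri.simplices:
        for i in range(3):
            a, b = sorted((simplex[i], simplex[(i + 1) % 3]))
            edges.add((a, b))
    rows, cols, w = zip(*[(a, b, np.linalg.norm(points[a] - points[b]))
                          for a, b in edges])
    graph = csr_matrix((w, (rows, cols)), shape=(len(points),) * 2)
    lengths = minimum_spanning_tree(graph).data
    return lengths.mean(), lengths.std()

rng = np.random.default_rng(1)
mu_rand, sd_rand = emst_signature(rng.uniform(0, 100, (300, 2)))
# A near-periodic (grid) pattern: edge lengths almost constant, so the
# std is near zero; a random pattern has a much larger relative spread.
grid = np.array([(x, y) for x in range(17) for y in range(17)], float)
mu_grid, sd_grid = emst_signature(grid + rng.normal(0, 1e-3, grid.shape))
print(sd_grid / mu_grid, sd_rand / mu_rand)
```

Plotting such (mean, std) pairs for several images gives the 2D parameter space Noel describes, in which periodic, random, and clustered patterns separate cleanly.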
On May 29, 2007, at 7:59 AM, Noel BONNET wrote:
> Hi all,
>
> I do not completely agree with these statements, especially the one
> considering that "the user need to decide how many of the closest neighbors
> he want to include in the average".
> Instead, I think "the image is as it is" and "the closest neighbors are as
> they are" ...
> In the framework of Computational Geometry (the framework in which the
> contributions of Gabriel Landini and Karsten Rodenacker are), the number of
> neighbors is perfectly defined (at least in an Euclidean framework): it is
> the number of Voronoï zones that are adjacent to the Voronoï zone associated
> to the current object.

But...but...while this approach is the currently predominant one, not EVERY problem fits the mold. If adjacent particles interact with each other in ways *other* than through common borders, then neighbors which are not nearest neighbors in the Delaunay/Voronoï sense may still be relevant. The current combinatorial-oriented version of "computational geometry" is near and dear to my heart, but I still think it is worthwhile to look at older methods. I agree with a previous poster's recommendation of Ripley's book on spatial statistics.

And...just for the record...the Delaunay/Voronoï definition of nearest neighbor is not the *only* view (although it is certainly the most popular).

--
Kenneth Sloan                          [hidden email]
Computer and Information Sciences      +1-205-934-2213
University of Alabama at Birmingham    FAX +1-205-934-5473
Birmingham, AL 35294-1170              http://www.cis.uab.edu/sloan/
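[Editor's note: a small sketch, my own addition, of the alternative view alluded to here — fixed-k nearest-neighbor distances from a k-d tree, which are independent of Voronoï adjacency. Here k *is* a user choice, unlike in the Delaunay case; the value k=3 is purely illustrative.]

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
pts = rng.uniform(0, 100, (200, 2))
tree = cKDTree(pts)
# Query k+1 points because the nearest "neighbor" of each query point
# is the point itself (distance 0), which we discard.
dists, _ = tree.query(pts, k=4)
knn = dists[:, 1:]  # distances to the 3 nearest neighbors of each point
print(knn.mean(), knn.std())
```

Nothing forces these k neighbors to share a Voronoï border with the query point, which is exactly why the two neighbor definitions can disagree.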
In reply to this post by Noel BONNET
Maybe this is a late input, but I think that the computationally simplest approach is the simultaneous consideration of Ripley's K-function and the inter-point distance distribution (H-function). See my article in J. Neurosci. Methods:

Prodanov, D.; Nagelkerke, N. and Marani, E. "Spatial clustering analysis in neuroanatomy: applications of different approaches to motor nerve fiber distribution", J. Neurosci. Methods, 2007, 160, 93-108.

Cheers,
Dimiter Prodanov
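[Editor's note: a naive sketch, my own and not the code from the cited article. It computes an uncorrected Ripley K estimator on a unit square; under complete spatial randomness K(r) ≈ πr² (edge effects, ignored here, bias the estimate slightly downward).]

```python
import numpy as np
from scipy.spatial.distance import pdist

def ripley_k(points, radii, area):
    """Uncorrected Ripley K: area-scaled count of point pairs within r."""
    n = len(points)
    d = pdist(points)  # all unordered pairwise distances
    # The classical estimator sums over ordered pairs, hence the factor 2.
    return np.array([(area / (n * (n - 1))) * 2 * np.sum(d <= r)
                     for r in radii])

rng = np.random.default_rng(3)
pts = rng.uniform(0, 1, (500, 2))
r = np.array([0.05, 0.1])
k = ripley_k(pts, r, area=1.0)
# For a random pattern, k should track pi * r**2; clustered patterns
# lie above that curve, regular (inhibited) patterns below it.
print(k, np.pi * r**2)
```

In practice one would add an edge correction (e.g. Ripley's isotropic correction) and compare against the CSR envelope, as done in the article above.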