On Thursday 05 Apr 2012 10:32:29 I wrote:
> I do not think that is correct. Don't you have to convolve these 2 kernels
> to produce a 2D one?

Oops... Michael Schmid just pointed out to me (quite correctly) that a 2D convolution with a separable kernel can be split into consecutive 1D convolutions, so that is not a problem. I suspected that the result was not right because I tried the macro on the "bridge" sample image and it seemed to miss directional features, but trying it again on the lena image, yes, this seems to be working fine. Very sorry for the false alarm!

Cheers

Gabriel
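In symbols (with * denoting convolution): the separable 2D kernel is itself the convolution of the column kernel kv with the row kernel kh, so by associativity

    I * (kv * kh) = (I * kv) * kh

and one pass with the 2D kernel gives the same result as two passes with the 1D kernels.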
Dear Gabriel
Thank you for your reply. Because I wasn't sure about the associativity, I first tested whether convolving the image with the successive 1D kernels gives the same result as convolving it with the 2D kernel obtained by convolving the vertical kernel with the horizontal one, and this appeared to be the case. E.g. (the Subtract result should be uniformly zero; see also the outer-product check after this message):

run("Blobs (25K)");
run("32-bit");
run("Duplicate...", "title=1Dversion"); oid = getImageID;
run("Duplicate...", "title=2Dversion"); tid = getImageID;
selectImage(oid);
run("Convolve...", "text1=[1\n4\n6\n4\n1\n]");
run("Convolve...", "text1=[-1 -2 0 2 1\n]");
selectImage(tid);
run("Convolve...", "text1=[-1 -2 0 2 1\n-4 -8 0 8 4\n-6 -12 0 12 6\n-4 -8 0 8 4\n-1 -2 0 2 1\n]");
imageCalculator("Subtract create", oid, tid);

Thank you for the links as well. Unfortunately, the documents always describe only the first two steps, namely the convolution with the kernels and the calculation of local (i.e. pixel-wise) measures. So you still end up with an image (TEM) full of local metrics, be it the mean, absolute mean or variance. I was wondering whether there is a robust metric to express the overall image texture in one figure (like energy or entropy)?

Many thanks in advance.

w

On 05 Apr 2012, at 11:32, Gabriel Landini wrote:

On Thursday 05 Apr 2012 08:11:56 Winnok H. De Vos wrote:
> I am trying to analyze image patterns (blobs vs. stripes vs. networks) using
> various texture metrics.
[...]
> After normalization and combination of the directional derivatives, the
> purpose is to obtain a dataset of 14 metrics describing the texture of the
> image.

Just looking at your implementation, you convolve it first with a vertical kernel and then the result image with a horizontal one?:

selectImage("h"+i+"v"+j);run("Convolve...", "text1="+ver[i]);run("Convolve...", "text1="+hor[j]);

I do not think that is correct. Don't you have to convolve these 2 kernels to produce a 2D one?

See here for some details:
http://www.ccs3.lanl.gov/~kelly/ZTRANSITION/notebook/laws.shtml
and here (page 15 onwards):
http://gradvibot.u-bourgogne.fr/thesis/christian_mata_miquel.pdf

Running your macro on a sample image generates results that are pretty similar to each other; I do not think this is the expected output.

Hope it helps,
Gabriel
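The 5x5 kernel in the 2Dversion branch above is the outer product of the two 1D kernels: each row is the horizontal kernel [-1 -2 0 2 1] scaled by one entry of the vertical kernel [1 4 6 4 1]:

    1 * [-1 -2 0 2 1] = [ -1  -2  0   2  1 ]
    4 * [-1 -2 0 2 1] = [ -4  -8  0   8  4 ]
    6 * [-1 -2 0 2 1] = [ -6 -12  0  12  6 ]
    4 * [-1 -2 0 2 1] = [ -4  -8  0   8  4 ]
    1 * [-1 -2 0 2 1] = [ -1  -2  0   2  1 ]

which matches the text1 argument of the 2D Convolve... call line for line.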
In reply to this post by Gabriel Landini
Hi Winnok,
as Gabriel had already mentioned, I see no problems with successive convolution steps (instead of creating n*n kernels first) - convolution should be commutative and associative, similar to multiplication (it is equivalent to multiplication in the Fourier domain).

Problems that I see:

I think that Laws' 3x3 matrices are fine, at least the first of them (the smoothing kernel and those equivalent to the components of the Sobel filter), but the 5x5 matrices are problematic: I think that they pick out only textures that have a certain orientation, not orientations rotated by an arbitrary angle. Also note that all but the level (average), edge (Sobel) and spot (Laplace) kernels detect only very short wavelengths of waves or ripples (essentially, 2 pixels peak to peak).

Also, I don't know whether the normalization makes any sense - if you have an area with pixel values close to 0, you will see a high amplitude in all of your images, from dividing the noise by a small number.

Concerning your question about the metrics: I think that you could simply start with the square, but ignoring values below some lower threshold (noise) might help. Maybe the histogram of the values of the convolved image (before taking the absolute value and/or squaring) would help; if it shows a nice Gaussian centered at 0 with some outliers, only the outliers would be relevant, not the Gaussian.

Michael
________________________________________________________________
On Apr 5, 2012, at 09:11, Winnok H. De Vos wrote:

> Dear colleagues,
>
> I am trying to analyze image patterns (blobs vs. stripes vs. networks) using various texture metrics.
> To this end, I made an attempt to implement Laws' texture analysis in the macro language, according to the refs [K. Laws. Textured Image Segmentation. Ph.D. Dissertation, University of Southern California, January 1980] and [K. Laws. Rapid Texture Identification. In SPIE Vol. 238, Image Processing for Missile Guidance, pages 376-380, 1980].
> In brief, it is based on the application of 25 combinations of the 5 small convolution kernels (shown below), followed by a windowing operation to obtain Texture Energy Measure (TEM) images.
>
> L5 = [  1  4  6  4  1 ]  --> Level  (0)
> E5 = [ -1 -2  0  2  1 ]  --> Edge   (1)
> S5 = [ -1  0  2  0 -1 ]  --> Spot   (2)
> W5 = [ -1  2  0 -2  1 ]  --> Wave   (3)
> R5 = [  1 -4  6 -4  1 ]  --> Ripple (4)
>
> After normalization and combination of the directional derivatives, the purpose is to obtain a dataset of 14 metrics describing the texture of the image.
> However, I am uncertain as to which measure would be the best to describe the information contained in the TEM images.
> Now the average energy (sum of squared pixel intensities / total number of pixels) is extracted.
> Does anyone know if there is a better metric, or should multiple metrics be extracted?
> In general, I also wonder whether it is best to work with preprocessed images (i.e. with the features of interest segmented) before deriving texture metrics?
> I have the impression that otherwise differences in intensity and noise may bias the texture information towards general image quality rather than the actual pattern (shape and distribution) in the objects of interest.
> Many thanks for any contribution on this matter.
> Best regards,
>
> W
>
> //%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
> // Macro: Laws' Texture Analysis
> //%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
>
> setBatchMode(true);
> run("Clear Results");
> //dir=getDirectory(""); list=getFileList(dir); open(dir+list[0]);
> ver = newArray("[1\n4\n6\n4\n1\n]", "[-1\n-2\n0\n2\n1\n]", "[-1\n0\n2\n0\n-1\n]",
>     "[-1\n2\n0\n-2\n1\n]", "[1\n-4\n6\n-4\n1\n]");
> hor = newArray("[1 4 6 4 1\n]", "[-1 -2 0 2 1\n]", "[-1 0 2 0 -1\n]",
>     "[-1 2 0 -2 1\n]", "[1 -4 6 -4 1\n]");
> Dialog.create("Laws Texture Measurement");
> Dialog.addNumber("TEM Windowing Size", 7);  // windowing size (radius of the mean filter)
> Dialog.addCheckbox("Show TEM images", true);
> Dialog.show();
> ws = Dialog.getNumber();
> vis = Dialog.getCheckbox();
> id = getImageID; run("32-bit"); title = getTitle;
> for (i = 0; i < hor.length; i++) {
>     for (j = 0; j < ver.length; j++) {
>         selectImage(id); w = getWidth; h = getHeight;
>         run("Duplicate...", "title=h"+i+"v"+j);
>         // Step 1: convolution
>         selectImage("h"+i+"v"+j);
>         run("Convolve...", "text1="+ver[i]);
>         run("Convolve...", "text1="+hor[j]);
>         // Step 2: windowing to obtain Texture Energy Measure (TEM) images;
>         // originally the sum of absolute values over a 15x15 window,
>         // here the average value is used
>         selectImage("h"+i+"v"+j);
>         run("Abs");
>         run("Mean...", "radius="+ws);
>         // Step 3: contrast normalization: division by the image from the
>         // only kernel with a non-zero sum, i.e. h0v0
>         if (i > 0 || j > 0) {
>             imageCalculator("Divide 32-bit", "h"+i+"v"+j, "h0v0n");
>             rename("h"+i+"v"+j+"n");
>             print("h"+i+"v"+j+"n");
>         }
>         else rename("h0v0n");
>     }
> }
> for (i = 0; i < hor.length; i++) {
>     for (j = 0; j < ver.length; j++) {
>         // combine directional derivatives
>         if (i < j) {
>             imageCalculator("Add create 32-bit", "h"+i+"v"+j+"n", "h"+j+"v"+i+"n");
>             rename("ns"+i+""+j);
>         }
>         // scaling of the other TEM images for consistency
>         if (i == j) {
>             selectImage("h"+i+"v"+j+"n");
>             run("Multiply...", "value=2");
>             rename("ns"+i+""+j);
>         }
>     }
> }
> selectWindow("ns00"); close;
>
> // now a quantitative metric is extracted from the TEM images, for example
> // the average energy (sum of squared pixel intensities / total number of pixels)
> index = 0;
> if (vis) { newImage("TEMstack", "32-bit Black", w, h, 14); tid = getImageID; }
> for (i = 0; i < hor.length; i++) {
>     for (j = 0; j < ver.length; j++) {
>         if (isOpen("h"+i+"v"+j+"n")) {
>             selectImage("h"+i+"v"+j+"n"); close;
>         }
>         if (i <= j && (i > 0 || j > 0)) {
>             selectImage("ns"+i+""+j);
>             if (vis) { run("Select All"); run("Copy"); }
>             run("Square"); getRawStatistics(np, energy); close;
>             if (vis) { selectImage(tid); setSlice(index+1); run("Paste"); }
>             setResult("Label", index, "ns"+i+""+j);
>             setResult("Energy", index, energy);
>             index++;
>         }
>     }
> }
> updateResults;
> if (vis) { selectImage(tid); setSlice(1); resetMinAndMax; }
> setBatchMode("exit and display");
>
> //%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
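A minimal macro sketch of the thresholded energy Michael suggests above (the threshold value is an assumption and would need tuning per dataset; the sketch assumes the currently selected image is a 32-bit convolved image, before the macro's Abs/Mean windowing steps):

// Sketch: average energy of the current 32-bit image, ignoring pixels
// whose absolute value falls below an assumed noise threshold.
noise = 5;                        // assumed threshold; tune per dataset
w = getWidth; h = getHeight;
sum = 0; n = 0;
for (y = 0; y < h; y++) {
    for (x = 0; x < w; x++) {
        v = abs(getPixel(x, y));  // abs() and getPixel() are built-in macro functions
        if (v >= noise) { sum += v * v; n++; }
    }
}
if (n > 0) print("Thresholded energy: " + sum / n);
else print("No pixels above the threshold");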
In reply to this post by Winnok H. De Vos
On Thursday 05 Apr 2012 11:54:11 Winnok H. De Vos wrote:
> Thank you for your reply. Because I wasn't sure about the associativity, I
> first tested whether the convolution of the image with subsequent 1D
> kernels would be the same as the convolution of the image with a 2D kernel
> obtained after convolution of the vertical with the horizontal one and this
> appeared to be the case. E.g.

Hi Winnok,

Yes, you are right. Sorry again for my mistake. I had done the 3x3 (i.e. 2D) kernels before (I can send a macro if anyone is interested) without realising that it could be done with the 1D ones.

> Thank you for the links as well. Unfortunately, the documents always
> describe the first two steps, namely the convolution with the kernels and
> the calculation of local (i.e. pixel-wise) measures. So you still end up
> with an image (TEM) full of local metrics, be it the mean, absolute mean or
> variance. I was wondering whether there is a robust metric to express the
> overall image texture in one figure (like energy or entropy)?

I guess that the issue might depend on what the images contain (lots of different things, or just one type of thing on a background) and on the intended use (e.g. is this for segmentation of objects that might have a special texture, for classifying objects already segmented, or to get a global metric of texture variability?).

If the former (segmentation), I guess that one could use these measures to classify the image (say, using k-means or the methods used in the Weka segmentation), or apply principal components analysis (there is a plugin somewhere) to reduce the many images to those that explain a certain percentage of the variance.

Not sure this helps answer your question.

Cheers

Gabriel
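On the "one figure" question quoted above, a sketch of a global Shannon entropy computed from the gray-level histogram of a TEM image (the number of bins is an assumed choice; for 32-bit images the macro function getHistogram() takes the bin count as its third argument):

// Sketch: Shannon entropy (in bits) of the current image's histogram.
nBins = 256;                      // assumed bin count
getHistogram(values, counts, nBins);
getRawStatistics(nPixels);
entropy = 0;
for (k = 0; k < nBins; k++) {
    if (counts[k] > 0) {
        p = counts[k] / nPixels;
        entropy -= p * (log(p) / log(2));  // log() is the natural logarithm
    }
}
print("Entropy (bits): " + entropy);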
In reply to this post by Michael Schmid
Dear Michael
Thanks for your reply.

Indeed, the derivative operations are very local. I thought the pairwise addition of the directional derivatives would somehow balance for the orientation (cf. Sobel; see also the quadrature sketch after this message).

As for the metrics, I wonder whether a threshold will do, because sometimes images can have poor quality, making noise a relatively strong component. Would it then be better to segment the objects of interest first?

Thanks in advance.

w

On 05 Apr 2012, at 13:02, Michael Schmid wrote:

[...]
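On the orientation balancing: instead of the plain addition used in the macro, the two directional TEM images of a kernel pair could be combined in quadrature, as in the Sobel gradient magnitude. A sketch, assuming the macro's window titles h0v1n and h1v0n (the L5/E5 pair) are still open; this is an alternative suggestion, not part of the original macro:

// Sketch: orientation-balanced energy sqrt(gx^2 + gy^2) from the two
// directional TEM images (window titles assumed from the macro above).
selectImage("h0v1n"); run("Duplicate...", "title=gx"); run("Square");
selectImage("h1v0n"); run("Duplicate...", "title=gy"); run("Square");
imageCalculator("Add create 32-bit", "gx", "gy");
run("Square Root");
rename("edgeEnergy");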
Winnok,
I think that segmenting out the features of interest is probably a must. This looks like a pattern recognition / data mining problem. You would want to look into doing it with an ImageJ/Weka combination. I have seen some articles on this on the Web. I think there is an ImageJ/Fiji plugin that integrates the two as well.

David

On Thu, Apr 5, 2012 at 4:20 AM, Winnok H. De Vos <[hidden email]> wrote:

> [...]