Hi list,
I'm using the Trainable Weka Segmentation from Fiji in one of my macros to segment round/oval objects from the background in images which are typically about 1360x1024 pixels. I trained the classifier, saved the data, and everything now runs fully automated since the plugin is available through macro commands. BUT:

1) Applying a classifier to a test image takes very long (about 20 min). Is there a way to decrease the time needed for the segmentation? My macro looks like this:

for (i = 0; i < list.length; i++) {
    run("Trainable Weka Segmentation"); // is it necessary to open the plugin to apply a classifier?
    call("trainableSegmentation.Weka_Segmentation.loadClassifier", ...);
    call("trainableSegmentation.Weka_Segmentation.loadData", ...);
    call("trainableSegmentation.Weka_Segmentation.setClassHomogenization", "true");
    call("trainableSegmentation.Weka_Segmentation.setOpacity", "75");
    call("trainableSegmentation.Weka_Segmentation.applyClassifier", directory, list[i], "showResults=true", "storeResults=true", "probabilityMaps=false", savedirectory);
}

I used the features: Gaussian blur, Sobel filter, Difference of Gaussians, Membrane projections, Variance, Mean, Minimum, Maximum, Median, Anisotropic diffusion, Bilateral, Lipschitz, Laplacian, Neighbors.

I know one reason it is so slow is that I chose so many features, but I wanted the best possible result, ideally with every object separated from the background and detected as an object. Or is it pointless to choose that many features in order to get reliably segmented images? (I keep processing those images automatically, without manual corrections.)

2) I want to understand how the Trainable Weka Segmentation decides whether a pixel is background or foreground. As far as I have learned, the features of the input image are stored in a vector, one image is created per feature, and the classifier is trained with those images. The classifier works with decision trees combined into a random forest. I understand the method of a single decision tree, but not how a random forest works in detail.

Additionally, I don't yet understand how the decisions are made in the individual decision trees. For example, the Gaussian blur feature creates a Gaussian-blurred version of the original image, and then what? Does the algorithm extract some kind of value from the Gaussian-blurred image during training and, when I apply the classifier, compare it with the value of the Gaussian-blurred input image? And does the random forest compare each pixel from its training with the new input image I apply the classifier to, or does it compare a particular area of pixels? If it compares areas, how are they defined?

So far the Trainable Weka Segmentation works perfectly, but I don't want to use something without understanding how it works. At the moment it is a kind of black box for me: I know how to use it, but not how it really functions. Every other attempt to separate those round/oval objects from the background has failed, so at the moment this plugin is the only approach that works for me.

I hope someone can help me, especially with understanding how the plugin functions.

Kind regards
Hello GuPint,
1) Given the size of your images, I would say the number of features is the most likely cause of the computation time. Have you tried using fewer or smaller features?

2) The features you select are filters that are applied to the original image (with different radii, depending on the parameters you choose in the Settings dialog). Each pixel of your training traces is then represented as a feature vector, formed by the pixel value of the original image at that position plus the pixel values of the filtered versions of the image at the same position. Once you have the vectors, you can train any standard learning algorithm to classify between the existing classes (foreground and background in your case). By default, Trainable Weka Segmentation uses a random forest (http://en.wikipedia.org/wiki/Random_forest). In brief, a random forest is a set of N random trees, i.e. decision trees in which the decision at each node is taken based on M randomly selected features. N and M are parameters of the method. If you prefer to use another method you feel more comfortable with, you can select it in the Settings dialog.
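To make the feature-vector idea concrete, here is a minimal ImageJ macro sketch. It is purely illustrative and not the plugin's internal code: the pixel position, the choice of filters and the sigma value are arbitrary assumptions. It builds a tiny three-element vector for one pixel of an open single-slice image:

    // assumes a single-slice image is already open
    x = 100; y = 100;                 // example pixel position
    orig = getTitle();

    v0 = getPixel(x, y);              // feature 1: original pixel value

    run("Duplicate...", "title=gauss2");
    run("Gaussian Blur...", "sigma=2");
    v1 = getPixel(x, y);              // feature 2: Gaussian blur, sigma 2
    close();

    selectWindow(orig);
    run("Duplicate...", "title=sobel");
    run("Find Edges");                // ImageJ's Sobel filter
    v2 = getPixel(x, y);              // feature 3: Sobel edge response
    close();

    selectWindow(orig);
    featureVector = newArray(v0, v1, v2);
    Array.print(featureVector);       // this is what the classifier sees for pixel (x, y)

In the real plugin every selected feature (at each of the sigmas between the minimum and maximum sigma) contributes one such value, so the vectors grow quickly, which is also why a large feature set makes both training and applying the classifier slow.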
I hope this helps!

ignacio

--
Ignacio Arganda-Carreras, Ph.D.
Seung's lab, 46-5065
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
43 Vassar St.
Cambridge, MA 02139
USA

Phone: (001) 617-324-3747
Website: http://bioweb.cnb.csic.es/~iarganda/index_EN.html
Hello again,
> 1) Yes, but since I want the most reliable result (a segmented binary
> image), which I use automatically without checking manually for artifacts
> inside my objects (foreground), is there no way to speed up the process
> other than using fewer features?

Well, if you think you must use so many features, you can try to speed up the classification (once you have trained your classifier) using the Fiji Archipelago: https://github.com/fiji/fiji/pull/20

If you send me your original image and the expected segmentation, I can try to help you reduce the number of features.

> 2) I checked the settings for training my classifier: Membrane thickness 1,
> Membrane patch size 19, Minimum sigma 1.0, Maximum sigma 16.0.
>
> For the random forest: maxDepth 0 (no pruning, unlimited depth),
> numFeatures 2, numThreads 6, numTrees 200, seed 995128238.
>
> So the random forest creates 200 decision trees. 2 of all my chosen
> features (Gaussian blur, Variance, Mean, Laplacian, ...) get randomly
> picked to create those 200 trees with 2 nodes, classifying one pixel after
> the other in my test image, and if the majority of those 200 trees classify
> a pixel as foreground, it will be a foreground pixel in my resulting image.
> Did I get that right so far?

Almost. The 200 trees do not have only two nodes; rather, at each node the decision is taken based on 2 features (randomly selected from the total number of features).

> Still I don't understand how it compares each pixel in the test image with
> the pixels in my training image. For example, with the feature Gaussian
> blur: the classifier gets the grey value of one pixel of the training image
> (let's say its classification is foreground with the grey value 80) and
> also the grey value of the Gaussian-blurred pixel at the same position
> (maybe 90 after blurring). Do these 2 values count as borders of the
> classification node in the decision tree, so that if a pixel of the test
> image has a grey value inside these borders (for example the grey value 85)
> it is classified as foreground? Or does it otherwise remain unclassified at
> that node?

Imagine that at a certain node we arrive with 100 samples of both classes, and the randomly selected features are Gaussian blur with radius 4.0 and Laplacian of radius 2.0. The node will pick the more discriminating of the two features (the one that separates the 100 samples best) based on a certain metric (usually the Gini coefficient or the entropy) and will set the limit value of that feature accordingly. So in the end the node is stored with a feature name and a limit value (e.g. Gaussian blur 4.0 and 80). Does that make sense? To classify test pixels/vectors, the only thing you need to do is go down the tree, comparing at each node against the stored feature and limit value to take the correct path (a toy sketch of this step follows at the end of this message).

By the way, each random tree is trained on a re-sampled dataset of the same size as the input one, but resampled with repetition.

I hope this clarifies things. You should have a look at Breiman's site: http://stat-www.berkeley.edu/users/breiman/RandomForests/cc_home.htm

Cheers!

> Thanks very much for your help so far
>
> Kind regards
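To tie the node and voting ideas together, here is a toy ImageJ macro sketch. The feature values and thresholds are made up, and each "tree" is reduced to a single node; the real trees learned by the plugin have many nodes and are trained on bootstrap samples, as described above.

    // toy illustration of the majority vote, not the plugin's real trees
    featureVector = newArray(80, 90, 35);          // e.g. original value, Gaussian blur 2.0, Sobel
    featureIndex  = newArray(1, 0, 2, 1, 0);       // which feature each single-node "tree" tests
    threshold     = newArray(85, 70, 40, 95, 60);  // the limit value stored in that node
    n = lengthOf(featureIndex);                    // number of "trees"

    votes = 0;
    for (t = 0; t < n; t++) {
        // go "down" the (one-node) tree: compare the selected feature against its limit value
        if (featureVector[featureIndex[t]] > threshold[t])
            votes++;                               // this tree votes "foreground"
    }
    if (votes > n / 2)
        print("Majority vote: foreground (" + votes + " of " + n + " trees)");
    else
        print("Majority vote: background (" + votes + " of " + n + " trees)");

In the real classifier each of the 200 trees has many such nodes (each one choosing the more discriminating of numFeatures=2 randomly picked features via Gini or entropy), and this vote is repeated for every one of the roughly 1.4 million pixels of a 1360x1024 image, which is largely where the 20 minutes go.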