Posted by
Ignacio Arganda-Carreras on
Feb 06, 2013; 3:09am
URL: http://imagej.273.s1.nabble.com/Re-advanced-Weka-segmentation-tp5001680p5001684.html
Dear Anda,
I can only think of this old survey from 1997:
http://machine-learning.martinsewell.com/feature-selection/DashLiu1997.pdf

In general, feature selection helps reduce the amount of memory used during
training and the subsequent classification. In AWS, the default classifier
is a random forest, which works very well even in the presence of many
non-informative features. If you have enough RAM, my advice is to start
playing around with as many features as possible and then keep reducing
them until you are left with the best ones.
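Outside of Weka, the "start wide, then prune" idea can be sketched in a few
lines of plain Python. This is a hypothetical toy example (not AWS or Weka
code): it scores each feature by a simple Fisher-style class-separation
criterion and keeps only the top-ranked ones.

```python
# Toy sketch: rank features by class separation, then keep the best ones.
# The data, feature counts, and scoring function are all illustrative.
import random
import statistics

random.seed(0)

def make_sample(label, n_noise=8):
    # Two informative features shifted by the class label, plus pure noise.
    informative = [label * 2.0 + random.gauss(0, 1) for _ in range(2)]
    noise = [random.gauss(0, 1) for _ in range(n_noise)]
    return informative + noise

# 100 labeled samples, 50 per class.
data = [(make_sample(y), y) for y in [0, 1] * 50]

def fisher_score(values0, values1):
    # Between-class separation divided by within-class spread.
    m0, m1 = statistics.mean(values0), statistics.mean(values1)
    v0, v1 = statistics.variance(values0), statistics.variance(values1)
    return (m0 - m1) ** 2 / (v0 + v1)

n_features = len(data[0][0])
scores = []
for j in range(n_features):
    v0 = [x[j] for x, y in data if y == 0]
    v1 = [x[j] for x, y in data if y == 1]
    scores.append((fisher_score(v0, v1), j))

# Rank features; the informative ones (indices 0 and 1) should come out on top.
ranked = sorted(scores, reverse=True)
top = sorted(j for _, j in ranked[:2])
print("top features:", top)
```

A random forest tolerates the uninformative features either way, but
pruning them shrinks the training data in memory, which is usually the
practical bottleneck in AWS.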
The Weka explorer has an option to perform feature selection as well.
I hope this helps!
ignacio
On Tue, Feb 5, 2013 at 4:52 PM, Anda Cornea <
[hidden email]> wrote:
> Hello!
>
> I am a very happy end user of the weka segmentation with very limited
> understanding of the algorithm. Can anyone recommend any easy reading of
> general concepts and practical implications of the large choice of training
> features?
>
> Thank you in advance,
>
> Anda
>
>
>
> Anda Cornea, PhD
> Director of the Imaging and Morphology Support Core
> Oregon National Primate Research Center
> Oregon Health & Science University
> 503-690-5293
>
> --
> ImageJ mailing list:
> http://imagej.nih.gov/ij/list.html
--
Ignacio Arganda-Carreras, Ph.D.
Seung's lab, 46-5065
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
43 Vassar St.
Cambridge, MA 02139
USA
Phone: (001) 617-324-3747
Website:
http://bioweb.cnb.csic.es/~iarganda/index_EN.html