How does Feature Selection - Forward Elimination work in detail?
Muhammed_Fatih_
New Altair Community Member
Hello together,
I have a question regarding the picking process of Forward Elimination. The documentation of RapidMiner tells us that: "The Forward Selection operator starts with an empty selection of attributes and, in each round, it adds each unused attribute of the given ExampleSet. For each added attribute, the performance is estimated using the inner operators, e.g. a cross-validation. Only the attribute giving the highest increase of performance is added to the selection. Then a new round is started with the modified selection. [...]"
What does this mean in detail? Assume I have a dataset with 10 attributes, a1, a2, ... a10. Does Forward Selection sequentially go from a1 to a2 and then to a3 until no increase in performance is reached, or what is meant by "Only the attribute giving the highest increase of performance is added to the selection."?
Thank you in advance for your responses!
Best regards,
Fatih
Best Answer
-
Hi!
In round one, every attribute is tried on its own (a1 ... a10). RapidMiner selects the best one, so you end up with a one-attribute model built from the attribute giving the highest performance. Let's assume a6 was the best one.
In round two, the algorithm builds a model from a6 plus each of the remaining attributes (a1 ... a5, a7 ... a10), so each candidate model consists of two attributes: a6 and one other. The combination giving the best performance wins. It can also happen that the performance doesn't improve over round one; in that case the round-two results are discarded and only the single-attribute model is returned.
If round two added an attribute, round three tries three-attribute models, with the first two attributes fixed from the previous rounds. And so on.
I hope this is clear.
There's a setting for "speculative rounds". It allows the search to continue for a number of rounds without performance improvement, in the hope that a later round finds a better combination of attributes. This can discover good combinations, but at the price of increased calculation time.
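The greedy procedure described above can be sketched roughly like this (a toy Python illustration, not RapidMiner's actual implementation; the `score` function stands in for the inner cross-validation, and the attribute names and toy weights are made up):

```python
# Hypothetical sketch of greedy forward selection with speculative rounds.
# score(subset) stands in for the performance estimated by the inner
# cross-validation; higher is better.

def forward_selection(attributes, score, speculative_rounds=0):
    """Each round, add the unused attribute giving the highest score.

    Stops once more than `speculative_rounds` consecutive rounds pass
    without improvement, and returns the best subset seen so far.
    """
    selected = []
    best_score = float("-inf")
    best_selected = []
    rounds_without_gain = 0
    remaining = list(attributes)
    while remaining:
        # Try every unused attribute in combination with the current selection
        # and keep only the one giving the highest performance.
        candidate = max(remaining, key=lambda a: score(selected + [a]))
        selected = selected + [candidate]
        remaining.remove(candidate)
        s = score(selected)
        if s > best_score:
            best_score = s
            best_selected = list(selected)
            rounds_without_gain = 0
        else:
            rounds_without_gain += 1
            if rounds_without_gain > speculative_rounds:
                break
    return best_selected, best_score

# Toy score: pretend a6, a2 and a9 carry signal; every extra attribute
# costs a small penalty, so adding useless ones hurts.
USEFUL = {"a6": 0.30, "a2": 0.20, "a9": 0.10}

def toy_score(subset):
    return 0.5 + sum(USEFUL.get(a, 0.0) for a in subset) - 0.01 * len(subset)

attrs = [f"a{i}" for i in range(1, 11)]
chosen, perf = forward_selection(attrs, toy_score)
# With this toy score, a6 wins round one, a2 round two, a9 round three,
# and round four brings no gain, so the search stops.
```

With `speculative_rounds=0` the search stops at the first round without improvement; raising it lets the search push past a flat round, as described above.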
Regards,
Balázs
Answers
-
Hi Balazs,
thank you for your answer. This helped me! An additional question with regard to the Forward Elimination subprocess: which classification model would you use within the cross-validation to reduce the running time? I am currently running a Forward Elimination on a matrix with 72,000 rows and 9,000 attributes. I've chosen an SVM as classifier for my binary label. The model has now been running for 2 days.
Thank you for your answer!
Fatih
-
Hi,
Naive Bayes is the fastest algorithm, especially with such a huge number of attributes.
However, it is not always valid to use one algorithm for the feature selection and a different one for the final model. This approach can work well for you or fail badly; still, it's a good idea to try.
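To see why Naive Bayes is so cheap inside a selection loop, here is a minimal Gaussian Naive Bayes training pass in plain Python (an illustration of the general idea, not RapidMiner's implementation): training needs only one linear sweep over the examples, while SVM training is typically super-linear in the number of examples.

```python
# Minimal Gaussian Naive Bayes training: estimate per-class mean and
# standard deviation for each attribute in a single pass over the data.
from collections import defaultdict
from math import sqrt

def fit_gaussian_nb(rows, labels):
    sums = defaultdict(float)
    sqs = defaultdict(float)
    counts = defaultdict(int)
    # One linear sweep: accumulate sum and sum of squares per (class, attr).
    for row, y in zip(rows, labels):
        counts[y] += 1
        for j, v in enumerate(row):
            sums[(y, j)] += v
            sqs[(y, j)] += v * v
    params = {}
    for (y, j), s in sums.items():
        n = counts[y]
        mean = s / n
        var = max(sqs[(y, j)] / n - mean * mean, 1e-9)  # guard against zero variance
        params[(y, j)] = (mean, sqrt(var))
    return params

# Tiny usage example with made-up numbers: two examples, two attributes.
params = fit_gaussian_nb([[1.0, 10.0], [3.0, 14.0]], ["pos", "pos"])
```

Since forward selection retrains a model for every candidate attribute in every round, this per-fit cost difference multiplies across thousands of fits, which is why swapping the SVM for Naive Bayes during selection can shrink the total runtime so dramatically.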
Regards,
Balázs