AdaBoost Decision Stump

MSTRPC (New Altair Community Member)
edited November 5 in Community Q&A
Hey all,
I have a question about decision stumps in the AdaBoost algorithm, because the literature recommends using a "weak learner".

First I used a Decision Stump as the learner inside the AdaBoost operator with 10 iterations, but the trees looked identical and my results weren't as expected. I saw that the tutorial process for the AdaBoost operator uses a Decision Tree with a depth of 10. But isn't the advantage of AdaBoost that you use a weak learner to get better results through iterative learning?

With the default Decision Tree the results are good, but I don't understand why a normal decision tree can be used here.


After running that process I looked at the precision of the model, and in the results there is a "w" with a value. Is this the sum of the weights per stump? I couldn't find any explanation. Sorry if this question has been asked before, I haven't been using RapidMiner for long.
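My guess is that it might be the weight that AdaBoost gives to each stump (each model) rather than a sum, something like the classic formula below, but that is just my assumption, I couldn't verify in the docs how RapidMiner computes it:

```python
import math

# Classic (binary) AdaBoost model weight for one weak learner.
# eps is the stump's weighted error rate in that iteration.
# Whether RapidMiner's "w" is computed exactly like this is an assumption.
def model_weight(eps: float) -> float:
    return 0.5 * math.log((1.0 - eps) / eps)

print(model_weight(0.3))  # a stump with 30% weighted error gets w ≈ 0.42
```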


Greetings :smile:

MSTRPC

Best Answer

  • sara20 (New Altair Community Member)
    edited June 2020 · Answer ✓
    @MSTRPC

    Hello

    From my understanding, if you take a look at the definitions of a Decision Tree and a Decision Stump, you will find the answer.

    The Decision Stump operator is used for generating a decision tree with only one single split. The resulting tree can be used for classifying unseen examples. This operator can be very efficient when boosted with operators like the AdaBoost operator. The examples of the given ExampleSet have several attributes and every example belongs to a class (like yes or no). The leaf nodes of a decision tree contain the class name whereas a non-leaf node is a decision node. The decision node is an attribute test, with each branch (to another decision tree) being a possible value of the attribute. (This definition is from the documentation linked below.)
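    Outside of RapidMiner, the same idea looks like this in a minimal scikit-learn sketch (just an illustration on synthetic data, not your process; a DecisionTreeClassifier with max_depth=1 plays the role of the Decision Stump operator):

    ```python
    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic binary data as a stand-in for your ExampleSet.
    X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

    # A single decision stump: a tree with exactly one split.
    stump = DecisionTreeClassifier(max_depth=1).fit(X_train, y_train)
    print("single stump:", stump.score(X_test, y_test))

    # The same weak learner boosted for 10 iterations; because AdaBoost
    # reweights the examples each round, the 10 stumps should differ.
    boosted = AdaBoostClassifier(
        estimator=DecisionTreeClassifier(max_depth=1),
        n_estimators=10,
        random_state=42,
    ).fit(X_train, y_train)
    print("boosted stumps:", boosted.score(X_test, y_test))

    # The weight AdaBoost assigned to each stump (possibly what the "w"
    # values in the RapidMiner results correspond to).
    print("model weights:", boosted.estimator_weights_)
    ```

    (Note: scikit-learn >= 1.2 names the parameter `estimator`; older versions call it `base_estimator`.)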

    So first visualize your data and explore it to learn more about it, then build a process that suits your data and choose the best algorithm for it. Auto Model is also a very good part of RapidMiner for running several algorithms and comparing their results.

    For more information, take a look here:
    https://docs.rapidminer.com/latest/studio/operators/modeling/predictive/trees/decision_stump.html


    I hope this helps
    sara

Answers

  • MSTRPC (New Altair Community Member)
    Hello, 

    Thank you for the answer, this really helps me solve the problem :)

    Greetings,

    MSTRPC