What algorithm does the Decision Tree operator use in RapidMiner?
johnny5550822
Hi all,
What kind of decision tree algorithm does RapidMiner use? Does it handle imbalanced data?
Thanks!
Johnny
Tagged: AI Studio, Decision Tree, Algorithms
fras
If you have strongly imbalanced data, do not use a decision tree.
In general, exploring your data with a decision tree is a good idea; applying the model to unseen data is not always.
You may preprocess your data by applying the "Sample (Bootstrapping)" operator, but you should switch off that preprocessing in the testing step.
For further details, please refer to the documentation of the Decision Tree operator.
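(Illustrative aside: RapidMiner processes are built from operators rather than code, but the same idea can be sketched in Python with scikit-learn. The toy dataset, parameters, and resampling strategy below are assumptions made purely for illustration; the point is that only the training data is resampled, while the test set is left untouched, mirroring "switch off preprocessing in the testing step".)

```python
# Sketch of bootstrapping the minority class on the TRAINING data only,
# then evaluating on an untouched test set. Not RapidMiner code.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.utils import resample

# Imbalanced toy data: roughly 95% negative, 5% positive.
X, y = make_classification(n_samples=2000, weights=[0.95, 0.05], random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=42)

# Bootstrap (sample with replacement) the minority class of the training set.
minority = y_train == 1
X_min_up, y_min_up = resample(X_train[minority], y_train[minority],
                              replace=True,
                              n_samples=int((~minority).sum()),
                              random_state=42)
X_balanced = np.vstack([X_train[~minority], X_min_up])
y_balanced = np.concatenate([y_train[~minority], y_min_up])

# Train on the balanced data, but evaluate on the original, imbalanced test set.
tree = DecisionTreeClassifier(random_state=42).fit(X_balanced, y_balanced)
print("Test accuracy:", tree.score(X_test, y_test))
```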
johnny5550822
Thanks for your reply. I know there are algorithms out there that solve the imbalance problem for decision trees, but I am not sure which variant the Decision Tree operator in RapidMiner uses. Is it C4.5 or something else?
MariusHelf
Hi,
I am not sure which implementation the RapidMiner decision tree uses; I suppose it is something similar to C4.5. If you want to make sure you are using C4.5, you can use W-J48 from the Weka Extension. That operator is a free implementation of C4.5.
Best regards,
Marius
johnny5550822
Great, thanks a lot!
fmon
I suppose that, depending on the criterion you choose in the parameter settings of the Decision Tree operator, RapidMiner produces a different tree using a different algorithm, such as C4.5.
Am I right?
If anyone has any information, please share it here.
Thanks
MariusHelf
Hi,
The algorithm stays the same no matter which criterion you choose. Only the method used to select the "best" splitting attribute in each node changes, depending on the parameter setting.
Best regards,
Marius
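(Illustrative aside: a rough Python sketch, not RapidMiner's actual code, of the point above: the same candidate split can be scored with information gain or with the Gini index, and only the number used for ranking changes, not the tree-growing procedure. The function names and toy labels are invented for illustration.)

```python
# Score one candidate split with two different criteria.
import numpy as np

def entropy(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def gini(labels):
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def split_score(parent, children, criterion="information_gain"):
    """Impurity reduction of a split: parent labels vs. label arrays per child."""
    impurity = entropy if criterion == "information_gain" else gini
    weighted = sum(len(c) / len(parent) * impurity(c) for c in children)
    return impurity(parent) - weighted

# The same split, scored two ways -- only the ranking number changes.
parent = np.array([0, 0, 0, 0, 1, 1, 1, 1])
children = [np.array([0, 0, 0, 1]), np.array([0, 1, 1, 1])]
print(split_score(parent, children, "information_gain"))
print(split_score(parent, children, "gini_index"))
```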
fmon
Hi,
Thank you for your helpful answer.
So, does anyone know which algorithm the "Decision Tree" operator actually uses to produce a decision tree?
MariusHelf
Well, as I said, it's similar to C4.5. In each node the split attribute is chosen by iterating all attributes, finding the best split for each attribute with respect to the splitting criterion, and then using the attribute that maximizes the chosen criterion.
For nominal attributes, one branch is created for each value. For numerical and date attributes, a binary split is performed; to find the best split value, all values occurring in the training data are tried as candidates.
The procedure is repeated until the leaves are pure or one of the pre-pruning conditions is met. Then, optionally, post-pruning is applied.
Best regards,
Marius
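(Illustrative aside: a simplified Python sketch of the procedure described above, not the actual RapidMiner implementation. The use of entropy as the criterion, the minimal-size pre-pruning condition, and the nested-dictionary tree representation are assumptions made for the sketch; post-pruning is omitted.)

```python
# Simplified sketch of the described procedure -- NOT the RapidMiner source.
# Nominal attributes: one branch per value. Numerical attributes: best binary
# split, trying every value observed in the training data. Recursion stops at
# pure leaves or when an (assumed) minimal-size pre-pruning condition is met.
import numpy as np
from collections import Counter

def entropy(y):
    counts = np.array(list(Counter(y).values()), dtype=float)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def best_numeric_split(x, y):
    """Try every observed value as a binary split point; return (value, gain)."""
    best_value, best_gain = None, -np.inf
    for v in np.unique(x):
        left, right = y[x <= v], y[x > v]
        if len(left) == 0 or len(right) == 0:
            continue
        gain = entropy(y) - (len(left) * entropy(left) + len(right) * entropy(right)) / len(y)
        if gain > best_gain:
            best_value, best_gain = v, gain
    return best_value, best_gain

def grow(X, y, nominal, min_size=2):
    """X: dict attribute name -> np.array, y: label array, nominal: set of names."""
    if len(set(y)) == 1 or len(y) < min_size:        # pure leaf or pre-pruning
        return Counter(y).most_common(1)[0][0]
    best_attr, best_gain, best_value = None, -np.inf, None
    for a, x in X.items():                           # iterate all attributes
        if a in nominal:                             # one branch per value
            weighted = sum(np.sum(x == v) * entropy(y[x == v]) for v in np.unique(x))
            value, gain = None, entropy(y) - weighted / len(y)
        else:                                        # binary split on a threshold
            value, gain = best_numeric_split(x, y)
        if gain > best_gain:
            best_attr, best_gain, best_value = a, gain, value
    if best_attr is None or best_gain <= 0:          # no useful split left: leaf
        return Counter(y).most_common(1)[0][0]
    xa = X[best_attr]
    if best_attr in nominal:
        branches = {v: xa == v for v in np.unique(xa)}
    else:
        branches = {"<= %s" % best_value: xa <= best_value,
                    "> %s" % best_value: xa > best_value}
    return {"attribute": best_attr,
            "children": {label: grow({a: x[mask] for a, x in X.items()},
                                     y[mask], nominal, min_size)
                         for label, mask in branches.items()}}
```

Calling grow with a dictionary of attribute columns, a label array, and the set of nominal attribute names returns a nested dictionary of split nodes with class labels at the leaves.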
fmon
Thanks,
I just wanted to be sure!