How to interpret these different performances?

User: "Fred12"
New Altair Community Member
Updated by Jocelyn

hi,

I have been trying different operator settings with Boosting and Bagging, using W-J48 and Random Forests...

I basically used an Optimize Parameters (Grid) operator; inside it an X-Validation, inside that a MetaCost operator, and inside that an AdaBoost or Bagging operator with a W-J48 or Random Forest learner.
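To make that nesting concrete for readers outside RapidMiner: a rough scikit-learn analogue is a grid search whose inner loop is a cross-validation over a boosted tree ensemble. This is only an illustrative sketch on a toy dataset, not the original process (MetaCost has no direct scikit-learn counterpart, and all parameter values here are made up):

```python
# Hypothetical sketch of the nested setup: grid search -> cross-validation
# -> AdaBoost over decision trees. Parameters are illustrative only.
from sklearn.datasets import load_iris
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)  # stand-in data, not the original dataset

grid = GridSearchCV(
    AdaBoostClassifier(random_state=0),          # boosted decision stumps
    param_grid={"n_estimators": [10, 50],        # like Optimize Parameters (Grid)
                "learning_rate": [0.5, 1.0]},
    cv=5,                                        # like the inner X-Validation
)
grid.fit(X, y)
print(grid.best_params_, round(grid.best_score_, 3))
```

The key point of the structure is that every parameter combination is scored by the inner cross-validation, so the grid search compares averaged estimates rather than single runs.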


Now I get different performance results (I use 70% for training, 30% for testing):

For AdaBoost with MetaCost and W-J48 Decision Tree I get:

[screenshot: Unbenannt.PNG]

Bagging with MetaCost and W-J48:

[screenshot: Unbenannt2.PNG]

Bagging with MetaCost and Random Forest:

[screenshot: Unbenannt3.PNG]


Now, which one of these is most representative? Should I use a 70/30 split for validation, or something like 50/50?
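One way to see why the choice of split matters: a single 70/30 hold-out gives one accuracy number that depends on which examples happen to land in the test set, while k-fold cross-validation averages k such numbers. A small illustrative sketch (toy dataset and classifier, not the original process):

```python
# Illustrative comparison: one 70/30 hold-out estimate vs. a 10-fold
# cross-validated estimate of the same classifier's accuracy.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)  # stand-in data
clf = DecisionTreeClassifier(random_state=0)

# single 70/30 hold-out: one number, sensitive to the particular split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
holdout_acc = clf.fit(X_tr, y_tr).score(X_te, y_te)

# 10-fold cross-validation: the mean of ten hold-out estimates
cv_scores = cross_val_score(clf, X, y, cv=10)
print(round(holdout_acc, 3), round(cv_scores.mean(), 3))
```

Changing `random_state` in `train_test_split` will move the hold-out number around noticeably, while the cross-validated mean is more stable.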

In the last one I get 83.7% accuracy, but class 4 recall is only 60%. Does that mean I should focus more on that class (and that this result is therefore not optimal)?

In the first example, by contrast, recall is around 75% for classes 3 and 4 and over 90% for class 1, and precision is above 80% everywhere except pred. 4, which is only around 78%.

In the last performance, however, precision is about 85.6% for pred. 4 and 86.7% for pred. 3...


Second question: Is MetaCost necessary together with Boosting? As I understand it, boosting already implicitly re-weights misclassified examples more heavily than others...

Last question: Can I put more than one classifier into AdaBoost or Bagging (e.g., a Decision Tree together with Naive Bayes or an SVM)?
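For context on that last question: AdaBoost and Bagging each wrap a single base learner, but heterogeneous models can be combined in a voting ensemble instead. A hedged scikit-learn sketch of that idea (RapidMiner's own operators differ; this is only an analogue on toy data):

```python
# Illustrative heterogeneous ensemble: a decision tree, Naive Bayes, and an
# SVM combined by soft (probability-averaged) voting.
from sklearn.datasets import load_iris
from sklearn.ensemble import VotingClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)  # stand-in data

vote = VotingClassifier(
    estimators=[
        ("tree", DecisionTreeClassifier(max_depth=3, random_state=0)),
        ("nb", GaussianNB()),
        ("svm", SVC(probability=True, random_state=0)),
    ],
    voting="soft",  # average predicted class probabilities
)
vote.fit(X, y)
print(round(vote.score(X, y), 3))
```

Boosting and bagging, by contrast, get their diversity from re-weighting or resampling the data for many copies of one learner, which is why they take only a single inner operator.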
