Decision trees - list of misclassified instances
Hello all,
I've just created a classifier based on a decision tree here in RapidMiner, and used X-validation to measure its accuracy.
After running the process, a confusion matrix was created showing a small number of misclassified instances. I would like to know which instances those are (not just how many), but I couldn't find out how to do it >:(
Can someone please help me with this?
Thank you very much!
Hi Steffen,
That was very helpful, thanks! Your first solution really does the job, but I'd prefer the second one (more elegant) if only I could make it work.
The problem is that when I write the predictions to an XLS file, only one of the iterations of the X-validation shows up. The "Write as Text" operator seems to be the only one that appends the info from all iterations, but its output is hard to read.
Is there any obvious workaround for this? Otherwise, no problem: I will compile the information I already have manually.
Once again, thanks for your help,
Pedro
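
For readers outside RapidMiner, the append-per-fold behaviour Pedro is after can be sketched in Python with scikit-learn: loop over the folds yourself and append each iteration's predictions to a single table. This is only a minimal sketch; the iris dataset, the 10-fold split, and all column and file names are illustrative assumptions, not taken from the thread.

```python
# Sketch: accumulate the predictions from every cross-validation fold
# into one table, instead of keeping only the last fold's output.
# The iris data, 10 folds, and column names are illustrative assumptions.
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import KFold
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True, as_frame=True)
folds = KFold(n_splits=10, shuffle=True, random_state=0)

rows = []
for fold_no, (train_idx, test_idx) in enumerate(folds.split(X)):
    model = DecisionTreeClassifier(random_state=0)
    model.fit(X.iloc[train_idx], y.iloc[train_idx])

    part = X.iloc[test_idx].copy()
    part["label"] = y.iloc[test_idx]
    part["prediction"] = model.predict(X.iloc[test_idx])
    part["fold"] = fold_no
    rows.append(part)                   # append, don't overwrite

all_preds = pd.concat(rows)             # every iteration in one table
all_preds.to_excel("predictions_all_folds.xlsx")
```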
Are you angry at yourself?
If you set a breakpoint after the "ModelApplier" operator (select the operator and right-click to see this option), you can see that you have both the predicted and the original label available before the confusion matrix is calculated. If this is not enough, you can:
- use the "Filter Examples" operator with the wrong/correct prediction option to keep only the misclassified examples
- use a "Write" operator to write them out
Hope this was helpful,
steffen
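
For completeness, here is what Steffen's recipe (apply the model, compare predicted against original label, filter the wrong predictions, write them out) looks like outside RapidMiner, as a minimal Python/scikit-learn sketch. The dataset, classifier settings, and file name are illustrative assumptions.

```python
# Sketch: filter out the misclassified instances, mirroring
# "apply model -> compare labels -> filter wrong predictions -> write".
# The iris data and file name are illustrative assumptions.
import pandas as pd
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True, as_frame=True)

# One out-of-fold prediction per instance, aggregated over all folds.
pred = cross_val_predict(DecisionTreeClassifier(random_state=0), X, y, cv=10)

results = X.copy()
results["label"] = y
results["prediction"] = pred

# The "wrong predictions" filter: keep only the misclassified rows.
wrong = results[results["label"] != results["prediction"]]
print(wrong)

# Write them out, analogous to a Write operator in RapidMiner.
wrong.to_excel("misclassified.xlsx")
```

Because `cross_val_predict` returns exactly one out-of-fold prediction per instance, the misclassified rows from all iterations land in a single table, which also sidesteps the one-fold-only export problem Pedro describes.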