My honest testing performance is greater than my training performance. Is it luck, or is something wrong?
I split my data 4:1 using the Split Data operator, for training and honest testing. I am using a Decision Tree with cross-validation.
The performance results are: testing accuracy 93.21%, kappa 0.863; training accuracy 93.97%, kappa 0.695.
I need to know whether the model is underfitting the data and how I should interpret these results.
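For reference, the setup described above (4:1 split, a Decision Tree, accuracy and Cohen's kappa on both partitions) can be sketched as follows. This is a minimal sketch assuming scikit-learn and synthetic data, not the original RapidMiner process:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Synthetic stand-in for the original data source.
X, y = make_classification(n_samples=1000, random_state=0)

# 4:1 split -> 80% for training, 20% held out for "honest" testing.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)

# Report accuracy and kappa on both partitions, as in the question.
for name, Xs, ys in [("training", X_tr, y_tr), ("testing", X_te, y_te)]:
    pred = model.predict(Xs)
    print(f"{name}: accuracy={accuracy_score(ys, pred):.2%} "
          f"kappa={cohen_kappa_score(ys, pred):.3f}")
```

Note that an unpruned tree will typically memorize the training partition, so comparing the two partitions' scores is exactly how over- or underfitting is usually checked.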

Hi,
I would use cross-validation to check the standard deviation (std_dev) of the performance. Then you can see how lucky you are.
Best,
Martin
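Martin's suggestion can be sketched outside RapidMiner as well. A minimal version, assuming scikit-learn and synthetic data: run k-fold cross-validation and look at the mean and standard deviation of the accuracy across folds.

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the original data source.
X, y = make_classification(n_samples=1000, random_state=0)

# 10-fold cross-validation; scores holds one accuracy per fold.
scores = cross_val_score(DecisionTreeClassifier(random_state=0), X, y, cv=10)
print(f"accuracy: {scores.mean():.2%} +/- {scores.std():.2%}")
```

A small std_dev relative to the gap between training and testing performance suggests the result is stable rather than a lucky split.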
hi @mschmitz
It says +/- 0.31%.
Thanks @mschmitz
Can you comment on the kappa values my model is producing? I'm curious why kappa is higher for honest testing than for training.
Thanks again
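For context on why kappa and accuracy can move differently between partitions: Cohen's kappa corrects accuracy for chance agreement, so it depends heavily on the class balance of each partition. A small illustrative example, assuming scikit-learn (the label vectors are made up, not from the original data):

```python
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Two cases with identical accuracy (80%) but different class balance.
# Balanced classes: 5 of each label, 2 mistakes.
y_bal = [0] * 5 + [1] * 5
p_bal = [0] * 4 + [1] + [1] * 4 + [0]

# Skewed classes: 9 of one label, 1 of the other, 2 mistakes.
y_skew = [0] * 9 + [1]
p_skew = [0] * 8 + [1] + [0]

# Same accuracy, very different kappa: chance agreement is much higher
# when one class dominates, so kappa drops (here it even goes negative).
print(accuracy_score(y_bal, p_bal), cohen_kappa_score(y_bal, p_bal))    # 0.8, 0.6
print(accuracy_score(y_skew, p_skew), cohen_kappa_score(y_skew, p_skew))
```

So a higher kappa on the test partition with slightly lower accuracy can simply mean the test partition's class distribution differs from the training partition's, rather than indicating a modeling error.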