My honest testing performance is greater than my training performance: is it luck or a mistake?

User: "Tsar1131"
New Altair Community Member
Updated by Jocelyn
I used the Split operator on my data source in a 4:1 ratio for training and honest testing. I am using a Decision Tree and cross-validation.
The performance result for testing is 93.21% accuracy with kappa 0.863; for training it is 93.97% accuracy with kappa 0.695.
I need to know whether the model is underfitting the data and how I should interpret this result.

    Hi,
    I would use a cross-validation to check the standard deviation of the performance. Then you can see how lucky you are.

    Best,
    Martin
    User: "Tsar1131"
    New Altair Community Member
    OP
    Hi @mschmitz,
    It says +/- 0.31%.
    User: "MartinLiebig"
    Altair Employee
    Accepted Answer
    So a 2-sigma effect. I would not worry.
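    Martin's check can be sketched in scikit-learn as a stand-in for the RapidMiner process (the dataset, split seed, and Decision Tree settings below are illustrative assumptions, not the OP's setup):

```python
# Sketch of the suggested check: run cross-validation on the training
# partition, take the standard deviation of the fold accuracies, and see
# how many sigmas the honest-test accuracy sits away from the CV mean.
# Dataset and classifier are illustrative stand-ins, not the OP's data.
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)

# 4:1 split, mirroring the ratio from the question
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

clf = DecisionTreeClassifier(random_state=42)
scores = cross_val_score(clf, X_train, y_train, cv=10)  # per-fold accuracies
mean, std = scores.mean(), scores.std()

clf.fit(X_train, y_train)
gap = abs(clf.score(X_test, y_test) - mean)
sigma = gap / std if std > 0 else float("inf")

print(f"CV accuracy: {mean:.4f} +/- {std:.4f}")
print(f"Test-vs-CV gap: {gap:.4f} ({sigma:.1f} sigma)")
```

    If the gap is within about two standard deviations of the fold scores, the difference is plausibly just sampling noise from the split.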
    User: "Tsar1131"
    New Altair Community Member
    OP
    Updated by Tsar1131
    Thanks @mschmitz
    Can you comment on the kappa values my model is producing? I'm curious why kappa is higher for honest testing than for training.
    Thanks again
    User: "MartinLiebig"
    Altair Employee
    I would do the same trick there :)
    But generally, if testing is better than training, it's rather unproblematic.
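    On the kappa question: kappa corrects accuracy for the agreement expected by chance, and that chance baseline depends on the class distribution of each partition, so a 4:1 split can give the honest-testing set a different kappa than training even at similar accuracy. A small sketch with toy labels (assumed for illustration, not from this thread):

```python
# Two prediction sets with identical accuracy but different kappa:
# kappa subtracts chance agreement, which depends on class balance.
from sklearn.metrics import accuracy_score, cohen_kappa_score

# Balanced labels: chance agreement is low, so kappa stays close to accuracy.
y_true_bal = [0, 0, 0, 1, 1, 1]
y_pred_bal = [0, 0, 1, 1, 1, 1]
acc_bal = accuracy_score(y_true_bal, y_pred_bal)
kappa_bal = cohen_kappa_score(y_true_bal, y_pred_bal)

# Skewed labels: chance agreement is high, so the same accuracy gives lower kappa.
y_true_skew = [0, 0, 0, 0, 0, 1]
y_pred_skew = [0, 0, 0, 0, 1, 1]
acc_skew = accuracy_score(y_true_skew, y_pred_skew)
kappa_skew = cohen_kappa_score(y_true_skew, y_pred_skew)

print(f"balanced: accuracy={acc_bal:.3f} kappa={kappa_bal:.3f}")
print(f"skewed:   accuracy={acc_skew:.3f} kappa={kappa_skew:.3f}")
```

    Both cases score 5/6 correct, but the skewed split has a higher chance baseline and therefore a lower kappa, which is why kappa can legitimately differ between your training and honest-testing partitions.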