
What is Training Accuracy / Testing Accuracy?

User: "Fred12"
New Altair Community Member
Updated by Jocelyn

hi,

we had a discussion about what training accuracy is and what testing accuracy is. In my opinion, there is no "training accuracy" (at least, I don't know what it would be), because you always measure performance on testing data...

maybe you can use "validation data" in X-Validation for performance... however, I don't understand what training accuracy means?! I usually do X-Validation or Split-Validation... or is training accuracy possible only with certain learners? At least I never encountered "training accuracy" in any of the performance operators...

 

    User: "MartinLiebig"
    Altair Employee
    Accepted Answer

    Training accuracy is usually the accuracy you get if you apply the model to the training data, while testing accuracy is the accuracy on the testing data.

     

    It's sometimes useful to compare these to identify overtraining.
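
    To make the comparison concrete, here is a minimal sketch in Python with scikit-learn (the thread is about RapidMiner, so the library, dataset, and learner below are illustrative assumptions, not anything from the original posts):

    ```python
    # A minimal sketch: "training accuracy" vs "testing accuracy" on one split.
    from sklearn.datasets import load_breast_cancer
    from sklearn.metrics import accuracy_score
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42)

    model = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)

    # Training accuracy: apply the model back to the data it was trained on.
    train_acc = accuracy_score(y_train, model.predict(X_train))
    # Testing accuracy: apply the model to held-out data.
    test_acc = accuracy_score(y_test, model.predict(X_test))

    print(f"training accuracy: {train_acc:.3f}")  # close to 1.0 for an unpruned tree
    print(f"testing accuracy:  {test_acc:.3f}")   # lower; a large gap hints at overtraining
    ```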

     

    ~Martin

    User: "Fred12"
    New Altair Community Member
    OP

    @mschmitz why does one differentiate between testing and training accuracy? or why do I need training accuracy at all if it's not representative of test performance...

    is it to see the bias of your model? e.g. to see whether it is overfitting the training data or not?

    User: "IngoRM"
    New Altair Community Member
    Accepted Answer

    In general you are right and you should ignore training error completely. It does not really tell you anything useful. For example, a k-NN learner with k=1 will always deliver 100% accuracy on the training data set. But this does not mean that it can classify ANYTHING correctly on any non-training data point. I don't get why people are still somewhat obsessed with reporting training error, but whatever :P
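
    A small sketch of that k=1 effect, again in Python/scikit-learn as an assumed stand-in (the dataset and split are arbitrary illustrations):

    ```python
    # Sketch of the k=1 point: each training example is its own nearest
    # neighbour, so training accuracy is trivially 100% (assuming no duplicate
    # points with conflicting labels), yet says nothing about generalization.
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.neighbors import KNeighborsClassifier

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=0)

    knn = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)

    print("training accuracy:", knn.score(X_train, y_train))  # 1.0 by construction
    print("testing accuracy: ", knn.score(X_test, y_test))    # the only meaningful number
    ```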

     

    Martin's point is still valid though: if you optimize your model with parameter optimization, feature selection, etc., it can sometimes be useful to observe both training and testing error (although I personally still only focus on testing errors) to get some gut feeling about the robustness of the model. If the difference between the two starts to grow quickly, you are probably already too far into "overfitting land".
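
    A hedged sketch of that workflow: sweep one complexity parameter and watch how the train/test gap develops. In a RapidMiner process this would correspond to an optimization loop around a validation; the Python below is just an assumed illustration:

    ```python
    # Sketch: watch the train/test gap while sweeping a complexity parameter.
    # Tree depth stands in here for whatever parameter you actually optimize.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.tree import DecisionTreeClassifier

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=1)

    for depth in (1, 2, 4, 8, 16):
        tree = DecisionTreeClassifier(max_depth=depth, random_state=1)
        tree.fit(X_train, y_train)
        train_acc = tree.score(X_train, y_train)
        test_acc = tree.score(X_test, y_test)
        print(f"depth={depth:2d}  train={train_acc:.3f}  "
              f"test={test_acc:.3f}  gap={train_acc - test_acc:.3f}")
    # A gap that grows quickly with depth is the "overfitting land" signal.
    ```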

     

    Cheers,

    Ingo