
Is there any method to check if a model has overfit?

User: "Curious"
Is there any process/practical method I can run to check it? 

    User: "hughesfleming68"
    Forward testing is your only option, but this also goes for models that generalize well. With regression problems you will often find that the learner settles on the direction of the last value as the best predictor of the next value. In this case the value of the prediction is questionable and is most likely caused by overfitting. You also have to understand your data. If your data is noisy with little serial correlation and closely resembles a random walk, and at the same time your testing is giving unusually good results, then it is safe to assume that you have a problem. "Too good to be true" also applies to machine learning.
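    A minimal sketch of that last-value check, assuming Python with scikit-learn and a synthetic random walk standing in for noisy data:

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import mean_absolute_error

        rng = np.random.default_rng(42)
        prices = np.cumsum(rng.normal(size=500))  # random walk: no real signal

        # Lagged features: predict the next value from the previous 5.
        lags = 5
        X = np.column_stack([prices[i:len(prices) - lags + i] for i in range(lags)])
        y = prices[lags:]

        # Chronological split: forward testing must never shuffle time order.
        split = int(len(y) * 0.8)
        model = LinearRegression().fit(X[:split], y[:split])

        model_mae = mean_absolute_error(y[split:], model.predict(X[split:]))
        naive_mae = mean_absolute_error(y[split:], X[split:, -1])  # last value
        print(f"model MAE {model_mae:.4f} vs naive last-value MAE {naive_mae:.4f}")
        # A model that cannot clearly beat the naive forecast has learned little
        # beyond "predict the last value", however good training looked.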

    Nevertheless, this is a good question that everyone will have to deal with at some point. The best procedure is to establish a baseline: (1) build your forecast using a default model; (2) determine which learner is most suitable for the data; and (3) forward test on unseen data. Not spending enough time on steps 1 and 3 is where most people go wrong.
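    A minimal sketch of that three-step procedure, assuming scikit-learn; the dataset and the candidate learners are placeholders:

        from sklearn.datasets import make_regression
        from sklearn.dummy import DummyRegressor
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.linear_model import Ridge
        from sklearn.model_selection import cross_val_score

        X, y = make_regression(n_samples=1000, n_features=20, noise=10.0, random_state=0)

        # Step 3's forward test needs data the rest of the procedure never touches.
        split = int(len(y) * 0.8)
        X_dev, y_dev, X_fwd, y_fwd = X[:split], y[:split], X[split:], y[split:]

        # Step 1: baseline with a default (here: trivial) model.
        # Step 2: compare candidate learners on the development data only.
        candidates = {
            "baseline": DummyRegressor(strategy="mean"),
            "ridge": Ridge(),
            "gbm": GradientBoostingRegressor(random_state=0),
        }
        for name, est in candidates.items():
            cv_r2 = cross_val_score(est, X_dev, y_dev, cv=5, scoring="r2").mean()
            fwd_r2 = est.fit(X_dev, y_dev).score(X_fwd, y_fwd)  # step 3: forward test
            print(f"{name:8s} cv R^2 = {cv_r2:.3f}  forward R^2 = {fwd_r2:.3f}")
        # A learner whose CV score sits far above its forward-test score is overfitting.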
    User: "mschmitz"
    Hi,
    my main question is: do I care? Overfitting means that my training performance is better than my testing performance. If my correctly validated test performance is good, I am usually fine.
    BR,
    Martin
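    A minimal sketch of that train-versus-test comparison, assuming scikit-learn; the deliberately unpruned tree exaggerates the gap:

        from sklearn.datasets import load_breast_cancer
        from sklearn.model_selection import train_test_split
        from sklearn.tree import DecisionTreeClassifier

        X, y = load_breast_cancer(return_X_y=True)
        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

        # An unpruned tree memorizes the training data, so the gap is visible.
        tree = DecisionTreeClassifier(random_state=0).fit(X_tr, y_tr)
        print(f"train accuracy: {tree.score(X_tr, y_tr):.3f}")  # near 1.000
        print(f"test accuracy:  {tree.score(X_te, y_te):.3f}")  # noticeably lower
        # The train/test gap is the overfit; whether the validated test score is
        # good enough for the task is the question that actually matters.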
    User: "Telcontar120"
    I think cross-validation is the standard baseline approach to measuring your performance. As @mschmitz says, a bit of overfitting is almost inevitable with any ML algorithm. The question isn't really "is this model overfit?" so much as "how will this model perform on unseen data?" Cross-validation is the best way to answer that question while still making the most use of the available data in both training and testing your model.
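    A minimal sketch of cross-validation as that estimate, assuming scikit-learn; return_train_score=True also exposes the per-fold train/validation gap:

        from sklearn.datasets import load_breast_cancer
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_validate

        X, y = load_breast_cancer(return_X_y=True)
        clf = RandomForestClassifier(n_estimators=200, random_state=0)

        # 10-fold CV: every row is used for training and, exactly once, for testing.
        scores = cross_validate(clf, X, y, cv=10, scoring="accuracy",
                                return_train_score=True)

        print(f"train accuracy: {scores['train_score'].mean():.3f}")
        print(f"cv accuracy:    {scores['test_score'].mean():.3f} "
              f"+/- {scores['test_score'].std():.3f}")
        # The CV mean is the honest estimate of performance on unseen data;
        # the spread across folds tells you how stable that estimate is.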
    User: "kypexin"
    Hi @Curious

    To add to the previous answers: use common sense :)

    If on a test set you get an error of 0.001 or an AUC of 99.95%, then something is almost certainly wrong; any "too good to be true" result generally indicates overfitting. Also, use a correlation matrix to check whether some attributes correlate suspiciously highly with the label, which usually points to leakage.
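    A minimal sketch of that correlation check, assuming pandas and a DataFrame whose label column is named "target":

        from sklearn.datasets import load_breast_cancer

        data = load_breast_cancer(as_frame=True)
        df = data.frame  # feature columns plus a "target" label column

        # Correlation of every attribute with the label, strongest first.
        corr_with_label = df.corr(numeric_only=True)["target"].drop("target")
        print(corr_with_label.abs().sort_values(ascending=False).head(10))
        # An attribute correlating near 1.0 with the label usually means leakage:
        # the model will look spectacular in testing and fail in production.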