"Regarding KNN performance"

varunm1 New Altair Community Member
edited November 2024 in Community Q&A
Hello,

I am applying KNN with k = 5. I split the data into two parts: one part is used for cross-validation, and the other is held out to test the model that comes out of the cross-validation.

I see that the cross-validation performance is 0.619 (AUC), while on the test data set I held out it is 0.812.

Is this because the cross-validation performance can be dragged down when some folds perform poorly?

Also, I learned that KNN is essentially a lazy learner: it does not build a model during training but simply stores the training examples and classifies new points from their nearest neighbors. Can this be the reason?

Thanks,
Varun
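
Below is a minimal sketch of the setup described in the question, using Python/scikit-learn as a stand-in for the RapidMiner process (the dataset, split ratio, and random seeds are placeholders, not details from the original question):

```python
# Sketch only: hold out a test set, cross-validate KNN (k=5) on the rest,
# then compare the CV AUC with the AUC on the held-out test set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score

# Placeholder data; replace with the real dataset.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)

# One part for cross-validation, one part held out for the final test.
X_cv, X_test, y_cv, y_test = train_test_split(X, y, test_size=0.3, random_state=42)

knn = KNeighborsClassifier(n_neighbors=5)

# 5-fold cross-validation AUC on the first part.
cv_auc = cross_val_score(knn, X_cv, y_cv, cv=5, scoring="roc_auc")
print("Cross-validation AUC (mean):", cv_auc.mean())

# Train on the full first part, then score the held-out test set.
knn.fit(X_cv, y_cv)
test_auc = roc_auc_score(y_test, knn.predict_proba(X_test)[:, 1])
print("Hold-out test AUC:", test_auc)
```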

Best Answer

  • IngoRM New Altair Community Member
    edited February 2019 Answer ✓
    You should have a look at the performances of the single folds (just place a breakpoint after the Performance operator within the cross-validation).  I would not be surprised if those performances fluctuate quite a bit.  If they do, this is probably the reason: you were "lucky" with the particular test data set, i.e. it was "easier" for the model to predict.  This is exactly why we prefer cross-validation wherever possible: to reduce the impact of test data bias.
    Hope this helps,
    Ingo
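
    As a rough Python/scikit-learn equivalent of this suggestion (in RapidMiner itself you would use the breakpoint after the Performance operator inside Cross Validation), the sketch below prints the single-fold AUC values so you can see how much they fluctuate; the data here is a placeholder, not the original dataset:

```python
# Sketch only: inspect the per-fold AUC values of 5-fold CV with KNN (k=5).
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

# Placeholder data; replace with the real dataset.
X, y = make_classification(n_samples=700, n_features=20, random_state=0)

knn = KNeighborsClassifier(n_neighbors=5)

# One AUC per fold; a wide spread between folds means the CV estimate
# is sensitive to which examples land in each test fold.
fold_auc = cross_val_score(knn, X, y, cv=5, scoring="roc_auc")
print("Per-fold AUC:", fold_auc)
print("Mean: %.3f  Std: %.3f" % (fold_auc.mean(), fold_auc.std()))
```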

Answers

  • varunm1 New Altair Community Member
    Thanks @IngoRM