dataset for parameter optimization
makak
New Altair Community Member
Hi all,
the ideal situation is to have 3 separate sets: one for training, one for testing (parameter optimization), and one for validation. What if I train on 70%, optimize parameters on the remaining 30%, and finally evaluate performance on the whole dataset (100%) by 10-fold cross-validation? Is this correct, or am I risking some overfitting this way?
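To make the question concrete, here is a minimal sketch of the protocol described above, written with scikit-learn and toy data (the classifier, the parameter grid, and the data are placeholders, not the actual process I use):

```python
# Sketch of: train on 70%, tune on the remaining 30%,
# then report 10-fold cross-validation on the FULL data set.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, random_state=0)

# 70% for training, remaining 30% for parameter optimization
X_train, X_opt, y_train, y_opt = train_test_split(
    X, y, train_size=0.7, random_state=0)

# pick C by its score on the 30% optimization split
best_C, best_score = None, -1.0
for C in [0.1, 1.0, 10.0]:
    score = SVC(C=C).fit(X_train, y_train).score(X_opt, y_opt)
    if score > best_score:
        best_C, best_score = C, score

# final evaluation: 10-fold cross-validation on 100% of the data,
# using the parameters tuned on a subset of that same data
final_accuracy = cross_val_score(SVC(C=best_C), X, y, cv=10).mean()
```

The worry is exactly the last step: the 10 folds include the 30% that the parameters were tuned on.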
And one more little question, maybe a little off topic, but anyway: I always get exactly the same micro and macro average from cross-validation. Is this OK, or does it seem suspicious?
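For reference, here is a small hand-checkable example of how micro and macro averages can differ, using scikit-learn's `f1_score` (my actual tool may compute the averages differently, so this is just to illustrate the distinction):

```python
# Toy example where micro- and macro-averaged F1 differ.
from sklearn.metrics import f1_score, accuracy_score

y_true = [0, 0, 1, 1, 1]
y_pred = [0, 1, 1, 1, 0]

micro = f1_score(y_true, y_pred, average='micro')
macro = f1_score(y_true, y_pred, average='macro')

# Micro-averaging pools all individual decisions; for single-label
# classification the micro F1 therefore equals plain accuracy (0.6 here).
assert micro == accuracy_score(y_true, y_pred)
# Macro-averaging takes the unweighted mean of per-class F1
# (0.5 for class 0, 2/3 for class 1), giving a different value.
```

So with imbalanced classes the two averages normally diverge; identical values on every run is what made me wonder.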
Thank you.