Reposting this as a new thread. My basic question is: is Auto Model showing testing accuracy or training accuracy in the results view? I ran a GBT in Auto Model on 4,500 rows of data with 15 features and got an accuracy of 90% and an f-measure of 84%. But when I applied the model to new, unseen data (which I had purposely held out from the training and cross-validation process), the accuracy dropped to well below 50%.

So I'm not sure whether I'm running the validation process incorrectly, or perhaps misunderstanding what the cross-validation results are telling me. I had expected Auto Model to produce an accuracy rate reflective of how well the model will perform in the future (i.e., testing error), especially since Auto Model runs cross-validation inside Optimize Parameters. Though I'm also concerned that the Split Data operator that occurred before the CV is perhaps causing an issue for me. Appreciate any thoughts.
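For clarity, here is a minimal Python/scikit-learn sketch of the validation workflow I'm describing. This is not my actual RapidMiner process; the synthetic data and GradientBoostingClassifier are just stand-ins for my dataset and the GBT, and the split/CV settings are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import cross_val_score, train_test_split

# Stand-in for my dataset: ~4,500 rows, 15 features (synthetic here).
X, y = make_classification(n_samples=4500, n_features=15, random_state=42)

# Step 1: split off a hold-out set BEFORE any training or CV,
# mirroring the Split Data operator ahead of the CV in my process.
X_train, X_holdout, y_train, y_holdout = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = GradientBoostingClassifier()

# Step 2: cross-validate on the training portion only.
# This is the kind of "accuracy" I assumed Auto Model was reporting.
cv_acc = cross_val_score(model, X_train, y_train, cv=10, scoring="accuracy")
print(f"CV accuracy (training portion): {cv_acc.mean():.2%}")

# Step 3: refit on all training data, then score the untouched hold-out.
# In my actual run, this number came out far below the CV estimate.
model.fit(X_train, y_train)
holdout_acc = accuracy_score(y_holdout, model.predict(X_holdout))
print(f"Hold-out accuracy (unseen data): {holdout_acc:.2%}")
```

My expectation was that the step-2 and step-3 numbers should be roughly in line with each other; in my case they were ~90% versus below 50%, and that gap is what I'm trying to understand.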