Why do the relative errors change?
Hello,
When I execute the neural net with cross-validation and the linear regression with cross-validation together in the same process, I get the following relative errors:
When I execute only the neural net with cross-validation in a separate process, I get the following relative error:
Why do the relative errors change depending on whether I run the learning models together or separately?
Answers
Hi,
This is most likely caused by a different random seed in the two cases: the seed determines how the examples are split into folds (and how the neural net's weights are initialised), so each run estimates the error on different splits. If you want to avoid this behaviour, tick the "use local random seed" option in the Cross-Validation operator.
However, the differences caused by changing the seed should be minimal if your process is correct. In your case it looks as if some of the neural net models are not converging, which is why you get such disparate results in the individual folds of the cross-validation. I think you need to tune your models and their optimization options.
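For reference, the tuning knobs of the Neural Net operator appear in the process XML roughly as below. This is only a sketch: the parameter keys and the values shown are the usual defaults as far as I recall, and may differ in your RapidMiner version. Raising the training cycles and keeping normalization enabled are typical first steps when a net does not converge.

<!-- Sketch of a Neural Net operator in RapidMiner process XML.
     Parameter keys and default values are assumptions; check your version. -->
<operator activated="true" class="neural_net" name="Neural Net">
  <list key="hidden_layers"/>                     <!-- empty = one default hidden layer -->
  <parameter key="training_cycles" value="500"/>  <!-- raise if training stops too early -->
  <parameter key="learning_rate" value="0.3"/>    <!-- lower if the error oscillates -->
  <parameter key="momentum" value="0.2"/>
  <parameter key="shuffle" value="true"/>
  <parameter key="normalize" value="true"/>       <!-- helps convergence on raw attributes -->
  <parameter key="error_epsilon" value="1.0E-5"/> <!-- stopping threshold for the training error -->
</operator>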
Regards,
Sebastian
I have selected the "use local random seed" option in the Cross-Validation operator.
When I execute the learning models with cross-validation in the same process, I get a relative error of 73,34%.
When I execute only the neural net with cross-validation in a separate process, I get a relative error of 203,41%.
@s_sorrenti3 type in the same seed, e.g. '1992', for each cross-validation and try again. If that doesn't work, follow the rules of the Community by posting your XML and data.
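For what it's worth, the relevant seed parameters look roughly like this in the process XML. Again, only a sketch: the operator class used here (concurrency:cross_validation) and the exact attributes depend on your RapidMiner version, and 1992 is just an example value. What matters is that every Cross-Validation operator you compare uses the same seed.

<!-- Sketch of a Cross Validation operator with a fixed local seed;
     class name and attributes are assumptions for a recent version. -->
<operator activated="true" class="concurrency:cross_validation" name="Cross Validation">
  <parameter key="number_of_folds" value="10"/>
  <parameter key="use_local_random_seed" value="true"/>
  <parameter key="local_random_seed" value="1992"/>
  <!-- the learner (e.g. Neural Net) and Apply Model / Performance
       go into the training/testing subprocesses as usual -->
</operator>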