Why do the relative errors change?

s_sorrenti3 New Altair Community Member
edited November 5 in Community Q&A

Hello,
By executing the neural net with cross-validation and the linear regression with cross-validation together in the same process, I get the following relative errors:

[Attached screenshots: together.png, errors together.png]

By executing only the neural net with cross-validation separately, in a single process, I get the following relative error:

[Attached screenshots: alone.png, error.png]

Why do the relative errors change if I run learning models together or if I run them separately?


Answers

  • SGolbert New Altair Community Member

    Hi,

    This is most likely caused by a different random seed being used in the two runs. If you want to avoid that behaviour, tick the "use local random seed" option in the Cross-Validation operator.
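    To see why the seed matters: cross-validation shuffles the examples before cutting them into folds, so without a fixed seed every run produces different folds and therefore slightly different error estimates. The numpy sketch below is not RapidMiner's implementation, just a minimal illustration of that shuffling step (the function name `cv_split` and the seed value are mine):

    ```python
    import numpy as np

    def cv_split(n_samples, n_folds, seed=None):
        # Shuffle the example indices (optionally with a fixed seed),
        # then cut the shuffled order into roughly equal folds.
        rng = np.random.default_rng(seed)
        idx = rng.permutation(n_samples)
        return np.array_split(idx, n_folds)

    # With a fixed seed the folds are identical on every run,
    # so the cross-validated error is reproducible:
    a = cv_split(100, 10, seed=1992)
    b = cv_split(100, 10, seed=1992)
    assert all((x == y).all() for x, y in zip(a, b))

    # With no seed (or a different one), the folds differ between runs,
    # which is exactly why the reported relative errors change:
    c = cv_split(100, 10, seed=None)
    ```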

    However, the differences caused by changing the seed should be minimal if your process is correct. In your case it looks as if some of the neural net models are not converging, so you get widely dispersed results across the folds of the cross-validation. I think you need to tune your models and their optimization settings.
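    This dispersion effect is easy to see numerically. RapidMiner's exact formula isn't shown here, so as an assumption this sketch takes relative error to be the mean of |prediction − actual| / |actual|; one diverged fold then dominates the cross-validated average, which is why reshuffling the folds can swing the result so much:

    ```python
    import numpy as np

    def relative_error(y_true, y_pred):
        # Mean absolute relative error, as a percentage
        # (assumed metric, for illustration only).
        return 100 * np.mean(np.abs(y_pred - y_true) / np.abs(y_true))

    y = np.full(10, 50.0)
    good_fold = relative_error(y, y + 5)    # well-fitted fold: 10.0 (%)
    bad_fold = relative_error(y, y + 500)   # diverged net: 1000.0 (%)

    # Averaging over 10 folds, the single diverged fold dominates:
    avg = np.mean([good_fold] * 9 + [bad_fold])  # 109.0 (%)
    ```

    Depending on which examples land in the "bad" fold after shuffling, the averaged error can jump dramatically between runs, even though the learner and data are unchanged.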

    Regards,

    Sebastian

  • s_sorrenti3 New Altair Community Member

    I have selected the "use local random seed" option in the Cross-Validation operator.
    By executing the learning models with cross-validation together in the same process, I get a relative error of 73.34%.
    By executing only the neural net with cross-validation separately, in a single process, I get a relative error of 203.41%.

  • Thomas_Ott New Altair Community Member

    @s_sorrenti3 type in a seed like '1992' for each Cross-Validation operator and try again. If that doesn't work, please follow the Community rules and post your XML and data.