Evaluate model prediction quality with PhysicsAI

Filippo1993_G (Altair Community Member)
edited August 8 in Community Q&A

Hello, 

I am trying to evaluate the quality of predictions made using a trained model in PhysicsAI. 

It seems to me that the model, trained with more than 250 result files, reaches good quality.

This can be stated considering two aspects (as reported in the figure hereunder): 

  • Minimal difference between the MAE evaluated on the TRAIN-DATA and the MAE evaluated on the TEST-DATA
  • Low MAE with respect to the maximum contour value obtained in the corresponding simulations: the MAE is two orders of magnitude smaller than the contour value. 

[image]

Additionally, when I try to predict the contour on a new geometry, I obtain very high confidence values (close or equal to 1), as shown in the image hereunder.

[image]

Nevertheless, when I compare the prediction with the "conventional" simulation obtained from OptiStruct (run on the same geometry), I notice a similar stress distribution but different contour values: comparing the maximum values of the scales reported in the images above and hereunder, there is a difference of almost 2x in the stress value.

[image]


So the questions are: 

  1. Is my AI model of good quality?
  2. How can I compare the results obtained with PhysicsAI with the ones obtained with OptiStruct in order to extract useful information about the structural response?

Thank you in advance.

Answers

  • Adriano_Koga (Altair Employee)
    edited August 8

    Hi,

    Your MAE is expressed in the same physical quantity you're training on: if your model shows stresses around 100 MPa, the MAE should ideally be as small as possible, and at most around 5 MPa.
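
    As a rough, generic illustration of this check (plain Python, not a PhysicsAI feature), you could compare the two fields outside the tool, assuming you export both stress results to one-value-per-node text files; the file names and the 5% threshold here are illustrative assumptions:

    ```python
    # Sketch of the MAE sanity check. Assumes the predicted and the reference
    # nodal stresses were exported as plain-text arrays (one value per node).
    # File names and the 5% threshold are illustrative assumptions.
    import numpy as np

    predicted = np.loadtxt("predicted_stress.txt")   # hypothetical PhysicsAI export
    reference = np.loadtxt("optistruct_stress.txt")  # hypothetical OptiStruct export

    mae = np.mean(np.abs(predicted - reference))  # mean absolute error over all nodes
    peak = reference.max()                        # peak contour value of the reference

    print(f"MAE = {mae:.2f} MPa, peak = {peak:.2f} MPa, ratio = {mae / peak:.1%}")
    if mae <= 0.05 * peak:
        print("MAE is within ~5% of the peak value.")
    ```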


    The comparison that you're performing with a new geometry gives you the confidence level for that geometry. The confidence level reflects how similar the new geometry is to the training set. A confidence of 1.0 means your new geometry is really similar to the original training set, so the results should match quite well, assuming you have trained a good model.


    When comparing the prediction to the real solver result, you should always adjust the legends to be the same, and check whether the difference is caused by a hotspot or a singularity.
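
    To separate a hotspot from a global offset, one generic option (again plain Python, not a PhysicsAI feature) is to compare the raw maxima against percentile-trimmed maxima; the 99th percentile is an illustrative choice:

    ```python
    # Generic sketch: is the mismatch a localized hotspot or a global offset?
    # Assumes both fields live on the same mesh and were exported as arrays.
    import numpy as np

    predicted = np.loadtxt("predicted_stress.txt")
    reference = np.loadtxt("optistruct_stress.txt")

    # Raw maxima: this is what the contour legends display.
    print("max predicted:", predicted.max())
    print("max reference:", reference.max())

    # Percentile-trimmed maxima: if the ~2x gap shrinks here, the difference
    # is driven by a few hotspot/singularity nodes, not the overall field.
    print("99th pct predicted:", np.percentile(predicted, 99))
    print("99th pct reference:", np.percentile(reference, 99))
    ```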

  • Filippo1993_G (Altair Community Member)
    edited August 8

    Hello Adriano,

    thank you for your response.

    So, it is reasonable to have an MAE at least two orders of magnitude smaller than the value I am trying to predict. If I wanted to reduce it even further, would the best way be to increase the epochs, depth, and width, or to increase the number of data points used to train the model?

    Thank you for the clarification on the confidence value; it is clearer to me now.

    OK, I'll try to use the same scale, check for possible singularities, and see if I get a more accurate comparison.


    Thank you 

  • Adriano_Koga (Altair Employee)
    edited August 8


    It is hard to tell for sure which change would give a better correlation for the prediction; it might be a combination of all of them.

    If you feel your model is not performing well, try playing a little with the width and depth.

    If these do not have much effect, and if it is feasible, try increasing the number of data points.
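
    If you do end up sweeping those settings, a small bookkeeping script can help keep the runs comparable. This is only an illustrative sketch: train_and_evaluate is a hypothetical placeholder for however you retrain the PhysicsAI model and read back its test MAE, and the value grids are made-up examples.

    ```python
    # Illustrative bookkeeping for a width/depth/epochs sweep.
    from itertools import product

    def train_and_evaluate(width: int, depth: int, epochs: int) -> float:
        """Hypothetical placeholder: retrain the PhysicsAI model with these
        settings (via the GUI or your own automation) and return the test MAE."""
        raise NotImplementedError

    widths = [64, 128]         # example values, not recommendations
    depths = [4, 8]
    epoch_counts = [200, 400]

    results = {}
    for w, d, e in product(widths, depths, epoch_counts):
        results[(w, d, e)] = train_and_evaluate(width=w, depth=d, epochs=e)

    # If none of these noticeably reduce the test MAE, growing the training
    # set is the next lever to pull.
    best = min(results, key=results.get)
    print("best (width, depth, epochs):", best, "-> test MAE:", results[best])
    ```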