Physics-Informed Neural Networks

George Nentidis
Altair Employee

A Physics-Informed Neural Network (PINN) is a neural-network-based structure used to solve, or rather to approximate the solution of, simulation and other PDE-based problems, using the laws of physics that govern the problem.

The Neural Network (NN) is a well-known machine learning structure used successfully for regression and classification problems, pattern and sequence recognition, and more. The NN is "trained" by minimizing the error between its predictions and the true data. Training, however, is based solely on those data, without taking into consideration the physical characteristics of the underlying problem, which makes NNs lack robustness in certain scientific scenarios.
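As a rough illustration of this purely data-driven setting, the sketch below (PyTorch, with illustrative names and values such as model, x_data, and the synthetic sine data) trains a small network by minimizing only its mismatch with the data:

```python
import torch
import torch.nn as nn

# Purely data-driven training, as described above: the network is fitted to
# sample points only, with no knowledge of the underlying physics.
# All names and values here are illustrative.
model = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# Synthetic stand-in for measured or simulated data.
x_data = torch.linspace(0.0, 1.0, 50).unsqueeze(1)
u_data = torch.sin(torch.pi * x_data)

for epoch in range(2000):
    optimizer.zero_grad()
    u_pred = model(x_data)
    data_loss = torch.mean((u_pred - u_data) ** 2)  # fit the data, nothing else
    data_loss.backward()
    optimizer.step()
```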

Figure 1: A classical PINN scheme for linear elasticity problems.

The PINN enhances the NN in this direction by adding to the training process the extra demand that the network's predictions also obey the PDE equations that describe the problem. NNs have been proved mathematically to be able to approximate virtually any function, and the more complex the NN, the better the approximation. This property is used in PINNs to train an NN to operate as a function that approximates the solution of the equations governing the problem. The extra restrictions imposed by the equations enable the PINN to find an accurate solution even with a small amount of data, which may be sparse or incomplete. By incorporating even more information about the physical characteristics of the problem into the training process, the data can be omitted completely, making the PINN a mesh-free alternative to traditional solvers in computational science.
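As a minimal sketch of how such a physics term can be added, the example below (again PyTorch and purely illustrative) trains a small network on a toy 1D Poisson equation: the PDE residual is computed by differentiating the network output with automatic differentiation, and a boundary-condition term plays the role of the data. The specific equation, source term, and loss weighting are assumptions made only for demonstration:

```python
import torch
import torch.nn as nn

# Illustrative PINN loss for a toy 1D Poisson problem u''(x) = f(x) on [0, 1]
# with u(0) = u(1) = 0. The equation, loss weights, and collocation points
# are assumptions made for the sake of the example.
model = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x_col = torch.rand(200, 1, requires_grad=True)            # collocation points in the domain
x_bc = torch.tensor([[0.0], [1.0]])                       # boundary points
f = lambda x: -(torch.pi ** 2) * torch.sin(torch.pi * x)  # assumed source term

for epoch in range(5000):
    optimizer.zero_grad()

    # PDE residual: derivatives of the network output w.r.t. its input via autograd.
    u = model(x_col)
    du = torch.autograd.grad(u, x_col, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x_col, torch.ones_like(du), create_graph=True)[0]
    pde_loss = torch.mean((d2u - f(x_col)) ** 2)

    # Boundary-condition loss; measured data, if available, would be added the same way.
    bc_loss = torch.mean(model(x_bc) ** 2)

    loss = pde_loss + bc_loss   # relative weighting of the terms is a modelling choice
    loss.backward()
    optimizer.step()
```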

Figure 2: Displacement in a plane-stress problem. OptiStruct results (left), PINN results (right).

PINNs have several advantages as a solution method. First, they inherit the high parallelization of NN training methods, which lets them exploit the large number of cores provided by GPUs. Also, the output of a PINN training process is a function that approximates the solution to the problem at hand, not just numerical values at discrete points as in typical numerical methods. Thus, a PINN is trained once and can be reused without retraining to predict values in milliseconds, even outside its training zone, for example predicting a temperature or evaluating grids of different resolutions. This also makes them ideal for digital twin applications, where they can serve as surrogate models for real-time visualization and interactive exploration. Examples are the Siemens Gamesa Renewable Energy platform and NVIDIA's FourCastNet, a digital twin of Earth that attempts to predict the behavior and risks of extreme weather conditions.
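As a hypothetical illustration of this reuse, the snippet below evaluates an already trained network on grids of two different resolutions; no retraining or remeshing is involved (the weight-loading line is only a commented placeholder with a made-up file name):

```python
import torch
import torch.nn as nn

# Hypothetical inference step: `model` stands for a PINN that has already been
# trained (for example with the loop sketched earlier). Evaluation is a single
# forward pass, so the same network can be queried on grids of any resolution.
model = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)
# model.load_state_dict(torch.load("pinn_weights.pt"))  # hypothetical saved weights

coarse_grid = torch.linspace(0.0, 1.0, 11).unsqueeze(1)    # 11 evaluation points
fine_grid = torch.linspace(0.0, 1.0, 1001).unsqueeze(1)    # 1001 evaluation points

with torch.no_grad():
    u_coarse = model(coarse_grid)   # same trained model, different resolutions
    u_fine = model(fine_grid)
```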

Figure 3: NVIDIA FourCastNet

There are difficulties with PINNs too, though. Training an NN is a process that may require some experimentation with hyperparameters and with the complexity of the network. Also, since training an NN is an optimization problem, it may need many iterations (epochs) to converge. These factors usually make a PINN slower to reach accurate results than traditional solvers, especially for small problems. Any extra time needed for data preparation (if data are used) adds to the total as well. That said, PINNs have been reported to solve some problems that would be practically impossible to solve in reasonable time otherwise. This, along with the continuous increase in the number of inexpensive GPU cores, only promises better results.

There is also some mathematical preparation that usually needs to take place before training a PINN. The physical characteristics of the problem need to be entered as source code, and most of the time they involve complex mathematical formulas that are difficult to express that way. Some theoretical knowledge of the problem is also required, because certain mathematical "tricks" may be needed for the training to work successfully. Another issue is that PINNs do not behave well in time-dependent problems, especially if the simulation has to cover long time spans. This is usually addressed with different ML techniques, such as alternative NN architectures.
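One example of such a "trick", frequently reported in the PINN literature, is to build boundary conditions directly into the network output instead of penalizing them in the loss. The sketch below (illustrative, same toy 1D setting as the earlier examples) enforces u(0) = u(1) = 0 exactly by multiplying the raw network output by x(1 - x):

```python
import torch
import torch.nn as nn

# One commonly reported "trick" (illustrative, same toy 1D setting as before):
# enforce the boundary conditions u(0) = u(1) = 0 exactly by construction,
# so the boundary term can be removed from the loss entirely.
net = nn.Sequential(
    nn.Linear(1, 32), nn.Tanh(),
    nn.Linear(32, 32), nn.Tanh(),
    nn.Linear(32, 1),
)

def u_hat(x):
    # x * (1 - x) vanishes at both ends of [0, 1], so u_hat satisfies the
    # boundary conditions for any network weights; training then only has to
    # minimize the PDE residual.
    return x * (1.0 - x) * net(x)
```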

All these issues may suggest that a solid mathematical and ML background is needed to use PINNs. Most of the time, however, they need to be addressed only once, since experimenting with a specific problem can lead to a set of formulas and techniques that can be reused as a library for problems of the same nature. Thus, an implementation of a high-level tool is possible.

PINNs are currently a subject of intense research, examined in an increasing number of problems, with methods and techniques that improve performance, accuracy, and robustness appearing all the time. They are part of Scientific Machine Learning and AI, which promise to have transformative effects across many domains. PINNs will not replace traditional solvers, but they will become (as will Scientific ML/AI in general) a valuable tool in the designer's toolbox. Altair is a company with a long history in simulation and high-performance computing, and with a toolset that also spans data analytics and digital twins, PINNs will find a natural place in our vision.