All Constraints are Not Equivalent: How Subtle Choices Affect Your Data

Joseph Pajot
Altair Employee

Design problems have constraints. Learn to think like an expert and set up your problem wisely.

The world of engineering is full of constraint conditions. These requirements impose restrictions on a design, bifurcating the set of all designs into two categories: feasible designs that meet the requirements and infeasible designs that do not. The purpose of design exploration is to find an optimal design within the feasible set. This all seems simple enough, and most engineers are comfortable with these fundamental concepts. However, not all constraints are equivalent. By understanding how these differences affect the numerical analyses, we can improve the results coming out of a design exploration process.

In my view, there are three types of constraints:

  1. Conventional, performance-based
  2. Naturally imposed
  3. Artificially imposed

The first type of constraint, which I call conventional, is a requirement on design performance. For example, a structure must keep mechanical stresses below a yield limit to ensure nothing breaks. In CAE, these conventional constraints are typically the output of a physics calculation, such as a virtual simulation. The second and third types of constraints are not outputs of a physics calculation; evaluating them requires only the independent variables. The second type is a naturally imposed constraint, dictated by nature or logic itself. A simple example is the relationship between the inner and outer diameters of an annular cross section: clearly, the outer diameter must be larger than the inner diameter. The third type is an artificially imposed constraint, typically reflecting design considerations or rules. An example of an artificially imposed constraint is enforcing that the sheet metal thickness near a fastener must step up from the nominal gauge. The two types of imposed constraints are often handled specially in software; for example, Altair HyperStudy calls them input variable constraints.
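
To make the distinction concrete, here is a minimal Python sketch. The stress model, load, material limit, and step-up factor are all hypothetical placeholders; in practice the conventional constraint would come from a real simulation.

```python
import numpy as np

YIELD_LIMIT = 250.0    # MPa, hypothetical material limit
NOMINAL_GAUGE = 1.0    # mm, hypothetical nominal sheet thickness
STEP_UP = 1.5          # hypothetical design-rule step-up factor

def toy_stress(inner_d, outer_d, load=10_000.0):
    """Stand-in for a physics simulation: axial stress in an annular section."""
    area = np.pi / 4.0 * (outer_d**2 - inner_d**2)
    return load / area

def classify(inner_d, outer_d, thickness):
    # Naturally imposed: evaluable from the inputs alone, dictated by geometry.
    natural_ok = outer_d > inner_d
    # Artificially imposed: also input-only, but a design rule rather than physics.
    artificial_ok = thickness >= STEP_UP * NOMINAL_GAUGE
    # Conventional: requires the (simulated) performance output.
    conventional_ok = natural_ok and toy_stress(inner_d, outer_d) <= YIELD_LIMIT
    return natural_ok, artificial_ok, conventional_ok

print(classify(inner_d=40.0, outer_d=50.0, thickness=1.6))  # (True, True, True)
```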

An uncut design space takes the shape of an N-dimensional rectangular box, but imposed constraints slice the design space. As an illustration, consider the image below, which shows data samples from reduced design spaces consistent with the examples described in the previous paragraph.

[Image: data samples of design spaces sliced by the naturally and artificially imposed constraints from the annulus and sheet-metal examples]

Sampling from this sliced design space can produce highly correlated variables. Additionally, applying too many imposed constraints can leave the feasible zones disconnected, resulting in a feasible space that looks like the holes in Swiss cheese. Both challenges degrade the accuracy and effectiveness of two key numerical techniques in the design exploration process: predictive modeling and optimization.
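
The correlation effect is easy to see numerically. The following sketch, assuming a simple v2 > v1 condition as in the diameter example, slices a uniform sample of a 2-D box and measures the correlation the slicing induces:

```python
import numpy as np

rng = np.random.default_rng(0)

# Uniform samples in the uncut 2-D box: 0 < v1, v2 < 1.
v = rng.uniform(size=(10_000, 2))

# A naturally imposed constraint slices the box, e.g. v2 > v1
# (outer diameter greater than inner diameter).
feasible = v[v[:, 1] > v[:, 0]]

# The uncut samples are uncorrelated, but the sliced samples are not.
print(f"feasible fraction:   {len(feasible) / len(v):.2f}")         # ~0.50
print(f"induced correlation: {np.corrcoef(feasible.T)[0, 1]:.2f}")  # ~0.50
```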

Data for design exploration comes from drawing samples within the design space. The samples are spread mostly evenly through the interior, which means predictions near the edges of the space rely on extrapolation. This phenomenon is unfortunate but also inevitable, because constraint boundaries are typically where optimality lives; if optimal designs did not press against the edges of feasibility, the constraints would be unnecessary! Thus, optimizations tend to produce designs at the boundaries of feasibility, precisely the areas that require extrapolation. Extrapolation tends to reduce predictive accuracy, which slows down any iterative optimization process.

This means there is an advantage to letting an optimization process navigate between feasible and infeasible designs when possible; with data on both what is feasible and infeasible, the numerical methods can more accurately predict behavior at the boundary. For conventional constraints, this happens routinely: it is typical for optimization iterations to bounce back and forth across feasibility and quickly home in on the best design. For naturally imposed constraints, e.g. outer diameter greater than inner diameter, the numerical complications are impossible to avoid. But for the third class of constraints, those artificially imposed, it is prudent to take special care.
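
That routine back-and-forth can be observed in a toy run of a gradient-based optimizer. This sketch uses SciPy's SLSQP on a made-up quadratic problem and logs the feasibility of every design the optimizer evaluates:

```python
import numpy as np
from scipy.optimize import minimize

# Toy problem: minimize x1^2 + x2^2 subject to the conventional
# constraint g(x) = x1 + x2 - 1 >= 0 (optimum at x = [0.5, 0.5]).
g = lambda x: x[0] + x[1] - 1.0

evaluated = []
def objective(x):
    evaluated.append(g(x))  # log feasibility of every design the optimizer tries
    return x[0] ** 2 + x[1] ** 2

result = minimize(objective, x0=[0.0, 0.0], method="SLSQP",
                  constraints=[{"type": "ineq", "fun": g}])

# Negative entries are infeasible designs visited along the way; the
# final answer still lands on the constraint boundary, g(x) = 0.
print(f"most infeasible design tried: g = {min(evaluated):+.2f}")
print(f"optimum: {np.round(result.x, 3)}")
```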

I will illustrate the problem with a simple example. Consider the 1-dimensional data set visualized below, where the full range of the design space is 0 < x < 1.

[Image: one-dimensional sample data over 0 < x < 1, split at the imposed constraint x < 0.75]

The blue points satisfy the artificially imposed constraint, x < 0.75. The green points violate it. Consider a sampling strategy that collects only the blue points. A regression on this data leads to the predictive function represented by the red line in the image below.

[Image: left, regression (red line) fit only to the constraint-satisfying data; right, regression (blue line) fit to all of the data]

At the constraint boundary (x = 0.75), the predictive errors are large, as evidenced by the difference between the red regression and the nearby green data point. In contrast, if all the data were used, the predictions would be more accurate, as evidenced by the blue line in the right image above. In an optimization process, predictive errors can produce an optimal design that does not perform as expected; in the worst case, the design may not be optimal at all.
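
The effect is easy to reproduce. In this sketch the underlying response is a made-up smooth function; a cubic fit trained only on the "feasible" samples (x < 0.75) must predict at the very edge of its data, while a fit trained on all samples can interpolate there:

```python
import numpy as np

rng = np.random.default_rng(1)

def response(x):                       # hypothetical underlying behavior
    return np.sin(2.0 * np.pi * x)

x = rng.uniform(0.0, 1.0, 40)
y = response(x) + rng.normal(scale=0.05, size=x.size)

mask = x < 0.75                        # the artificially imposed constraint

# Cubic regressions: one on the constrained data, one on everything.
fit_constrained = np.polynomial.Polynomial.fit(x[mask], y[mask], deg=3)
fit_all = np.polynomial.Polynomial.fit(x, y, deg=3)

# At the constraint boundary, the constrained fit sits at the edge of
# its training data, where regression error is typically largest.
xb = 0.75
print(f"truth at boundary:    {response(xb):+.3f}")
print(f"constrained-only fit: {fit_constrained(xb):+.3f}")
print(f"all-data fit:         {fit_all(xb):+.3f}")
```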

Sampling the entire uncut design space may be inefficient, but relaxing the imposed constraint to include infeasible designs allows the numerical methods to observe behavior on both sides of the constraint boundary and produce more accurate predictions. Of course, during the optimization process, the requirement should still be strictly enforced with a conventional constraint. To visualize, compare the design space samples under a strict imposed constraint (left) with those under a relaxed imposed constraint (right). The relaxed sampling contains data that violates the imposed condition, but that data is important for accurately predicting behavior at the boundary (v2 == v1).

[Image: design space samples under a strict imposed constraint (left) and a relaxed imposed constraint (right)]
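
A sketch of the relaxed sampling idea, assuming the imposed condition is v2 >= v1 and using a hypothetical relaxation margin: the relaxed sample set brackets the boundary, so a surrogate model can interpolate rather than extrapolate at v2 == v1.

```python
import numpy as np

rng = np.random.default_rng(2)
v = rng.uniform(size=(5_000, 2))

strict = v[v[:, 1] >= v[:, 0]]                 # sample only feasible designs
margin = 0.1                                   # hypothetical relaxation
relaxed = v[v[:, 1] >= v[:, 0] - margin]       # admit mild violations too

# Strict sampling sees only one side of the boundary v2 == v1;
# relaxed sampling observes behavior on both sides of it.
below = lambda s: np.mean(s[:, 1] < s[:, 0])
print(f"strict samples violating v2 >= v1:  {below(strict):.1%}")   # 0.0%
print(f"relaxed samples violating v2 >= v1: {below(relaxed):.1%}")  # ~16%
```

During the optimization itself, the original requirement v2 >= v1 would still be enforced as a conventional constraint.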

I hope this discussion has opened your eyes to the truth that all constraints are not equal. Part of any design process is learning what works and what does not by observation. Unnecessarily restricting the observations can hinder the process. This truth may not be intuitive, but optimization experts get the best results by not over-constraining their design problems. Now you too can use the experts’ secrets in your next design exploration.