Reliability of Deterministic Designs
Optimization is one of the most widely exercised approaches in engineering design exploration, and numerous mathematical methods exist to achieve various goals. These methods can be deterministic or probabilistic (stochastic) in nature.
The presence of uncertainties in a system is inevitable, and deterministic optimization techniques, as the name suggests, do not take them into account. Analyzing uncertainty is crucial because the reliability and robustness of a design are paramount. The sources of uncertainty (variations) can be completely random or random with a certain degree of control. A well-known way to achieve a reliable and robust system is to employ Reliability-Based Design Optimization (RBDO), in which the reliability and robustness targets are defined as optimization goals. But what if I already have a system that was optimized through deterministic methods? Can I still assess its reliability? Yes: the influence of uncertainties can be studied through sampling-based methods such as Monte Carlo or Quasi-Monte Carlo. The optimized variables of the deterministic study are subjected to variations through a sampling method based on distribution functions such as Gaussian (Normal), Weibull, etc. Data analytics is then used to determine the statistical properties of the sample and assess the system's reliability.
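To make the idea concrete before the worked example, here is a minimal Monte Carlo sketch in Python. The nominal values, the response function, the constraint threshold, and the 5%-of-nominal standard deviation are all hypothetical placeholders, not values from this study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical optimized design variables (nominal values).
nominal = np.array([0.002, 0.001, 0.005, 0.002])

# Assumption: each variable varies with a standard deviation of 5% of nominal.
sigma = 0.05 * nominal

# Draw Monte Carlo samples around the deterministic optimum.
n_samples = 10_000
samples = rng.normal(loc=nominal, scale=sigma, size=(n_samples, len(nominal)))

# Hypothetical cheap response standing in for an expensive solver call.
def mass_like_response(x):
    return x.sum(axis=1)

# Reliability = fraction of samples that still satisfy the constraint.
threshold = 0.0105  # hypothetical constraint bound
reliability = np.mean(mass_like_response(samples) <= threshold)
print(f"Estimated reliability: {reliability:.1%}")
```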
The following example is a sizing optimization problem with four thickness parameters. The performance of the design is assessed by its mass, the displacement at Node 19021, and the frequency of the first mode.
| Variable | Lower Bound | Nominal | Upper Bound |
| --- | --- | --- | --- |
| Thickness 1 | 0.0018 | 0.002 | 0.0022 |
| Thickness 2 | 9.00e-04 | 0.001 | 0.0011 |
| Thickness 3 | 0.0045 | 0.005 | 0.0055 |
| Thickness 4 | 0.0018 | 0.002 | 0.0022 |
Objective: Maximize (1st Frequency)
Constraint 1: Displacement at Node 19021 <= 0.0025
Constraint 2: Mass <= 2.5
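As an illustrative sketch of this formulation (not the actual model, which is evaluated by a finite element solver), the problem could be posed with scipy.optimize as below; the three response functions are invented surrogates used purely as placeholders:

```python
import numpy as np
from scipy.optimize import minimize

# Variable bounds from the table above.
bounds = [(0.0018, 0.0022), (9.0e-4, 0.0011), (0.0045, 0.0055), (0.0018, 0.0022)]
x0 = np.array([0.002, 0.001, 0.005, 0.002])  # nominal starting point

# Hypothetical surrogates standing in for the solver responses.
def first_frequency(x):
    return 5.4e3 * np.sqrt(x @ np.array([0.1, 0.2, 0.5, 0.2]))

def displacement(x):
    return 2.5e-6 / x.sum()

def mass(x):
    return 250.0 * x.sum()

result = minimize(
    lambda x: -first_frequency(x),  # maximize frequency => minimize its negative
    x0,
    method="SLSQP",
    bounds=bounds,
    constraints=[
        {"type": "ineq", "fun": lambda x: 0.0025 - displacement(x)},
        {"type": "ineq", "fun": lambda x: 2.5 - mass(x)},
    ],
)
print(result.x, -result.fun)
```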
At the end of the deterministic optimization, the optimized variables and the resulting performance of the system are as follows.
| Thickness 1 | Thickness 2 | Thickness 3 | Thickness 4 | Mass | Displacement at Node 19021 | 1st Frequency |
| --- | --- | --- | --- | --- | --- | --- |
| 0.0020491 | 0.0011000 | 0.0045000 | 0.0018000 | 2.4861887 | 0.0025001 | 312.04877 |
When the final product is manufactured with the optimized thickness values given above, it will inevitably be subject to variations due to manufacturing tolerances. Each thickness is treated as a controlled design variable with 5% variation, and the distribution scheme is Gaussian (normal).
Figure 1. Nominal values are from the deterministic optimization and bounds are defined based on the 5% variation.
In HyperStudy, reliability assessment is performed with the Stochastic approach, and the available sampling methods are Modified Extensible Lattice Sequence (MELS), Latin HyperCube, Hammersley, and Random. Regardless of the method, reliability assessment requires a high number of samples to capture the effect of variations accurately, so it is recommended to use high-accuracy fit models in lieu of expensive solver runs. In this example, 1000 Hammersley samples were evaluated via a fit model.
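HyperStudy generates these samples internally; for readers curious about the mechanics, here is a sketch of a Hammersley set mapped onto Gaussian thickness distributions. The 5%-of-nominal standard deviation is an assumed interpretation of the tolerance, not a setting taken from the study:

```python
import numpy as np
from scipy.stats import norm

def radical_inverse(i, base):
    """Van der Corput radical inverse of integer i in the given base."""
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += f * (i % base)
        i //= base
        f /= base
    return inv

def hammersley(n, dim, primes=(2, 3, 5)):
    """Hammersley set: first coordinate is i/n, remaining coordinates are
    radical inverses in successive prime bases."""
    pts = np.empty((n, dim))
    pts[:, 0] = np.arange(n) / n
    for j in range(1, dim):
        pts[:, j] = [radical_inverse(i, primes[j - 1]) for i in range(n)]
    return pts

# Optimized thicknesses as the means; sigma assumed to be 5% of nominal.
nominal = np.array([0.0020491, 0.0011, 0.0045, 0.0018])
sigma = 0.05 * nominal

u = hammersley(1000, 4)
u = np.clip(u, 1e-10, 1 - 1e-10)  # guard against u = 0 before the inverse CDF
thickness_samples = norm.ppf(u, loc=nominal, scale=sigma)  # uniform -> Gaussian
```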
Figure 2. Distribution of variation samples, Thickness 1 vs. Thickness 2.
Figure 3 shows the reliabilities of Mass and Displacement for their respective constraint thresholds used in the optimization. The reliability of a system or design is assessed through the constraints in the problem statement; assuming the constraints are statistically independent, the product of the individual constraint reliabilities gives the overall reliability of the system.
Reliability of the Mass constraint is 67 %.
Reliability of the Displacement constraint is 48.7 %.
Overall reliability of the design is 0.67 × 0.487 ≈ 0.326, or roughly 33 %.
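With the sampled responses in hand, each reliability reduces to the fraction of samples on the feasible side of its threshold. A minimal sketch, using synthetic placeholder arrays rather than the study's actual response data:

```python
import numpy as np

# Placeholder response samples; in practice these would come from
# evaluating the fit model at each of the 1000 thickness samples.
rng = np.random.default_rng(1)
mass = rng.normal(2.486, 0.05, 1000)
disp = rng.normal(0.0025, 1.0e-4, 1000)

r_mass = np.mean(mass <= 2.5)      # fraction meeting the mass limit
r_disp = np.mean(disp <= 0.0025)   # fraction meeting the displacement limit

# Overall reliability, assuming the constraints are independent.
print(f"{r_mass:.1%} * {r_disp:.1%} = {r_mass * r_disp:.1%}")
```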
Figure 3. Reliability of Mass and Displacement responses for given thresholds.
Another data visualization tool that indicates reliability is the cumulative distribution plot. Figure 4 depicts the cumulative likelihood of the Mass and Displacement values, which is simply the percentage of total occurrences that fall below a given threshold.
Figure 4. Cumulative Distribution of Mass and Displacement.
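The empirical CDF behind such a plot is straightforward to construct: sort the samples, plot the cumulative fraction, and read the reliability off at the constraint threshold. A sketch with the same placeholder mass samples as above:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng(1)
mass = rng.normal(2.486, 0.05, 1000)  # placeholder samples, as above

mass_sorted = np.sort(mass)
cum_fraction = np.arange(1, mass.size + 1) / mass.size  # empirical CDF

plt.step(mass_sorted, cum_fraction, where="post", label="Mass ECDF")
plt.axvline(2.5, linestyle="--", label="Mass constraint (2.5)")
plt.xlabel("Mass")
plt.ylabel("Cumulative fraction of samples")
plt.legend()
plt.show()

# The ECDF value at the threshold is the reliability of that constraint.
print(np.mean(mass <= 2.5))
```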
To conclude, no system or design is ever perfectly consistent, and it is important to study the effects of these inconsistencies (uncertainties) on the overall outcome to boost confidence and repeatability.