What is Uncertainty Quantification (UQ)?

THE INTERNATIONAL ASSOCIATION FOR THE ENGINEERING MODELLING, ANALYSIS AND SIMULATION COMMUNITY


Definition of Uncertainty Quantification:

Uncertainty Quantification (UQ) is the modelling and evaluation of the impact of imperfectly known information on the design and development of products or processes to support decision making.

Why do we need UQ?

Recent years have seen a drive to replace reserve-margin-based deterministic design procedures with quantified, variation-based design procedures using stochastic analysis, either to optimize design solutions further or to quantify design robustness and reliability more precisely. With the traditional deterministic approach, all uncertainties are assumed to be accounted for by reserve margins, often in combination with a "worst case scenario" approach to the values used in deterministic calculations. As the complexity of design solutions increases, however, it becomes more and more difficult to define with confidence what the absolute "worst case scenario" might be, and to identify the appropriate reserve margin for any given situation. This results in designs that are over-designed to an unknown degree, or even potentially under-designed. The use of UQ techniques addresses these issues directly, as shown in Figure 1.

Figure 1: Difference between design rule understanding and UQ

UQ also brings benefits in the validation of simulation against physical test: by including the uncertainties associated with both the physical test data and the simulation results, the comparison yields a confidence level for the agreement between the two. This gives a better understanding of the true situation than a simple comparison of two sets of numbers, and the stated confidence supports more reliable and consistent decision-making. UQ can also be used in the model development process to quantify the contribution of model assumptions to the predicted response.
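As a minimal sketch of this validation idea, both the test result and the simulation prediction can be treated as distributions rather than single numbers, and a confidence stated for their agreement. Every number below (means, scatter, tolerance) is an invented placeholder, not taken from this flyer:

```python
import numpy as np

# Hypothetical validation comparison: the physical test result and the
# simulation prediction both carry uncertainty. All values are invented.
rng = np.random.default_rng(seed=1)
n = 100_000

test = rng.normal(102.0, 4.0, n)   # measured response with test scatter
sim = rng.normal(100.0, 3.0, n)    # predicted response with model uncertainty

# Confidence that test and simulation agree within an assumed +/- 10 unit
# tolerance, instead of a bare comparison of two point values.
agreement = np.mean(np.abs(test - sim) < 10.0)
print(f"P(|test - sim| < 10) = {agreement:.3f}")
```

The point is that the comparison produces a probability of agreement, which is the confidence level referred to above.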
Performing sensitivity analyses on the assumptions identifies opportunities for model improvements, or the need for additional experimental data to reduce the uncertainty around the assumptions. Figure 2 shows a scenario of propagating variations in inputs through the model to predict the reliability function and probabilistic sensitivity factors, with the dotted lines representing uncertainty in the assumed inputs and hence establishing the bounds on the predicted reliability. The vertical red lines represent traditional conservative values of the inputs and may result in non-optimal designs. The vertical black lines represent nominal or mean values.

Figure 2: Uncertainty propagation process.

Simulation methods and tools are used in combination with physical testing to perform the stochastic assessments, as it would not be possible to generate all of this information using physical testing alone. One of the key steps in a stochastic analysis is the formulation of the statistical models that characterize imperfect and/or unknown information. These models are required to allow the stochastic methods to sample the inputs and build a picture of the variation in the key outputs. The following section outlines the steps involved in developing statistical models.

Key steps in the UQ process:

1. Identify all inputs that influence the critical simulation outputs.

2. Collect all available data and estimates of the uncertainties, such as: measured data, upper and lower limits resulting from quality processes in place, estimates from relevant experts, or other valid sources.

3. Classify the input data into irreducible and reducible uncertainties, as these have to be treated differently in any stochastic assessment [Reference 1]. An irreducible (aleatory) uncertainty is an inherent variation associated with the physical system being modelled, characterized by a probability distribution. A reducible (epistemic) uncertainty is a lack of precision in a measured or calculated value that can be reduced by gathering more data, observations, or information; it is also called "lack of knowledge" uncertainty and is typically covered by conservative assumptions. In real-life applications, both kinds of uncertainty are present. For aleatory uncertainties, traditional statistical methods can be used to analyze the data and derive the appropriate statistical models. However, since the quantity and quality of the available data determine the type of statistical model that can be developed, it is important to understand their influence on these statistical models, and to decide how this will be treated in any subsequent UQ analysis. For epistemic uncertainties, Bayesian approaches and the principle of maximum entropy can help turn information and expert opinion into a distribution. Examples for common sorts of information are given in [Reference 2].

4. Perform a sensitivity analysis of the important responses to all of the uncertainties. The purpose is to check any assumptions about parameter unimportance, linearity, or the dimensionality of the UQ analysis [Reference 3].

5. After developing a good understanding of the important inputs, propagate all this variability to the output side of the simulation or experiment to characterize the variability of the key outputs. A multitude of methods is available to perform this propagation; some generally available ones are illustrated in Figure 3. The most general and accurate method for approximating response variability and the related probabilities is Monte Carlo sampling. However, because of the very large number of samples needed to produce precise results, this method is not suitable for situations where the calculations are too costly. In these cases, alternative methods can be used, and these continue to be very active research topics.

Figure 3: Some methods for uncertainty propagation.

6. Compare the estimated variation in the key outputs (from the previous step) with the defined allowable limits and design targets; this allows for a much better decision-making process during the design and development of (engineering) systems.
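The propagation and comparison steps (5 and 6) can be sketched with plain Monte Carlo sampling. The toy model y = a*x + b*x^2, the distributions, and the allowable limit below are all invented placeholders, chosen only to show the mechanics:

```python
import numpy as np

# Monte Carlo propagation sketch for steps 5 and 6. The model and every
# distribution here are invented placeholders, not from any application.
rng = np.random.default_rng(seed=0)
n = 200_000

a = rng.normal(2.0, 0.1, n)     # aleatory: inherent scatter, measured data
b = rng.uniform(0.4, 0.6, n)    # epistemic: only expert bounds available
x = rng.normal(3.0, 0.2, n)     # operating condition

y = a * x + b * x**2            # step 5: propagate variability to the output

limit = 12.0                    # step 6: assumed allowable design limit
print(f"output mean = {y.mean():.2f}, std = {y.std():.2f}")
print(f"P(y > limit) = {np.mean(y > limit):.3f}")
```

The exceedance probability P(y > limit), rather than a single nominal value of y, is what feeds the decision-making in step 6.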

Summary:

This short flyer defines the term uncertainty quantification and summarizes the key steps of any UQ process. These steps are agnostic to the application area. If you are interested in this area and want to help shape future guidance materials, please contact NAFEMS to become a member of the Stochastics Working Group (stochastics@nafems.org).

References:

[1] The American Society of Mechanical Engineers, Codes & Standards, V&V: Verification and Validation in Computational Modeling and Simulation. [Online] Available at: (Accessed 6 July 2018)

[2] BIPM, IEC, IFCC, ILAC, ISO, IUPAC, IUPAP and OIML, Evaluation of measurement data - Supplement 1 to the "Guide to the expression of uncertainty in measurement" - Propagation of distributions using a Monte Carlo method, Joint Committee for Guides in Metrology, JCGM 101:2008, 2008. (Accessed 5 July 2018)

[3] Bartholomew, P., What is Sensitivity Analysis?, NAFEMS. (Accessed July 2018)

Simple example:

Problem description: A design team is asked to create a coat hook that differs from the classical shape. They decide to shape it as a cantilever beam with a small upstanding rim to prevent anything placed on the hook from sliding off. Anticipating a deflection, they realize that the coat, or whatever is placed on the hook, will slide to the rim, so the design can be treated as a simple cantilever beam with a tip load; see Figure 4. Since the design team are uncertain about the application of the hook, they ask for more information. The response is that the customer does not yet know where the hook will be used: it could be a kindergarten (one small jacket on one hook) or a theatre (a couple delivers two winter coats at the cloakroom). The answer is very vague, but it is absolutely certain that the load will be more than children's outerwear on a hook. As a guideline, they use a mass of 15 kg. The design team struggles with the client's vague response regarding the anticipated loading. They have a strong feeling that the client is just guessing, and that this uncertainty, caused by a lack of knowledge, can be reduced.

Figure 4: Cantilever beam with tip load

The deterministic approach

Based on their experience, the design team focuses on a cantilever rod with a circular cross-section and a tip load F = 150 N. They decide to use a length L = 70 mm. They have experience with austenitic steel and select AISI 316, with a yield stress σ_yield = 225 MPa. They decide that an allowable design stress of 60% of the material yield stress is acceptable for the operational loading. These inputs define the circular cross-section of the rod: setting the bending stress at the wall, σ = 32FL/(π d^3), equal to the allowable stress 0.6 σ_yield = 135 MPa gives d = (32FL/(0.6 π σ_yield))^(1/3) ≈ 9.3 mm.

The uncertainty involved

The design team is very confident regarding the length of the rod, since this is the only aspect over which they have absolute control. Nevertheless, they realize that even here the length of 70 mm is a mean value with a standard deviation; they are convinced that they can achieve a standard deviation of 1.0 mm. The rod supplier offers diameters in whole millimetres, and looking at the ample margin to the yield stress they decide to use a rod diameter of 9.0 mm. These diameters are mean values with a standard deviation of 0.45 mm. The production process of the rods has a manufacturing end control: diameters in excess of 9.4 mm are removed, as are diameters less than 8.5 mm. The rolling process delivers diameters that follow a normal distribution, but due to the removal of out-of-range diameters the distribution is truncated. The specified yield stress is a minimum, which implies that the mean value is higher. Via the supplier, they hear from the steel mill that the material has a mean yield stress of 270 MPa with a standard deviation of 14 MPa and a log-normal distribution. The uncertainties mentioned so far are irreducible, i.e. aleatory.

The tip load has not been quantified in detail, but based on the client's information they assume a maximum load of two coats, large size, wet from stormy weather, with the possibility that the pockets contain a substantial purchase from a hardware store. As a minimum load, a small child's coat in a kindergarten is assumed. This information is translated into a uniform distribution between 90 N and 210 N. This is an epistemic uncertainty, as it can easily be reduced by gathering more information about the use case of the coat hook. An overview of the input parameters is presented in Table 1.

Variable            | Mean value      | Standard deviation | Distribution type
Length [mm]         | 70              | 1.0                | Normal
Diameter [mm]       | 8.5 < 9.0 < 9.4 | 0.45               | Truncated normal
Yield stress [MPa]  | 270             | 14                 | Log-normal
Tip load [N]        | n/a             | n/a                | Uniform (90 to 210)

Table 1: Statistical input
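As a sketch of how the statistical models of Table 1 could be sampled (the flyer does not prescribe a tool; generic NumPy routines are used here), note two details that are easy to get wrong: the truncated normal must discard out-of-range draws, mirroring the end control, and NumPy's log-normal is parameterized by the underlying normal distribution, so the arithmetic mean of 270 MPa and standard deviation of 14 MPa must be converted first:

```python
import numpy as np

rng = np.random.default_rng(seed=42)
n = 500_000

# Length [mm]: normal, mean 70, sd 1.0.
length = rng.normal(70.0, 1.0, n)

# Diameter [mm]: normal(9.0, 0.45) truncated to (8.5, 9.4) by rejection,
# mimicking the manufacturing end control that removes out-of-range rods.
def truncated_normal(mean, sd, lo, hi, size, rng):
    out = np.empty(0)
    while out.size < size:
        draw = rng.normal(mean, sd, size)
        out = np.concatenate([out, draw[(draw > lo) & (draw < hi)]])
    return out[:size]

diameter = truncated_normal(9.0, 0.45, 8.5, 9.4, n, rng)

# Yield stress [MPa]: log-normal with arithmetic mean 270 and sd 14,
# converted to the (mu, sigma) of the underlying normal distribution.
m, s = 270.0, 14.0
sigma = np.sqrt(np.log(1.0 + (s / m) ** 2))
mu = np.log(m) - 0.5 * sigma**2
yield_stress = rng.lognormal(mu, sigma, n)

# Tip load [N]: uniform between 90 and 210.
tip_load = rng.uniform(90.0, 210.0, n)

print(f"yield stress: mean = {yield_stress.mean():.1f} MPa, "
      f"sd = {yield_stress.std():.1f} MPa")
```

A quick check of the sample means and bounds against Table 1 confirms the models are set up correctly before any propagation is attempted.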

UQ

The probability of failure is calculated, i.e. the probability that the bending stress S exceeds the yield stress Y, or Pr(Y - S < 0). Figure 5 shows the histogram of the difference between the bending stress and the yield stress, obtained with the response surface method followed by Monte Carlo sampling to generate the required data. The analysis gives a probability of 9.4×10^-5 that the difference between the yield stress and the bending stress is smaller than zero, i.e. about 1 in 10,600 failures. The sensitivity to the input parameters is illustrated in Figure 6: the biggest influence originates from the tip load. Since this input has a large variance, it dominates the response. The result is less sensitive to the length, since its variance is low. The designer of the coat hook is satisfied with the result and proposes a diameter of 9.0 mm. However, since the deterministic approach resulted in a diameter of 9.3 mm, the client must be convinced that the proposed design with 9.0 mm is acceptable. In this discussion, the stochastic analysis with its quantified failure probability provided the information needed to convince the client.

Figure 5: Histogram

Figure 6: Sensitivity analysis
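This result can also be approximated with a direct, brute-force Monte Carlo run instead of the response surface used above. The sketch below assumes the standard bending-stress formula for a cantilever rod, S = 32FL/(π d^3), with the Table 1 inputs; the computed probability should be of the same order as the quoted 9.4×10^-5, though the exact value depends on the sampling details:

```python
import numpy as np

# Brute-force Monte Carlo estimate of Pr(Y - S < 0) for the coat-hook rod.
# Bending stress at the wall of a cantilever rod with a tip load:
# S = 32*F*L / (pi*d**3), with F in N and L, d in mm, giving S in MPa.
rng = np.random.default_rng(seed=7)
n = 2_000_000

L = rng.normal(70.0, 1.0, n)                 # length [mm]

# Truncated normal diameter: redraw anything outside (8.5, 9.4) mm.
d = rng.normal(9.0, 0.45, n)
bad = (d <= 8.5) | (d >= 9.4)
while bad.any():
    d[bad] = rng.normal(9.0, 0.45, bad.sum())
    bad = (d <= 8.5) | (d >= 9.4)

# Log-normal yield stress from arithmetic mean 270 MPa and sd 14 MPa.
sigma = np.sqrt(np.log(1.0 + (14.0 / 270.0) ** 2))
mu = np.log(270.0) - 0.5 * sigma**2
Y = rng.lognormal(mu, sigma, n)              # yield stress [MPa]

F = rng.uniform(90.0, 210.0, n)              # tip load [N]

S = 32.0 * F * L / (np.pi * d**3)            # bending stress [MPa]
p_fail = np.mean(Y - S < 0.0)
print(f"Pr(Y - S < 0) ~ {p_fail:.1e}")
```

Because the failure probability is of order 10^-4, on the order of millions of samples are needed for a stable estimate, which is exactly why the flyer notes that plain Monte Carlo is unsuitable when each evaluation is a costly simulation.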

Published by NAFEMS. Order Ref: WT08. NAFEMS Ltd.