Search Results

Viewing 1 to 6 of 6
Technical Paper

Discrete-Direct Model Calibration and Propagation Approach Addressing Sparse Replicate Tests and Material, Geometric, and Measurement Uncertainties

2018-04-03
2018-01-1101
This paper introduces the “Discrete Direct” (DD) model calibration and uncertainty propagation approach for computational models calibrated to data from sparse replicate tests of stochastically varying systems. The DD approach generates and propagates various discrete realizations of possible calibration parameter values corresponding to possible realizations of the uncertain inputs and outputs of the experiments. This contrasts with model calibration methods that attempt to assign or infer continuous probability density functions for the calibration parameters, which incorporates unjustified information into the calibration and propagation problem. The DD approach straightforwardly accommodates aleatory variabilities and epistemic uncertainties in system properties and behaviors, in input initial and boundary conditions, and in the experimental measurements.
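To make the discrete-realization idea concrete, here is a minimal Python sketch under invented assumptions (a linear force model, a hypothetical modulus parameter, and fabricated test data; none of this is from the paper): each replicate test spawns discrete realizations of the uncertain inputs and measured outputs, and the calibration parameter is back-solved exactly for each realization rather than fit to a continuous PDF.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: linear model F = E * A * strain, calibrating modulus E.
# Sparse replicate tests: measured forces at a fixed strain (fabricated data).
strain = 0.002
measured_force = np.array([10.1, 9.6, 10.4])   # kN, 3 replicate tests
area_nominal, area_cov = 50.0, 0.02            # mm^2, geometric variability
meas_sigma = 0.15                              # kN, measurement uncertainty

# Discrete-direct calibration: for each replicate, generate discrete
# realizations of the uncertain inputs/outputs and back-solve E exactly,
# rather than inferring a continuous PDF for E.
n_realizations = 200
E_discrete = []
for f in measured_force:
    area = rng.normal(area_nominal, area_cov * area_nominal, n_realizations)
    true_force = f + rng.normal(0.0, meas_sigma, n_realizations)
    E_discrete.append(true_force / (area * strain))  # E consistent with each realization
E_discrete = np.concatenate(E_discrete)

# Propagation: push each discrete E realization through the model
# at a new input condition.
new_strain = 0.003
predicted = E_discrete * area_nominal * new_strain
print(f"predicted force at strain {new_strain}: "
      f"{predicted.mean():.2f} +/- {predicted.std():.2f} kN")
```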
Journal Article

A Comparison of Methods for Representing and Aggregating Uncertainties Involving Sparsely Sampled Random Variables - More Results

2013-04-08
2013-01-0946
This paper discusses the treatment of uncertainties corresponding to relatively few samples of random-variable quantities. The importance of this topic extends beyond experimental data uncertainty to situations involving uncertainty in model calibration, validation, and prediction. With very sparse samples it is not practical to have a goal of accurately estimating the underlying variability distribution (probability density function, PDF). Rather, a pragmatic goal is that the uncertainty representation should be conservative so as to bound a desired percentage of the actual PDF, say 95% included probability, with reasonable reliability. A second, opposing objective is that the representation not be overly conservative: it should only minimally overestimate the random-variable range corresponding to the desired percentage of the actual PDF. The presence of the two opposing objectives makes the sparse-data uncertainty representation problem an interesting and difficult one.
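The conservative-yet-tight goal described above is the classic tolerance-interval trade-off. The sketch below uses a standard two-sided normal tolerance interval (Howe's approximation) as one candidate representation of the kind such a comparison might include; the data are fabricated and the method choice is illustrative, not the paper's.

```python
import numpy as np
from scipy import stats

def normal_tolerance_factor(n, coverage=0.95, confidence=0.95):
    """Two-sided normal tolerance factor k (Howe's approximation):
    xbar +/- k*s bounds `coverage` of the population with `confidence`."""
    z = stats.norm.ppf((1.0 + coverage) / 2.0)
    chi2 = stats.chi2.ppf(1.0 - confidence, df=n - 1)
    return z * np.sqrt((n - 1) * (1.0 + 1.0 / n) / chi2)

# Sparse sample of a random-variable quantity (fabricated data).
samples = np.array([4.8, 5.3, 5.1, 4.6, 5.5])
n, xbar, s = len(samples), samples.mean(), samples.std(ddof=1)

k = normal_tolerance_factor(n)
print(f"bounds on 95% of the population (95% confidence): "
      f"[{xbar - k * s:.2f}, {xbar + k * s:.2f}]")
# With n this small, k is far above 1.96: the interval is deliberately
# conservative, illustrating the over-coverage cost weighed against
# the reliability of bounding the desired fraction of the actual PDF.
```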
Journal Article

Comparison of Several Model Validation Conceptions against a “Real Space” End-to-End Approach

2011-04-12
2011-01-0238
This paper explores some of the important considerations in devising a practical and consistent framework and methodology for working with experiments and experimental data in connection with modeling and prediction. The paper outlines a pragmatic and versatile “real-space” approach within which experimental and modeling uncertainties (correlated and uncorrelated, systematic and random, aleatory and epistemic) are treated to mitigate risk in modeling and prediction. The elements of data conditioning, model conditioning, model validation, hierarchical modeling, and extrapolative prediction under uncertainty are examined. An appreciation can be gained for the constraints and difficulties at play in devising a viable end-to-end methodology. The considerations and options are many, and a large variety of viewpoints and precedents exist in the literature, as surveyed here. Rationale is given for the various choices taken in assembling the novel real-space end-to-end framework.
Journal Article

Efficiencies from Spatially-Correlated Uncertainty and Sampling in Continuous-Variable Ordinal Optimization

2008-04-14
2008-01-0708
A very general and robust approach to solving continuous-variable optimization problems involving uncertainty in the objective function is through the use of ordinal optimization. At each step in the optimization problem, improvement is based only on a relative ranking of the uncertainty effects on local design alternatives, rather than on precise quantification of the effect. One simply asks “Is that alternative better or worse than this one?” rather than “How much better or worse is that alternative than this one?” The answer to the latter question requires precise characterization of the uncertainty, with the corresponding sampling/integration expense for precise resolution. By looking at things from an ordinal ranking perspective instead, the trade-off between computational expense and vagueness in the uncertainty characterization can be managed to make cost-effective stepping decisions in the design space.
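A minimal sketch of why correlated sampling makes ordinal comparisons cheap, using an invented noisy objective and a made-up noise structure: when nearby design alternatives share most of their uncertainty realization, the noise largely cancels in the pairwise difference, so the sign of the difference (all that ordinal stepping needs) is resolved with very few samples.

```python
import numpy as np

rng = np.random.default_rng(1)

def objective(x, noise):
    """Hypothetical noisy objective: smooth trend plus uncertainty."""
    return (x - 2.0) ** 2 + noise

x_a, x_b, n = 1.0, 1.2, 8   # two neighboring design alternatives, few samples

# Spatially correlated sampling: nearby design points share most of their
# uncertainty realization, so the noise largely cancels in the difference.
shared = rng.normal(0.0, 0.5, n)      # correlated noise component
local_a = rng.normal(0.0, 0.1, n)     # small point-specific components
local_b = rng.normal(0.0, 0.1, n)
diff_corr = objective(x_b, shared + local_b) - objective(x_a, shared + local_a)

# Independent sampling: the difference inherits both full noise variances
# (0.51 matches the total noise std of the correlated case).
diff_ind = (objective(x_b, rng.normal(0.0, 0.51, n))
            - objective(x_a, rng.normal(0.0, 0.51, n)))

print(f"true difference: {(x_b - 2)**2 - (x_a - 2)**2:+.2f}")
print(f"correlated:  mean {diff_corr.mean():+.3f}, std {diff_corr.std():.3f}")
print(f"independent: mean {diff_ind.mean():+.3f}, std {diff_ind.std():.3f}")
# Ordinal optimization needs only the sign of the difference to step,
# so the cheap correlated estimate is enough to decide "better or worse".
```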
Technical Paper

Type X and Y Errors and Data & Model Conditioning for Systematic Uncertainty in Model Calibration, Validation, and Extrapolation

2008-04-14
2008-01-1368
This paper introduces and develops the concept of “Type X” and “Type Y” errors in model validation and calibration, and their implications for extrapolative prediction. Type X error is non-detection of model bias because it is effectively hidden by the uncertainty in the experiments. Possible deleterious effects of Type X error can be avoided by mapping uncertainty into the model until it envelops the potential model bias, but this likely assigns a larger uncertainty than is needed to account for the actual bias (Type Y error). A philosophy of Best Estimate + Uncertainty modeling and prediction is probably best supported by taking the conservative choice of guarding against Type X error while accepting the downside of incurring Type Y error. An associated methodology involving data and model conditioning is presented and tested on a simple but rich test problem.
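As a toy numerical illustration of the enveloping idea (all numbers and the simple 2-sigma rule are invented for this sketch; the paper's conditioning methodology is more involved), one can grow the model's uncertainty until it covers the worst discrepancy that experimental uncertainty cannot explain, guarding against Type X error at the cost of some Type Y over-coverage:

```python
import numpy as np

# Hypothetical model prediction and validation data (fabricated numbers).
model_prediction = 10.0
model_sigma = 0.2                            # initial model uncertainty (1-sigma)
experiments = np.array([10.9, 11.2, 10.7])   # replicate experimental results
exp_sigma = 0.25                             # experimental uncertainty (1-sigma)

# Potential bias hidden by experimental scatter (Type X risk): the part of
# the discrepancy that 2-sigma experimental uncertainty cannot explain.
worst_gap = np.max(np.abs(experiments - model_prediction)) - 2 * exp_sigma

# Condition the model: inflate its uncertainty until its 2-sigma band
# envelops the remaining gap. This likely overstates the true bias,
# which is exactly the accepted Type Y error.
conditioned_sigma = max(model_sigma, worst_gap / 2.0)
print(f"conditioned model: {model_prediction} "
      f"+/- {2 * conditioned_sigma:.2f} (2-sigma)")
```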
Technical Paper

A Paradigm of Model Validation and Validated Models for Best-Estimate-Plus-Uncertainty Predictions in Systems Engineering

2007-04-16
2007-01-1746
What constitutes a validated model? What are the criteria that allow one to defensibly claim to be using a validated model in an analysis? These questions get to the heart of what model validation really implies (conceptually, operationally, interpretationally, etc.), and these details are currently the subject of substantial debate in the V&V community. This is perhaps because many contemporary paradigms of model validation have a limited modeling scope in mind, so the validation paradigms do not span the different modeling regimes and purposes that are important in engineering. This paper discusses the different modeling regimes and purposes that a validation theory should span, and then proposes a validation paradigm that appears to span them. The author's criterion for validated models proceeds from a desire to meet an end objective of “best estimate plus uncertainty” (BEPU) in model predictions.