In the ordinary process of estimating uncertainty in model predictions, one usually looks to a set of calibration experiments from which the model can be parameterized; the resulting discrete set of parameter vectors is then used to approximate the joint probability distribution of the parameters. That parameter uncertainty is propagated through the model to obtain predictive uncertainty. A key observation here is that the modeler will usually attempt to find a unique "best" parameter vector to match each calibration experiment, and it is these "best" parameter vectors that are used to estimate the parameter uncertainty.
In the work presented here, it is shown that for complex models (those having more than a few parameters) each experiment can be fit equally well by a multitude of parameter vectors. It is also shown that when these large numbers of candidate parameter vectors are compiled, the resulting model predictions may exhibit substantially more variance than they would without consideration of the non-uniqueness issue.
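The effect described above can be illustrated with a deliberately simple toy model (the two-parameter model, thresholds, and synthetic data below are illustrative assumptions, not the models or data of this work). Here the calibration observable constrains only the product of the two parameters, so many parameter vectors fit the calibration data equally well; a predicted quantity that depends on the parameters individually then varies widely across the ensemble:

```python
import numpy as np

rng = np.random.default_rng(0)

def model(a, b, x):
    # Toy calibration observable: depends only on the product a*b,
    # so (a, b) is not uniquely identified by the calibration data.
    return a * b * x

# Synthetic calibration data generated with "true" parameters a=2, b=3.
x_cal = np.linspace(1.0, 5.0, 20)
y_cal = model(2.0, 3.0, x_cal)

# Collect every sampled parameter vector that fits the calibration
# data "equally well" (RMSE below an illustrative threshold).
candidates = []
for _ in range(10_000):
    a = rng.uniform(0.5, 10.0)
    b = rng.uniform(0.5, 10.0)
    rmse = np.sqrt(np.mean((model(a, b, x_cal) - y_cal) ** 2))
    if rmse < 0.5:
        candidates.append((a, b))
candidates = np.array(candidates)

# A predicted quantity of interest that is NOT a function of a*b alone;
# its spread across the accepted ensemble is the extra predictive
# variance contributed by parameter non-uniqueness.
qoi = candidates[:, 0] + candidates[:, 1]
print(f"{len(candidates)} well-fitting vectors, "
      f"QoI range: {qoi.min():.2f} to {qoi.max():.2f}")
```

Had only a single "best" (a, b) pair been retained, this spread in the predicted quantity would have gone unnoticed.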
The contribution of non-uniqueness to prediction uncertainty is illustrated on two very different sorts of model. In the first case, Johnson-Cook models for a titanium alloy are parameterized to match calibration experiments on three different alloy samples at different temperatures and strain rates. The resulting ensemble of parameter vectors is used to predict peak stress in a different experiment. In the second case, an epidemiological model is calibrated to historical data, and the parameter vectors are used to calculate a quantity of interest and the uncertainty of that quantity.
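For reference, the standard Johnson-Cook flow stress form that such calibrations fit can be sketched as follows (the parameters A, B, n, C, m are the quantities being calibrated; the reference strain rate and temperatures below are illustrative placeholders, not the values fitted in this work):

```python
import numpy as np

def johnson_cook_stress(eps_p, eps_dot, T,
                        A, B, n, C, m,
                        eps_dot_ref=1.0, T_ref=298.0, T_melt=1878.0):
    """Standard Johnson-Cook flow stress:
        sigma = (A + B*eps_p^n) * (1 + C*ln(eps_dot/eps_dot_ref))
                * (1 - T*^m)

    eps_p    : equivalent plastic strain
    eps_dot  : strain rate [1/s]
    T        : temperature [K]
    A, B, n, C, m : material parameters (calibrated per experiment)
    Reference rate and temperatures are placeholder values.
    """
    T_star = (T - T_ref) / (T_melt - T_ref)  # homologous temperature
    return ((A + B * eps_p ** n)
            * (1.0 + C * np.log(eps_dot / eps_dot_ref))
            * (1.0 - T_star ** m))

# At the reference rate and temperature the rate and thermal terms drop
# out, leaving the strain-hardening term alone (hypothetical constants).
sigma = johnson_cook_stress(eps_p=0.04, eps_dot=1.0, T=298.0,
                            A=1000.0, B=700.0, n=0.5, C=0.03, m=1.0)
```

Evaluating such a function over an ensemble of well-fitting (A, B, n, C, m) vectors, rather than a single best fit, yields the spread in predicted peak stress discussed above.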