Uncertainty Quantification at the Exascale (EXA-UQ)

Lead Research Organisation: University of Exeter
Department Name: Mathematics

Abstract

Exascale computing offers the prospect of running numerical models, for example of nuclear fusion and the climate, at unprecedented resolution and fidelity. Such models are nevertheless still subject to uncertainty, and we need to be able to quantify those uncertainties (and, for example, use data on model outputs to calibrate the model inputs). Exascale computing comes at a cost: we will never be able to run huge ensembles of models on exascale computers. Naive methods, such as Monte Carlo, where we simply sample from the probability distribution of the model inputs, run a huge ensemble of models and produce a sample from the output distribution, will not be feasible. We need to develop uncertainty quantification methodology that allows us to perform sensitivity and uncertainty calculations efficiently, and effectively, with the minimum number of exascale model runs.
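To make the cost argument concrete, here is a minimal sketch of the naive Monte Carlo approach described above, with a cheap stand-in function playing the role of the expensive simulator (the function and input distributions are purely illustrative): thousands of runs are needed for stable estimates, which is exactly what an exascale budget rules out.

```python
import random
import statistics

def simulator(x1, x2):
    """Cheap stand-in for an expensive numerical model (hypothetical)."""
    return x1 ** 2 + 0.5 * x2

random.seed(0)

# Sample from the input distribution and run the model once per sample.
n_runs = 10_000  # feasible for this toy, impossible for an exascale model
outputs = [simulator(random.gauss(0.0, 1.0), random.uniform(-1.0, 1.0))
           for _ in range(n_runs)]

# The empirical output distribution gives uncertainty estimates directly.
print("mean :", statistics.fmean(outputs))
print("stdev:", statistics.stdev(outputs))
```

The estimates converge only at the usual Monte Carlo rate, so halving the uncertainty on the mean costs four times as many model runs.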

Our methods are based on the idea of an emulator. An emulator is a statistical approximation linking model inputs and outputs in a fast, non-linear way. It also includes a measure of its own uncertainty, so we know how well it is approximating the original numerical model. Our emulators are based on Gaussian processes. Normally we would run a designed experiment and use its results to train the emulator. Because of the cost of exascale computing, we use a hierarchy of models, from fast, low-fidelity versions, through higher-fidelity, more computationally expensive ones, to the very expensive, very high-fidelity one at the apex of the hierarchy. Building a joint emulator for all the models in the hierarchy allows us to borrow strength from the low-fidelity ones to emulate the exascale models. Although such ideas have been around for a number of years, they have not been exploited much for very large models.
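As an illustration of the emulator idea (a sketch, not the project's implementation), the following fits a Gaussian-process emulator with fixed, hand-picked hyperparameters to six runs of a cheap stand-in simulator; the predictive standard deviation is the emulator's measure of its own uncertainty, which collapses at the training runs and grows between them.

```python
import numpy as np

def simulator(x):
    """Cheap stand-in for the expensive model (hypothetical)."""
    return np.sin(3.0 * x) + x

def rbf_kernel(a, b, lengthscale=0.4, variance=1.0):
    """Squared-exponential covariance between input vectors a and b."""
    d = a[:, None] - b[None, :]
    return variance * np.exp(-0.5 * (d / lengthscale) ** 2)

# A small designed experiment: train the emulator on 6 model runs.
x_train = np.linspace(0.0, 2.0, 6)
y_train = simulator(x_train)

# Standard GP regression equations, with a small jitter term for stability.
K = rbf_kernel(x_train, x_train) + 1e-8 * np.eye(len(x_train))
x_test = np.linspace(0.0, 2.0, 101)
K_star = rbf_kernel(x_test, x_train)
mean = K_star @ np.linalg.solve(K, y_train)             # predictive mean
cov = rbf_kernel(x_test, x_test) - K_star @ np.linalg.solve(K, K_star.T)
sd = np.sqrt(np.clip(np.diag(cov), 0.0, None))          # predictive uncertainty

# Training inputs fall at every 20th test point on this grid.
print("max sd at training points:", sd[::20].max())
print("max sd between them:     ", sd.max())
```

A multi-fidelity version would couple one such process per level of the hierarchy, so cheap low-fidelity runs shrink the uncertainty band around the expensive model.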

We will expand on the existing theory in a number of new ways. First, we will look at the problem of design. To exploit the hierarchy to its fullest extent we need an experimental design that allocates model runs to the correct layer of the model hierarchy. We will extend existing sequential design methodology to work with hierarchies of models, finding not only the optimal next set of inputs at which to run the model but also the level at which it should be run. We will also ensure that the sequential design is 'batch' sequential, allowing us to run ensembles rather than waiting for each run to return answers.
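A toy sketch of the batch-sequential idea: score candidate inputs at each fidelity level by uncertainty per unit cost, and greedily assemble a whole batch of (level, input) runs before any are launched. Here distance to the nearest existing design point stands in for emulator uncertainty (a real implementation would use the hierarchical GP's predictive variance), and the costs, grids and existing designs are all illustrative.

```python
# Batch-sequential design over (fidelity level, input) -- toy sketch.
# Uncertainty proxy: distance to the nearest point already in that level's design.
COSTS = {0: 1.0, 1: 4.0}             # level 0: cheap low fidelity; level 1: expensive
designs = {0: [0.1, 0.9], 1: [0.5]}  # runs already performed at each level
candidates = [i / 20 for i in range(21)]

def score(x, level):
    """Uncertainty proxy per unit cost for running input x at this level."""
    nearest = min(abs(x - d) for d in designs[level])
    return nearest / COSTS[level]

def next_batch(batch_size=4):
    """Greedily pick a batch of (level, input) runs, updating the proxy
    after each pick so the batch spreads out instead of clustering."""
    batch = []
    for _ in range(batch_size):
        level, x = max(((lv, c) for lv in designs for c in candidates),
                       key=lambda p: score(p[1], p[0]))
        batch.append((level, x))
        designs[level].append(x)   # treat the run as queued, not yet finished
    return batch

batch = next_batch()
print(batch)
```

With these illustrative costs the cheap level absorbs most of the batch, and the expensive level is chosen only once its cost-weighted uncertainty dominates, which is the allocation behaviour the design criterion is meant to capture.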

Because the inputs and outputs of exascale models are often fields of correlated values, we will develop methods for handling such high-dimensional inputs and outputs and for relating them to other levels of the hierarchy.
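One standard route for field-valued outputs, sketched here on synthetic data under the illustrative assumption that the fields are well summarised by a few spatial modes: project each training run's output field onto a low-dimensional principal-component basis, emulate the handful of coefficients rather than the full field, and reconstruct predicted fields from the basis.

```python
import numpy as np

rng = np.random.default_rng(1)

# 30 training runs, each producing a 500-point output field (toy data:
# every field is a combination of two smooth spatial modes plus noise).
grid = np.linspace(0.0, 1.0, 500)
modes = np.stack([np.sin(np.pi * grid), np.cos(2 * np.pi * grid)])
coeffs = rng.normal(size=(30, 2))
fields = coeffs @ modes + 0.01 * rng.normal(size=(30, 500))

# Principal components of the centred fields via SVD.
mean_field = fields.mean(axis=0)
U, S, Vt = np.linalg.svd(fields - mean_field, full_matrices=False)

# Keep just enough components to explain 99% of the variance: instead of
# emulating 500 correlated outputs we only emulate k coefficients per run.
explained = np.cumsum(S**2) / np.sum(S**2)
k = int(np.searchsorted(explained, 0.99) + 1)
reduced = (fields - mean_field) @ Vt[:k].T       # shape (30, k)
reconstructed = reduced @ Vt[:k] + mean_field

print("components kept:", k)
print("max reconstruction error:", np.abs(reconstructed - fields).max())
```

The same basis can be shared across levels of the hierarchy, so coefficients of a cheap model's fields can inform the emulator of the expensive model's fields.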

Finally, we will investigate whether AI methods other than Gaussian processes can be used to build efficient emulators.
