Unbiased Inference for Complex Models

Lead Research Organisation: University of Oxford
Department Name: Statistics

Abstract

Large-scale mathematical models play an essential role in many applications throughout science. A prime example of a complex continuous model is research on climate change. The climate is usually modelled as a non-linear mathematical-physical system based on the coupling of a model of the atmosphere and an ocean circulation model. Both of these models involve a large set of state variables, such as temperature, humidity, wind, flow speed and pressure, to name but a few. The state of the system is modelled as the collection of these variables for every grid cell, and their evolution arises from a discretisation of the underlying continuous model. In order to make inference about the climate, the model needs to be run at a fixed discretisation. The accuracy of the inference depends on the number of times the model can be simulated under a finite computational budget, which in turn constrains the level of discretisation that can be afforded. This trade-off results in a widening gap between the models that we can simulate and the ones for which we can make sound statistical inference.

Bayesian methods are ubiquitous in statistical modelling and machine learning for the analysis of data. The aim is to model the posterior distribution, which in most cases is only available indirectly as an appropriate limit of a sequence of probability measures. The most prominent technique for accessing the posterior distribution is Markov chain Monte Carlo (MCMC). MCMC algorithms are flexible and generally applicable. However, they require a simulation of the full complex model at each step, which limits the number of steps possible within a fixed computational budget and thus demands a huge budget if a high level of accuracy is to be maintained. For this reason Monte Carlo methods are often neglected, even though they are the only methods targeting the correct posterior.

For a wide range of applications, for example in weather prediction, nuclear waste management and quantitative finance, the quantity to be inferred is often only given indirectly as an appropriate limit of distributions. The conventional approach to such an indirect representation requires a truncation. In most cases, the choice is either to run the MCMC chain for only a finite number of steps, to fix a finite discretisation, or to incorporate only a certain number of terms of a series expansion. However, all of these truncations lead to a systematic bias in the modelling of the target distribution, and the exact size of this bias is highly application-specific and very difficult to analyse.

The main motivation of this proposal is to establish, extend and improve schemes for direct unbiased inference and thus to remove systematic bias in Bayesian inference for complex models. In particular, it aims to improve established inference methods by incorporating a new class of unbiased estimators and to develop new inference algorithms that yield unbiased estimators for Bayesian inference. This approach naturally allows computations to be distributed across different discretisation levels, resulting in substantial computational gains that benefit a wide range of applications. The BBC has, for example, recently reported on the possibility of using mobile phone records in West Africa to predict the spread of Ebola. Mathematical models have also become a factor in the strategic planning of the Metropolitan Police in London, where they are used to predict crime and to identify areas with a high probability of certain crimes so that police presence can be increased accordingly. Many more such recent examples can be found in the press, underlining the importance of accurate models all around us. In the medium term, a sound statistical method allowing an unbiased evaluation of complex models would therefore improve a wide range of production processes in industry, politics and medicine, to name just a few.
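To make the idea of removing truncation bias concrete, one well-known construction in this area is the randomised-truncation ("debiasing") estimator in the spirit of McLeish and Rhee-Glynn: instead of fixing a truncation level, a random level is drawn and the level-wise increments are reweighted by their selection probabilities, so the estimator is unbiased for the limiting quantity while only ever evaluating finitely many levels. The following is a minimal Python sketch under illustrative assumptions; the toy limiting quantity (a truncated Taylor series for exp(x)) and the geometric level distribution are hypothetical stand-ins, not the estimators developed in this project.

import math
import numpy as np

rng = np.random.default_rng(1)

def level_approx(l, x):
    # Toy level-l approximation Y_l(x) of a limiting quantity Y(x): here the
    # Taylor series of exp(x) truncated after l + 1 terms.  In the applications
    # above, Y_l would instead involve running the model at discretisation
    # level l (this choice is a hypothetical stand-in).
    return sum(x**k / math.factorial(k) for k in range(l + 1))

def single_term_estimate(x, p=0.5):
    # Draw a random truncation level L with P(L = l) = p * (1 - p)**l and
    # return the reweighted increment (Y_L - Y_{L-1}) / P(L = l).  Its
    # expectation telescopes to lim_l Y_l(x), so it is unbiased for exp(x)
    # even though only one (random, typically small) level is evaluated.
    L = rng.geometric(p) - 1          # values 0, 1, 2, ...
    prob = p * (1.0 - p) ** L
    y_curr = level_approx(L, x)
    y_prev = level_approx(L - 1, x) if L > 0 else 0.0
    return (y_curr - y_prev) / prob

x = 1.0
estimates = [single_term_estimate(x) for _ in range(20_000)]
print(np.mean(estimates), math.exp(x))   # the average matches exp(x) in expectation

Averaging independent copies gives an unbiased estimate whose cost is random but finite in expectation; this is the mechanism that allows computations to be spread across discretisation levels.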

Planned Impact

The potential to extract understanding by analysing complex relationships becomes apparent on an almost daily basis. The BBC has recently reported on the possibility of using mobile phone records in West Africa to predict the spread of Ebola, and complex mathematical models have become a factor in the strategic planning of the Metropolitan Police to predict crime and to identify areas with a high probability of certain crimes, to name just two examples. However, a breakthrough to a computationally efficient, sound statistical method that is applicable to a wide range of complex models is still lacking, and currently the growth in model complexity is outpacing the statistical tools able to handle such complexity in reasonable computing time. This proposal offers a fundamentally novel approach to these challenges by carrying a recently established scheme for unbiased estimators, viable at low computational cost, over to other branches of statistical modelling and machine learning in academia. Not only the academic environment but also a wide range of UK industries will profit from this proposal. Computational statistics and machine learning are key analytical techniques in almost all sectors, such as the financial, pharmaceutical and defence industries, to name but a few. A sound statistical method for making inferences on the basis of complex models would be highly beneficial for time-critical responses to important decisions. We therefore expect the developments of this proposal to provide tools that help maintain the UK's leading edge in its industries.

The impact of this research project will be achieved through new methods, novel ideas and a better understanding of inference for, and optimisation of, complex models. The high demand for these methods stems from the tremendous interest in the evaluation of uncertainty in complex models. The spread of the resulting methods and ideas will mostly happen indirectly, and will thus only become effective over a time frame of 5 to 10 years. On a shorter time scale, the impact of the proposed research will mainly be felt in interdisciplinary research in academia, ranging from the statistical analysis of (social) networks to uncertainty quantification.

Apart from its impact on knowledge and applications in academia and the UK economy, the proposed research project will also benefit everyone working on it. New tools will be developed, understood and applied, and research skills will be refined accordingly. In particular, the project will improve the abilities of the PDRA to implement, develop and analyse complex models and the related statistical inference algorithms. As argued above, these skills are crucial for research in academia as well as in industry.
 
Description I have worked on developing new algorithms, and new insights into existing algorithms, that extract information from data. The work can be broadly classified into three strands:
1) Stochastic gradient methods have become extremely popular because they are used so successfully to train neural networks for deep learning. We have developed guidelines, and cautionary examples, for their use in other contexts such as sampling (see the sketch after this list).
2) Because data volumes have grown so much, there are more and more areas where models need to be blended with data that does not fit onto a single computer. For complex models this is a very difficult problem, and we have made progress on it.
3) Algorithms that help monitor complex engineering systems have to evaluate the huge number of possible ways in which the system can fail. By splitting these up in a clever way, we can apply these algorithms to larger systems.
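As a concrete illustration of strand 1, the sketch below runs stochastic gradient Langevin dynamics (SGLD) on a toy conjugate Gaussian model, where the exact posterior is known and the effect of the step size and minibatch noise can be checked directly. The model, step size, minibatch size and iteration counts are hypothetical choices made for illustration; they are not the guidelines developed in this project.

import numpy as np

rng = np.random.default_rng(0)

# Toy conjugate model: y_i ~ N(theta, 1) for i = 1..N, prior theta ~ N(0, 10).
theta_true = 2.0
N = 1_000
y = rng.normal(theta_true, 1.0, size=N)

def grad_log_post_estimate(theta, batch):
    # Unbiased minibatch estimate of the gradient of the log posterior:
    # grad log prior + (N / batch size) * sum of minibatch likelihood gradients.
    grad_prior = -theta / 10.0
    grad_lik = (N / len(batch)) * np.sum(batch - theta)
    return grad_prior + grad_lik

# SGLD update (Welling & Teh, 2011):
#   theta <- theta + (eps / 2) * gradient estimate + N(0, eps) noise.
eps = 1e-4            # fixed step size (hypothetical choice)
batch_size = 50
theta = 0.0
samples = []
for _ in range(20_000):
    batch = y[rng.choice(N, size=batch_size, replace=False)]
    theta += 0.5 * eps * grad_log_post_estimate(theta, batch) \
             + rng.normal(0.0, np.sqrt(eps))
    samples.append(theta)

# The exact posterior is Gaussian, so the sampler can be checked directly.
post_prec = 1.0 / 10.0 + N          # posterior precision
post_mean = np.sum(y) / post_prec   # posterior mean
print(np.mean(samples[5_000:]), post_mean)

The caution mentioned above is visible in this setting: both the discretisation of the Langevin diffusion and the minibatch gradient noise perturb the invariant distribution, so the step size has to be chosen or decreased carefully when accurate posterior summaries are required.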
Exploitation Route I have been leading the Data Study Groups at the Alan Turing Institute, a week-long event bringing together researchers and industry to work on real data science problems posed by industry. Six companies each present a real-world problem to an audience ranging from faculty members to postgraduate students, who then split into groups to work on the problems and start designing solutions. At the end of the week a report is written summarising the results obtained. Some of the methods developed in this grant have been applied to real-world problems during these events.
Sectors Energy; Environment; Financial Services, and Management Consultancy; Healthcare

 
Description This is very preliminary impact. During the Data Study Group https://www.turing.ac.uk/blog/industry-challenges-tackled-turing-data-study-groups/ some of the methods that have been the focus of this grant were applied to real-world data provided by the participating companies. I am involved in taking this forward by supervising 4 PhD students doing summer internships with the two energy companies involved (NationalGrid and Shell). Moreover, the postdoc working on this research has been hired by Citibank and uses related methodology there.
First Year Of Impact 2016
Sector Financial Services, and Management Consultancy
 
Description Collaborative Research Agreement
Amount £52,000 (GBP)
Organisation NHS Scotland 
Sector Public
Country United Kingdom
Start 04/2018 
End 04/2019
 
Description Warwick Impact Fund
Amount £72,420 (GBP)
Organisation University of Warwick 
Sector Academic/University
Country United Kingdom
Start 12/2017 
End 04/2019
 
Title PDSampler.jl: an efficient, flexible and expandable framework for samplers based on Piecewise Deterministic Markov Processes 
Description PDSampler.jl is a software package for Monte Carlo methods and data science using novel piecewise deterministic samplers. It is designed to provide an efficient, flexible and expandable framework for samplers based on Piecewise Deterministic Markov Processes and their applications, including the Bouncy Particle Sampler and the Zig-Zag Sampler (an illustrative sketch of a Zig-Zag sampler is given below this record). 
Type Of Technology Software 
Year Produced 2017 
Open Source License? Yes  
Impact An open-source framework for piecewise deterministic samplers, including the Bouncy Particle Sampler and the Zig-Zag Sampler, available for use in Monte Carlo and data science applications. 
URL https://github.com/alan-turing-institute/PDSampler.jl
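To give a flavour of the samplers this package implements, here is a minimal, self-contained sketch of a one-dimensional Zig-Zag sampler for a standard Gaussian target. It is written in plain Python for illustration and deliberately does not use the PDSampler.jl API; the target, the trajectory length and the discretisation grid are hypothetical choices.

import numpy as np

rng = np.random.default_rng(0)

# One-dimensional Zig-Zag sampler for a standard Gaussian target:
# potential U(x) = x^2 / 2, so the event (velocity-flip) rate is
# lambda(x, v) = max(0, v * U'(x)) = max(0, v * x), and between events
# the state moves deterministically along x(s) = x + v * s, with v in {-1, +1}.

def zigzag_skeleton(T=5_000.0, x0=0.0, v0=1.0):
    # Simulate the event skeleton (times, positions, velocities) up to time T.
    t, x, v = 0.0, x0, v0
    times, xs, vs = [t], [x], [v]
    while t < T:
        a = v * x
        e = rng.exponential(1.0)
        # First event time of a Poisson process with rate max(0, a + s),
        # obtained by inverting the integrated rate against an Exp(1) draw.
        tau = -a + np.sqrt(max(a, 0.0) ** 2 + 2.0 * e)
        t += tau
        x += v * tau
        v = -v                      # flip the velocity at the event
        times.append(t)
        xs.append(x)
        vs.append(v)
    return np.array(times), np.array(xs), np.array(vs)

def discretise(times, xs, vs, n=50_000):
    # Evaluate the piecewise linear trajectory on a regular time grid
    # so that ordinary Monte Carlo averages can be taken.
    grid = np.linspace(0.0, times[-2], n)
    idx = np.searchsorted(times, grid, side="right") - 1
    return xs[idx] + vs[idx] * (grid - times[idx])

times, xs, vs = zigzag_skeleton()
samples = discretise(times, xs, vs)
print(np.mean(samples), np.var(samples))   # should be close to 0 and 1

The key feature of piecewise deterministic samplers is visible in the sketch: the output is a continuous-time, piecewise linear trajectory described by a sparse event skeleton rather than a sequence of accept/reject moves, and expectations are computed as time averages along that trajectory.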
 
Description Retrospective Monte Carlo Workshop 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Other audiences
Results and Impact Research workshop
Year(s) Of Engagement Activity 2016
URL https://warwick.ac.uk/fac/sci/statistics/crism/workshops/rmca/