Active Optimisation

Lead Research Organisation: Imperial College London
Department Name: Mathematics

Abstract

A central problem in data science is the optimisation of some objective function. In this project, we would like to assess whether one can endow optimisation schemes with the same sort of intelligence associated with (Bayes optimal) agents who plan optimal courses of action.
Bayesian optimisation has a long history, starting with Kushner's seminal work on one-dimensional Wiener processes. It has appeared in derivative-free optimisation and has more recently been applied to machine learning and robotics. The key idea behind Bayesian optimisation is to treat the objective function as a random function and then use a small number of samples to infer its form and move towards its optimum. These schemes use various acquisition functions (which specify the data point to sample next), including the probability of improvement, expected improvement and the upper confidence bound. In the same spirit, we propose an expected free energy acquisition function.
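To fix ideas, a minimal Bayesian optimisation loop of the kind described above can be sketched as follows. This is an illustrative toy only: the objective function, kernel lengthscale, grid and noise level are all assumptions made for the demo, and the acquisition shown is the classical expected improvement rather than the expected free energy proposed here.

```python
import math
import numpy as np

def rbf(A, B, ls=0.5):
    # Squared-exponential (RBF) kernel between two sets of 1-D points
    d = A[:, None] - B[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

def gp_posterior(X, y, Xs, noise=1e-4):
    # Gaussian-process regression: posterior mean and variance at Xs
    K = rbf(X, X) + noise * np.eye(len(X))
    Ks = rbf(X, Xs)
    mu = Ks.T @ np.linalg.solve(K, y)
    var = 1.0 + noise - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
    return mu, np.maximum(var, 1e-12)

def expected_improvement(mu, var, best):
    # EI for minimisation: expected improvement over the incumbent best
    sigma = np.sqrt(var)
    z = (best - mu) / sigma
    cdf = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
    pdf = np.exp(-0.5 * z ** 2) / math.sqrt(2.0 * math.pi)
    return (best - mu) * cdf + sigma * pdf

def objective(x):
    # Hypothetical black-box objective used only for this demo
    return np.sin(3 * x) + 0.5 * x ** 2

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, 4)        # small initial design
y = objective(X)
grid = np.linspace(-2, 2, 200)   # candidate sample locations

for _ in range(10):              # sequential, sample-efficient evaluation
    mu, var = gp_posterior(X, y, grid)
    ei = expected_improvement(mu, var, y.min())
    x_next = grid[np.argmax(ei)]   # sample where EI is largest
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))
```

After a handful of evaluations, the incumbent best value approaches the global minimum of the toy objective, despite the surrogate never seeing more than a few samples.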
We propose to recast any optimisation problem as a process of active inference by regarding the evaluation of the objective function as an observation - and the choice of the location in parameter space as an action. On this view, selecting the best course of action translates into choosing the best location in parameter space at which to evaluate the objective function (i.e. the log likelihood function in Bayesian model inversion).
The solution we propose is to leverage recent advances in theoretical neurobiology, which suggest that the most favourable (i.e. Bayes optimal) actions are those that minimise the free energy expected following the action. Technically, the expected free energy of a particular action is the usual variational free energy (used in variational Bayes) expected under the posterior predictive density of outcomes and any hidden states generating those outcomes. Crucially, expected free energy can be decomposed into uncertainty-resolving and goal-seeking aspects, which endows the search with an optimal combination of epistemic and utilitarian terms - a combination lacking in most current schemes.
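In the notation common in the active inference literature (sketched here for orientation only, with Q the approximate posterior under action a, s hidden states, o outcomes and P(o) the prior preferences over outcomes), this decomposition can be written as:

```latex
G(a) = \mathbb{E}_{Q(o,s \mid a)}\big[\ln Q(s \mid a) - \ln P(o,s)\big]
     \approx -\underbrace{\mathbb{E}_{Q(o \mid a)}\, D_{\mathrm{KL}}\big[Q(s \mid o,a) \,\|\, Q(s \mid a)\big]}_{\text{epistemic value (expected information gain)}}
             \;-\; \underbrace{\mathbb{E}_{Q(o \mid a)}\big[\ln P(o)\big]}_{\text{pragmatic value (expected log preference)}}
```

Minimising G(a) therefore simultaneously maximises the expected information gain about hidden states (resolving uncertainty) and the expected log preference over outcomes (goal seeking).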
Most importantly, by selecting those actions (i.e. parameters to evaluate) that minimise expected free energy, one implicitly resolves the exploration-exploitation dilemma. In other words, if the form of the objective function is unknown, one can either devote evaluation resources to reducing uncertainty about its form (i.e. explore) or strive to find its extremum (i.e. exploit). Because expected free energy subsumes both imperatives, active searches of this sort initially resolve uncertainty about the form of the landscape, until enough uncertainty has been reduced for prior preferences to dominate. In the current setting, these prior preferences are simple: the search expects to sample high log likelihood outcomes by moving to new regions of parameter space, where the parameters generate outcomes through the log likelihood function of a generative model of those outcomes.
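As a toy illustration of this exploration-to-exploitation transition (every detail below - the candidate set, belief model, noise level and scoring weights - is an assumption made for the demo, not the project's scheme), one can score a discrete set of candidate locations by an epistemic term (expected information gain) plus a pragmatic term (a preference for low objective values), updating a Gaussian belief after each evaluation:

```python
import numpy as np

rng = np.random.default_rng(1)
truth = np.array([3.0, 1.0, -2.0, 0.5])   # hidden objective values
mu = np.zeros(4)                           # belief means per location
var = np.full(4, 4.0)                      # belief variances per location
noise = 0.25                               # observation noise variance

def efe_score(mu, var, beta=1.0):
    # Epistemic term: expected information gain from sampling a location
    info_gain = 0.5 * np.log(1.0 + var / noise)
    # Pragmatic term: prior preference for low objective values
    preference = -beta * mu
    return info_gain + preference

picks = []
for _ in range(12):
    a = int(np.argmax(efe_score(mu, var)))    # best action under the score
    picks.append(a)
    obs = truth[a] + rng.normal(0.0, np.sqrt(noise))
    # Conjugate Gaussian update of the belief at the sampled location
    prec = 1.0 / var[a] + 1.0 / noise
    mu[a] = (mu[a] / var[a] + obs / noise) / prec
    var[a] = 1.0 / prec
```

Early picks are driven by the information-gain term, since uncertainty is high everywhere; once the variances shrink, the preference term dominates and the search settles on the location with the lowest inferred value.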
Alignment to EPSRC's strategies and research areas

This project falls within the EPSRC Mathematical Sciences "Statistics and applied probability" research area.

Any companies or collaborators involved

This project is undertaken in collaboration with the Department of Mathematics at Imperial College London and the Wellcome Centre for Human Neuroimaging at University College London.

Planned Impact

Probabilistic modelling permeates the financial services, healthcare, technology and other service industries crucial to the UK's continuing social and economic prosperity, which are major users of stochastic algorithms for data analysis, simulation, systems design and optimisation. There is a major and growing shortage of experts in this area, and the success of the UK in addressing this shortage through cross-disciplinary research and industry expertise in computing, analytics and finance will directly impact the international competitiveness of UK companies and the quality of services delivered by government institutions.
By training highly skilled experts equipped to build, analyse and deploy probabilistic models, the CDT in Mathematics of Random Systems will contribute to:
- sharpening the UK's research lead in this area and
- meeting the needs of industry across the technology, finance, government and healthcare sectors.

MATHEMATICS, THEORETICAL PHYSICS and MATHEMATICAL BIOLOGY

The explosion of novel research areas in stochastic analysis requires the training of young researchers capable of facing the new scientific challenges and maintaining the UK's lead in this area. The partners are at the forefront of many recent developments and ideally positioned to successfully train the next generation of UK scientists for tackling these exciting challenges.
The theory of regularity structures, pioneered by Hairer (Imperial), has generated a ground-breaking approach to singular stochastic partial differential equations (SPDEs) and opened the way to solving longstanding problems in the physics of random interface growth and quantum field theory, spearheaded by Hairer's group at Imperial. The theory of rough paths, initiated by TJ Lyons (Oxford), is undergoing a renewal spurred by applications in data science and systems control, led by the Oxford group in conjunction with Cass (Imperial). Pathwise methods and infinite-dimensional methods in stochastic analysis, with applications to robust modelling in finance and control, have been developed by both groups.
Applications of probabilistic modelling in population genetics, mathematical ecology and precision healthcare are active areas in which our groups have recognised expertise.

FINANCIAL SERVICES and GOVERNMENT

The large-scale computerisation of financial markets and retail finance and the advent of massive financial data sets are radically changing the landscape of financial services, requiring new profiles of experts with strong analytical and computing skills as well as familiarity with Big Data analysis and data-driven modelling, not matched by current MSc and PhD programmes. Financial regulators (Bank of England, FCA, ECB) are investing in analytics and modelling to face this challenge. We will develop a novel training and research agenda adapted to these needs by leveraging the considerable expertise of our teams in quantitative modelling in finance and our extensive experience in partnerships with financial institutions and regulators.

DATA SCIENCE

Probabilistic algorithms, such as stochastic gradient descent and Monte Carlo tree search, underlie the impressive achievements of deep learning methods. Stochastic control provides the theoretical framework for understanding and designing reinforcement learning algorithms. A deeper understanding of these algorithms can pave the way to designing improved algorithms with higher predictability and 'explainable' results, crucial for applications.
We will train experts who can blend a deeper understanding of algorithms with knowledge of the application at hand, to go beyond pure data analysis and develop data-driven models and decision-aid tools.
There is high demand for such expertise in the technology, healthcare and finance sectors, and great enthusiasm from our industry partners. Knowledge transfer will be enhanced through internships, co-funded studentships and pathways to entrepreneurship.

Studentship Projects

Project Reference   Relationship   Related To     Start        End          Student Name
EP/S023925/1        -              -              01/04/2019   30/09/2027   -
2281351             Studentship    EP/S023925/1   01/10/2019   30/09/2023   Lancelot Da Costa