Encoding No-Arbitrage Constraints in Neural Networks

Lead Research Organisation: University of Oxford

Abstract

Neural networks (NNs) form an expressive family of statistical models. While this makes them applicable in a wide range of settings, our understanding of how NNs perform inference is limited, and in certain contexts this leads to implausible or even inadmissible results. To illustrate the latter, consider the problem of predicting a patient's blood pressure from a history of health data. A blood pressure reading consists of two values - the systolic and the diastolic pressure - whose difference is by definition non-negative. However, a prediction produced by a simple NN need not respect this constraint, leading to an implausible prognosis. Similar constraints occur in areas such as physics, economics and medicine, and call for carefully designed NN architectures.
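One common remedy, sketched below, is to hard-wire the constraint into the network's output layer. The following is a minimal PyTorch illustration (the architecture and all names are hypothetical, not the project's model): the network predicts the diastolic pressure and a non-negative gap, so the systolic prediction can never fall below the diastolic one.

```python
import torch
import torch.nn as nn

class BloodPressureNet(nn.Module):
    """Predicts (systolic, diastolic) with systolic >= diastolic by construction."""
    def __init__(self, n_features, hidden=32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, 2),
        )

    def forward(self, x):
        h = self.body(x)
        diastolic = h[:, 0:1]
        gap = nn.functional.softplus(h[:, 1:2])  # softplus >= 0 enforces the gap
        systolic = diastolic + gap
        return torch.cat([systolic, diastolic], dim=1)

net = BloodPressureNet(n_features=10)
pred = net(torch.randn(4, 10))
assert (pred[:, 0] >= pred[:, 1]).all()  # constraint holds for any input
```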


Option pricing is a prototypical example of constrained inference. An option is a financial contract that gives the holder the right, but not the obligation, to buy or sell an asset at a predetermined exercise date and price. Option prices have a strongly constrained structure that follows from financial considerations. Inconsistently priced options give rise to so-called arbitrage opportunities, which allow a trader to build a portfolio that generates profit without risk.
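To make this constrained structure concrete, the following sketch (an illustrative check, not part of the project) tests two standard static no-arbitrage conditions for call prices on a strike grid: prices must be non-increasing and convex in the strike, since a violation can be monetised with a call-spread or butterfly portfolio.

```python
import numpy as np

def violates_no_arbitrage(strikes, calls, tol=1e-10):
    """Static no-arbitrage checks for call prices on a strike grid:
    prices must be non-increasing and convex in the strike."""
    strikes = np.asarray(strikes, dtype=float)
    calls = np.asarray(calls, dtype=float)
    slopes = np.diff(calls) / np.diff(strikes)  # call-spread cost per unit strike
    monotone = np.all(slopes <= tol)            # non-increasing in strike
    convex = np.all(np.diff(slopes) >= -tol)    # slopes non-decreasing
    return not (monotone and convex)

print(violates_no_arbitrage([90, 100, 110], [12.0, 8.0, 6.0]))   # False: admissible
# Here the middle price is too high: buying the 90- and 110-strike calls and
# selling two 100-strike calls costs 12 + 6 - 20 = -2, yet the position never
# pays out a negative amount - a riskless profit.
print(violates_no_arbitrage([90, 100, 110], [12.0, 10.0, 6.0]))  # True: arbitrage
```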


Our specific interest is in Bermudan options, which have several exercise dates. If two Bermudan contracts are identical except for the permissible exercise dates, and one admits all exercise dates of the other, it fetches at least as high a price. This leads to a monotonicity constraint over sets of exercise dates, whose discrete nature differs from more classical gradient-based restrictions such as convexity. We tackle this by combining the traditional pricing paradigm of partial differential equations (PDEs), which naturally encodes the no-arbitrage considerations, with the expressiveness of NNs.
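This monotonicity can be illustrated numerically with a standard Cox-Ross-Rubinstein binomial tree (a textbook method used here purely for illustration; the project itself couples PDE-based pricing with NNs). The sketch below prices a Bermudan put for two nested sets of exercise dates and confirms that enlarging the set never lowers the price.

```python
import numpy as np

def bermudan_put_crr(S0, K, r, sigma, T, n_steps, exercise_steps):
    """Price a Bermudan put on a Cox-Ross-Rubinstein binomial tree.
    exercise_steps: time steps (between 1 and n_steps - 1) at which early
    exercise is permitted; the option is always exercisable at maturity."""
    dt = T / n_steps
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    disc = np.exp(-r * dt)
    p = (np.exp(r * dt) - d) / (u - d)  # risk-neutral up-probability
    # payoff at maturity over all terminal nodes
    S = S0 * u ** np.arange(n_steps, -1, -1) * d ** np.arange(0, n_steps + 1)
    V = np.maximum(K - S, 0.0)
    # backward induction: discounted continuation value, plus early exercise
    for m in range(n_steps - 1, -1, -1):
        S = S0 * u ** np.arange(m, -1, -1) * d ** np.arange(0, m + 1)
        V = disc * (p * V[:-1] + (1 - p) * V[1:])
        if m in exercise_steps:
            V = np.maximum(V, K - S)
    return V[0]

few = bermudan_put_crr(100, 100, 0.05, 0.2, 1.0, 50, {25})
more = bermudan_put_crr(100, 100, 0.05, 0.2, 1.0, 50, {10, 25, 40})
assert more >= few  # a larger set of exercise dates can only raise the price
```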


Control of McKean-Vlasov Dynamics with Absorption and Positive Feedback

Risk in financial systems is frequently modelled in the form of exogenous shocks. In reality, however, we often observe macroscopic effects that arise endogenously from the accumulation of small-scale events. In a banking network, for example, the default of a single bank may trigger a cascade of bankruptcies that leads to a global collapse. We can capture such dynamics through so-called interacting particle systems (IPS) with absorption and positive feedback: if a particle (e.g. a bank) hits a given barrier (the bank defaults), it is removed from the system, and as a result the other particles are pushed towards the barrier (the other banks incur a loss as a result of the default).
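A minimal simulation sketch of such a system (stylised dynamics with hypothetical parameters, not the project's model) shows the mechanism: particles diffuse, are absorbed at zero, and each absorption pushes the survivors towards the barrier.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ips(n_particles=10_000, x0=1.0, sigma=1.0, alpha=0.5,
                 T=1.0, n_steps=1_000):
    """Euler scheme for an interacting particle system with absorption at 0
    and positive feedback: every newly absorbed particle pushes the
    survivors down by alpha / n_particles (a stylised contagion effect)."""
    dt = T / n_steps
    x = np.full(n_particles, x0)
    alive = np.ones(n_particles, dtype=bool)
    loss = 0.0  # fraction of particles absorbed so far
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), n_particles)
        x[alive] += sigma * dw[alive]
        newly_dead = alive & (x <= 0.0)
        alive &= ~newly_dead
        d_loss = newly_dead.sum() / n_particles
        x[alive] -= alpha * d_loss  # feedback from defaults on the survivors
        loss += d_loss
    return loss

print("fraction of defaulted particles:", simulate_ips())
```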


Large systems, in which each individual particle has a negligible effect on the collective and the particles are nearly statistically independent, can be studied through a representative particle that is subject to the aggregated state and feedback of the other particles. Such limiting systems are referred to as McKean-Vlasov systems.
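A canonical mean-field formulation from the literature on positive-feedback models, which the project's dynamics resemble but need not match exactly, couples a representative particle X to its own law through the loss process L:

```latex
% Representative particle absorbed at the origin; \alpha > 0 is the feedback
% strength and the loss process L is determined by the law of X itself:
\begin{aligned}
  dX_t &= \sigma\, dW_t - \alpha\, dL_t, \\
  \tau &= \inf\{\, t \ge 0 : X_t \le 0 \,\}, \\
  L_t &= \mathbb{P}(\tau \le t).
\end{aligned}
```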


In our project we introduce a central controller that seeks to stabilise a large system at minimal cost. A central bank, for instance, may inject cash to avert the collapse of the banking system while keeping inflation at a minimum. We study under which conditions optimal controls exist. In contrast with classical stochastic control theory (with a single agent as opposed to a system of particles), the state space of a McKean-Vlasov system is not finite-dimensional. Instead, the aggregated state is described by a (sub)probability distribution, which requires a novel analysis.
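Schematically, and in purely illustrative notation (not the project's exact objective), such a control problem might be phrased as follows: the controller chooses an injection rate \nu_t (e.g. cash support) at running cost c(\nu_t), penalised by the terminal fraction of defaults L_T.

```latex
% Illustrative central-controller problem for the dynamics above:
\inf_{\nu}\; \mathbb{E}\Big[ \int_0^T c(\nu_t)\, dt + \kappa\, L_T \Big]
\quad \text{subject to} \quad
dX_t = \nu_t\, dt + \sigma\, dW_t - \alpha\, dL_t .
```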


These projects are aligned with the following EPSRC research areas: (1) artificial intelligence technologies, (2) mathematical analysis, (3) non-linear systems, (4) operational research.


The industrial partner/collaborator for the first project is JP Morgan.

Planned Impact

Probabilistic modelling permeates financial services, healthcare, technology and other service industries crucial to the UK's continuing social and economic prosperity; these sectors are major users of stochastic algorithms for data analysis, simulation, systems design and optimisation. There is a major and growing shortage of experts in this area, and the UK's success in addressing it, through cross-disciplinary research and industry expertise in computing, analytics and finance, will directly impact the international competitiveness of UK companies and the quality of services delivered by government institutions.
By training highly skilled experts equipped to build, analyse and deploy probabilistic models, the CDT in Mathematics of Random Systems will contribute to
- sharpening the UK's research lead in this area, and
- meeting the needs of industry across the technology, finance, government and healthcare sectors.

MATHEMATICS, THEORETICAL PHYSICS and MATHEMATICAL BIOLOGY

The explosion of novel research areas in stochastic analysis requires the training of young researchers capable of meeting new scientific challenges and maintaining the UK's lead in this area. The partners are at the forefront of many recent developments and are ideally positioned to train the next generation of UK scientists to tackle these challenges.
The theory of regularity structures, pioneered by Hairer (Imperial), has generated a ground-breaking approach to singular stochastic partial differential equations (SPDEs) and opened the way to solving longstanding problems in the physics of random interface growth and quantum field theory, efforts spearheaded by Hairer's group at Imperial. The theory of rough paths, initiated by TJ Lyons (Oxford), is undergoing a renewal spurred by applications in data science and systems control, led by the Oxford group in conjunction with Cass (Imperial). Pathwise and infinite-dimensional methods in stochastic analysis, with applications to robust modelling in finance and control, have been developed by both groups.
Applications of probabilistic modelling in population genetics, mathematical ecology and precision healthcare are active areas in which our groups have recognised expertise.

FINANCIAL SERVICES and GOVERNMENT

The large-scale computerisation of financial markets and retail finance, and the advent of massive financial data sets, are radically changing the landscape of financial services. They demand new profiles of experts with strong analytical and computing skills, together with familiarity with big-data analysis and data-driven modelling, that are not matched by current MSc and PhD programmes. Financial regulators (Bank of England, FCA, ECB) are investing in analytics and modelling to face this challenge. We will develop a novel training and research agenda adapted to these needs by leveraging the considerable expertise of our teams in quantitative modelling in finance and our extensive experience in partnerships with financial institutions and regulators.

DATA SCIENCE

Probabilistic algorithms, such as stochastic gradient descent and Monte Carlo tree search, underlie the impressive achievements of deep learning methods (a minimal example is sketched below). Stochastic control provides the theoretical framework for understanding and designing reinforcement learning algorithms. A deeper understanding of these algorithms can pave the way to improved algorithms with higher predictability and 'explainable' results, which is crucial for applications.
We will train experts who can blend this deeper understanding of algorithms with knowledge of the application at hand to go beyond pure data analysis and develop data-driven models and decision-aid tools.
There is high demand for such expertise in the technology, healthcare and finance sectors, and great enthusiasm from our industry partners. Knowledge transfer will be enhanced through internships, co-funded studentships and pathways to entrepreneurship.
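The sketch referred to above (a toy example with synthetic data, not a project deliverable) shows stochastic gradient descent fitting a linear model from noisy mini-batch gradients:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic data: y = 2x + 1 plus noise, to be recovered by SGD.
X = rng.normal(size=1000)
y = 2.0 * X + 1.0 + 0.1 * rng.normal(size=1000)

w, b, lr = 0.0, 0.0, 0.1
for step in range(2000):
    i = rng.integers(0, len(X), size=32)  # random mini-batch
    err = w * X[i] + b - y[i]             # residuals on the batch
    w -= lr * 2.0 * np.mean(err * X[i])   # noisy gradient step for the slope
    b -= lr * 2.0 * np.mean(err)          # noisy gradient step for the intercept
print(f"w ~ {w:.2f}, b ~ {b:.2f}")        # close to the true values 2 and 1
```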


Studentship Projects

Project Reference | Relationship | Related To | Start | End | Student Name
EP/S023925/1 | | | 01/04/2019 | 30/09/2027 |
2435698 | Studentship | EP/S023925/1 | 01/10/2020 | 30/09/2024 | Philipp Jettkant