Algorithmic Trading with Stochastic Control and Reinforcement Learning

Lead Research Organisation: Brunel University London
Department Name: Mathematics

Abstract

We aim to research the field of algorithmic trading. In particular, we will explore optimal execution, market-making, and statistical arbitrage strategies. Stochastic control has been used extensively to optimise trading strategies in each of these three areas, given a stochastic model for asset prices and volatility. Reinforcement learning, by contrast, provides a model-free approach to solving the same optimisation problems.
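To make the contrast concrete, the model-free approach can be illustrated with a minimal tabular Q-learning sketch. The toy environment below (a single state with two actions of differing mean reward) is a hypothetical illustration, not the project's actual trading environment; all parameter values are assumptions chosen for clarity.

```python
import random

def q_learning(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1, seed=0):
    """Learn action values from simulated rewards, with no model of the
    reward distribution (the sense in which the method is model-free)."""
    rng = random.Random(seed)
    q = {0: [0.0, 0.0]}  # one state, two actions
    for _ in range(episodes):
        state = 0
        # epsilon-greedy action selection
        if rng.random() < epsilon:
            action = rng.randrange(2)
        else:
            action = max(range(2), key=lambda a: q[state][a])
        # simulated stochastic reward: action 1 is better on average
        reward = rng.gauss(1.0 if action == 1 else 0.5, 0.1)
        # episode ends immediately, so no bootstrapped next-state term
        q[state][action] += alpha * (reward - q[state][action])
    return q

q = q_learning()
```

After training, the learned value of the better action dominates, even though the agent was never given the reward model, only samples from it.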
There have been several attempts to apply machine learning to quantitative trading problems, and reinforcement learning in particular is an area of promise. According to Sutton and Barto [SB18], reinforcement learning seeks to learn how to choose actions that maximise rewards in a given environment. It has some similarities to the stochastic control approach in that it also seeks to optimise a value function. As a machine learning approach, however, it incorporates empirical data and must be trained on historical or simulated data to reach an optimal value. Guo et al. [GLSW17] explain how reinforcement learning has been applied to fields such as "neuroscience, game theory, multi-agent systems, operations research and control systems".

In 2006, Nevmyvaka, Feng and Kearns [NFK06] applied reinforcement learning to optimise trading strategies using limit order book data from NASDAQ, and found significant improvements over "submit and leave" strategies. In their 2014 paper, Hendricks and Wilcox [HW14] applied reinforcement learning to improve execution performance under the Almgren-Chriss model by roughly 10%, training and testing their model on South African equities. Fernandez-Tapia [FT15] uses reinforcement learning to design market-making strategies: he uses a discrete-time version of the Avellaneda-Stoikov model, and performs on-line learning to update the optimal bid and ask quotes as new data arrives. This is an area of active research in both academia and industry, and we intend to explore it in our future research.
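The Avellaneda-Stoikov model that [FT15] discretises admits closed-form quotes, which can be sketched as follows. The formulas are the standard reservation-price and spread expressions from the 2008 paper; the parameter values in the example call are illustrative assumptions, not values from any of the cited works.

```python
import math

def avellaneda_stoikov_quotes(s, q, gamma, sigma, k, t, T):
    """Return (bid, ask) quotes around mid-price s.

    s: mid-price, q: signed inventory, gamma: risk aversion,
    sigma: volatility, k: order-arrival intensity decay,
    t, T: current and terminal time.
    """
    tau = T - t
    # reservation price: mid-price shifted against current inventory,
    # so a long market-maker quotes lower to shed the position
    r = s - q * gamma * sigma ** 2 * tau
    # optimal total spread around the reservation price
    spread = gamma * sigma ** 2 * tau + (2.0 / gamma) * math.log(1.0 + gamma / k)
    return r - spread / 2.0, r + spread / 2.0

# illustrative parameters: flat inventory, unit horizon
bid, ask = avellaneda_stoikov_quotes(s=100.0, q=0, gamma=0.1, sigma=2.0,
                                     k=1.5, t=0.0, T=1.0)
```

With zero inventory the quotes sit symmetrically around the mid-price; a nonzero inventory skews both quotes in the direction that unwinds the position, which is the behaviour the on-line learning in [FT15] recalibrates as new data arrives.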

People

Adam Hesse (Student)

Publications


Studentship Projects

Project Reference: EP/T518116/1 (01/10/2020 - 30/09/2025)
Studentship: 2495600, related to EP/T518116/1, 01/01/2021 - 28/02/2023, Student: Adam Hesse