Optimisation for Game Theory and Machine Learning

Lead Research Organisation: University of Oxford
Department Name: Computer Science

Abstract

The project lies in the general area of mathematical analysis of algorithms and computational complexity. Thus it focuses on provable performance guarantees for algorithms, and on the fundamental limits to the solvability of certain computational problems. This line of work is important in developing our understanding of why and how the associated problems can be solved.

The problems of interest to this project consist of various related problems in optimisation (both continuous and discrete), some of which arise in Algorithmic Game Theory, and some of which arise in machine learning and neural networks. For example, the process of training a neural network involves searching for values of the weights of the network that minimise its disagreement with a data set. This search usually uses some version of gradient descent, and is thus treated as a problem of continuous local optimisation. It is widely observed that the efficacy of such learning systems is poorly understood: both the predictive power of the trained system, and the ability (in practice) of local optimisation to find a solution reasonably quickly, deserve to be better understood. Multiple solutions may exist, and we address questions about the trade-off between the quality of a solution and how hard it is to find. "Generative Adversarial Networks" have recently attracted widespread interest; these neural networks model the problem of learning a probability distribution as a zero-sum game. This in turn leads to new "minimax" optimisation problems, and to questions about how efficiently they can be solved.
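
To illustrate why minimax optimisation differs from ordinary minimisation, the following is a minimal sketch (our illustrative example, not part of the project's work; the bilinear objective f(x, y) = x * y, the step size, and the starting point are all assumptions chosen for simplicity). Naive simultaneous gradient descent/ascent on this simplest of zero-sum games fails to converge: the iterates spiral away from the unique equilibrium at (0, 0).

    # A minimal sketch (illustrative assumptions throughout): naive simultaneous
    # gradient descent/ascent on the bilinear zero-sum game f(x, y) = x * y,
    # where one player chooses x to minimise f and the other chooses y to
    # maximise it. The unique equilibrium is (0, 0).
    eta = 0.1            # step size (an arbitrary illustrative choice)
    x, y = 1.0, 1.0      # starting point

    for t in range(100):
        gx, gy = y, x                        # gradients: df/dx = y, df/dy = x
        x, y = x - eta * gx, y + eta * gy    # descend in x, ascend in y

    # The distance from the equilibrium grows by a factor of sqrt(1 + eta^2)
    # at every step, so the iterates diverge rather than converge.
    print(x, y)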

The resulting optimisation problems are diverse, and include problems of discrete optimisation (in the case of clustering) and of multi-objective optimisation (in the case of generative adversarial networks). Their complexity-theoretic analysis is a unifying theme, using tools from the study of search problems for which efficiently-checkable solutions are guaranteed to exist. The project aims to advance the theory of hard problems in this domain, which in turn helps to explain why certain algorithms are efficacious, via an understanding of which features of the problems arising in practice they exploit. We are also interested in designing novel algorithms, or novel refinements of existing algorithms, with better performance guarantees than pre-existing ones. For example, this might build on "optimistic gradient descent", which has been shown to have desirable properties in some scenarios of multi-objective optimisation; a sketch of that method follows below.
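
As a concrete (and again purely illustrative) sketch of the kind of refinement meant here, optimistic gradient descent/ascent modifies each step by using the previous gradient as a prediction of the next one. On the bilinear game from the sketch above, this small change makes the iterates converge to the equilibrium where the naive method diverged; the step size and iteration count are assumptions chosen for the example, not parameters from the project.

    # Optimistic gradient descent/ascent on the same game f(x, y) = x * y
    # (a sketch under assumed parameters, not the project's algorithm).
    eta = 0.1
    x_prev, y_prev = 1.0, 1.0    # previous iterate
    x, y = 1.0, 1.0              # current iterate

    for t in range(2000):
        gx, gy = y, x                        # gradients at the current iterate
        gx_prev, gy_prev = y_prev, x_prev    # gradients at the previous iterate
        x_prev, y_prev = x, y
        # "Optimistic" step: follow 2 * (current gradient) - (previous gradient).
        x = x - eta * (2 * gx - gx_prev)
        y = y + eta * (2 * gy - gy_prev)

    print(x, y)    # tends to (0, 0), the unique equilibrium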
