A PAC-Bayesian approach to generative models

Lead Research Organisation: University College London
Department Name: Computer Science

Abstract

This PhD project aims to provide theoretical guarantees on the generalisation abilities of (deep) neural networks. Kernel methods and neural networks have recently been used to learn the structure of data efficiently (both computationally and in terms of predictive performance), including distributions, densities, generative models, conditional models and data set similarity measures, to name but a few. This project focuses on kernels that are defined parametrically through deep neural networks, for example where images are mapped through a convolutional network to a lower-dimensional space with more uniform support. These models achieve impressive performance, and it has been speculated that the hierarchical structure of many generative processes underlies the success of deep neural networks.
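The kind of kernel described above can be illustrated with a minimal sketch, assuming a PyTorch-style setup; the architecture, names and dimensions below are illustrative assumptions rather than the project's actual model. A kernel is defined parametrically through a convolutional embedding network phi, with k(x, y) = <phi(x), phi(y)>.

```python
import torch
import torch.nn as nn


class ConvEmbedding(nn.Module):
    """Maps images to a lower-dimensional embedding space (illustrative architecture)."""

    def __init__(self, embed_dim: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def nn_kernel(phi: nn.Module, x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Gram matrix of the learned kernel k(x, y) = <phi(x), phi(y)>."""
    return phi(x) @ phi(y).T


phi = ConvEmbedding()
x = torch.randn(8, 1, 28, 28)   # batch of 8 greyscale images
y = torch.randn(5, 1, 28, 28)   # batch of 5 greyscale images
K = nn_kernel(phi, x, y)        # kernel matrix of shape (8, 5)
```

Training the embedding parameters (for instance against a distribution-matching or adversarial objective) then amounts to learning the kernel itself.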

We intend to contribute to the emerging theory of generalisation for deep neural networks. Very few results have been obtained so far, and most of them rely on PAC-Bayes theory, which is therefore regarded as a very promising direction for understanding how deep neural networks are able to generalise to new data and/or tasks. This PhD project will leverage PAC-Bayes theory to study generative adversarial networks (GANs) and to explore how transfer learning and few-shot learning can be improved.
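For reference, a standard PAC-Bayes bound of the kind this theory provides (Maurer's refinement of McAllester's bound, stated here for losses in [0, 1]; the bounds developed in the project may differ): for any prior P fixed before seeing the data, with probability at least 1 - delta over an i.i.d. sample S of size n, simultaneously for all posteriors Q,

```latex
\mathbb{E}_{h \sim Q}\!\left[R(h)\right]
  \;\le\;
\mathbb{E}_{h \sim Q}\!\left[\hat{R}_S(h)\right]
  + \sqrt{\frac{\mathrm{KL}(Q \,\Vert\, P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}},
```

where R(h) is the true risk, \hat{R}_S(h) the empirical risk on S, and KL the Kullback-Leibler divergence. The complexity penalty is controlled by KL(Q || P) rather than by a parameter count, which is why such bounds remain meaningful for heavily over-parameterised networks.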

Studentship Projects

Project Reference   Relationship   Related To     Start        End          Student Name
EP/S021566/1                                      01/04/2019   30/09/2027
2251009             Studentship    EP/S021566/1   23/09/2019   22/09/2023   Felix Richard Biggs