Gaussian Noise Injections in Neural Networks

Lead Research Organisation: University of Oxford
Department Name: Statistics

Abstract

Gaussian noise injections (GNIs) are a simple and widely used regularisation method for training neural networks, in which additive or multiplicative Gaussian noise is injected into the network's activations at every iteration of the optimisation algorithm, typically stochastic gradient descent (SGD). Though the regularisation conferred by GNIs can be observed empirically, and there have been many studies on the benefits of noising data, the mechanisms by which these injections operate are not fully understood. This project aims to uncover these mechanisms to better understand the regularising effect of GNIs.
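The mechanism described above can be sketched in a few lines: at every SGD step, fresh Gaussian noise is added to the hidden activations before the loss and gradients are computed. This is a minimal sketch (a tiny tanh network on synthetic data; every name, shape, and hyperparameter is our own illustrative choice, not the project's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data and a one-hidden-layer network (all shapes are illustrative).
X = rng.normal(size=(64, 5))
y = X @ rng.normal(size=(5, 1)) + 0.1 * rng.normal(size=(64, 1))
W1 = 0.3 * rng.normal(size=(5, 16))
W2 = 0.3 * rng.normal(size=(16, 1))

sigma = 0.1  # standard deviation of the injected Gaussian noise
lr = 0.05    # SGD learning rate

def predict(X, W1, W2):
    return np.tanh(X @ W1) @ W2  # noise-free forward pass, for evaluation

initial_loss = float(np.mean((predict(X, W1, W2) - y) ** 2))

for _ in range(500):
    h_clean = np.tanh(X @ W1)
    # Additive GNI: fresh Gaussian noise on the activations at every step.
    h = h_clean + sigma * rng.normal(size=h_clean.shape)
    err = (h @ W2 - y) / len(X)  # gradient of 0.5 * MSE w.r.t. the output
    grad_W2 = h.T @ err
    grad_W1 = X.T @ ((err @ W2.T) * (1 - h_clean**2))
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

final_loss = float(np.mean((predict(X, W1, W2) - y) ** 2))
```

The noise is resampled at every iteration, so the network never sees the same perturbed activations twice; the evaluation pass at the end is noise-free.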
The project will first and foremost study the `explicit' effect such injections have on a neural network's loss landscape. Early results suggest that this effect is equivalent to a prior in Hilbert space that penalises high-frequency components, favouring neural networks with low-frequency spectra in the Fourier domain. Using this connection to Hilbert space, we will characterise the minima GNIs induce and, in doing so, develop `optimal' noise levels to guide practitioners using such methods. An auxiliary aim is to ascertain what benefits such a prior in Hilbert space may confer and to uncover new effects of GNIs.
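One way a frequency-domain prior of this kind can be made concrete is as a penalty that weights a function's squared Fourier magnitudes by frequency, so that functions whose spectra concentrate at low frequencies pay the smallest price. The sketch below is our own illustrative construction, not the project's result:

```python
import numpy as np

def frequency_penalty(f_vals):
    # Weight squared Fourier magnitudes by squared frequency: high-frequency
    # content is penalised heavily, low-frequency content barely at all.
    f_hat = np.fft.rfft(f_vals)
    freqs = np.fft.rfftfreq(len(f_vals))
    return float(np.sum(freqs**2 * np.abs(f_hat) ** 2) / len(f_vals))

# Two functions of equal amplitude, sampled on a uniform grid.
x = np.linspace(0.0, 1.0, 256, endpoint=False)
smooth = np.sin(2 * np.pi * x)        # low-frequency: small penalty
wiggly = np.sin(2 * np.pi * 40 * x)   # high-frequency: large penalty
```

Here `wiggly` incurs a penalty orders of magnitude larger than `smooth`, so adding such a term to a training loss steers the fitted function toward a low-frequency spectrum.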
Secondly, the project will study these injections from the perspective of the dynamics of SGD. GNIs are likely to alter these dynamics by introducing additional noise into SGD's update steps, a process we call the `implicit' effect. Preliminary results indicate that this added noise is likely to induce a bias that degrades neural network performance. By uncovering the mechanisms behind this bias, we will be able to develop new, unbiased noise injection procedures likely to yield networks that outperform those trained with standard GNIs.
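A toy Monte Carlo experiment (our own construction, not the project's analysis) shows how per-step activation noise makes the gradient stochastic and shifts its mean away from the noise-free gradient:

```python
import numpy as np

rng = np.random.default_rng(1)

# Tiny illustrative setup; all names and shapes are assumptions of ours.
x = rng.normal(size=(1, 4))
y = np.array([[0.5]])
W1 = 0.5 * rng.normal(size=(4, 8))
W2 = 0.5 * rng.normal(size=(8, 1))
sigma = 0.5  # injected-noise scale

def grad_W2(noise):
    # Gradient of 0.5 * (pred - y)^2 w.r.t. W2, with the given noise added
    # to the hidden activations before the output is computed.
    h = np.tanh(x @ W1) + noise
    return h.T @ (h @ W2 - y)

clean = grad_W2(np.zeros((1, 8)))  # noise-free gradient

# Average the GNI gradient over many fresh injections: its mean does not
# coincide with the noise-free gradient. (For this quadratic loss the mean
# shift works out to sigma**2 * W2.)
noisy_mean = np.mean(
    [grad_W2(sigma * rng.normal(size=(1, 8))) for _ in range(50000)], axis=0
)
```

The systematic offset between `noisy_mean` and `clean` is one simple way a noise injection can push the average update step in a direction the noise-free loss would not.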
Finally, as a more specific case study, we will turn to Variational Autoencoders (VAEs), a class of networks that intrinsically use GNIs as part of their normal mode of operation. VAEs offer a range of benefits over deterministic autoencoders (AEs), which do not use GNIs. Given our studies on the explicit and implicit effects of GNIs, we will aim to explain some of the behaviour of VAEs, relative to AEs, in terms of their optimal minima, their bias, and their robustness to adversarial attack. In doing so, our aim is to develop novel VAEs that are unbiased and that leverage our other findings on the benefits and downsides of the explicit and implicit effects of GNIs.
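The connection between VAEs and GNIs comes from the reparameterisation step: sampling the latent code as a mean plus scaled Gaussian noise is itself a noise injection on the encoder's output. A minimal sketch (all shapes and values are our own, illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in encoder outputs for one batch: a VAE's encoder produces a mean
# and a log-variance for each latent dimension (the values are illustrative).
mu = rng.normal(size=(32, 8))
log_var = -np.ones((32, 8))

def reparameterise(mu, log_var, rng):
    # z = mu + std * eps with eps ~ N(0, I): an additive Gaussian noise
    # injection on the encoder's activations, scaled per latent dimension.
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

z = reparameterise(mu, log_var, rng)  # stochastic latent code fed to the decoder

# A deterministic autoencoder (AE) would instead pass the mean straight through:
z_det = mu
```

Because the noise is resampled at every training step, a VAE is, in effect, trained with a GNI at its bottleneck, while the AE's bottleneck stays deterministic.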

Publications


Studentship Projects

Project Reference  Relationship  Related To    Start       End         Student Name
EP/N509711/1                                   01/10/2016  30/09/2021
2299582            Studentship   EP/N509711/1  01/10/2019  31/03/2022  Alexander Camuto
EP/R513295/1                                   01/10/2018  30/09/2023
2299582            Studentship   EP/R513295/1  01/10/2019  31/03/2022  Alexander Camuto