What works best for self-supervised learning & why?

Lead Research Organisation: University College London
Department Name: Computer Science

Abstract

Recently it was shown that a network trained to estimate how much natural images had been rotated from their canonical orientation (self-supervised training) developed a rich representation of image structure. If the network was then fine-tuned for object class recognition (supervised training) using semantically labelled data, it achieved good performance with a fraction of the data that would have been needed had it been trained in a supervised manner from the start. Self-supervised training thus offers a route to high-performing networks when labelled data is scarce. Other image manipulations that could be used to drive self-supervised learning include:
non-lossy: contrast inversion; hue rotation; warping
lossy: masking; bit reduction; noisy perturbation; spectral whitening; grayscaling
This PhD will develop and apply a battery of assessments for methods of self-supervised learning, and will develop and test a theoretical explanation of what works best, possibly relating that explanation to existing neuroscience concepts such as predictive coding.
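As a minimal sketch of the rotation pretext task described above: each image in a batch is rotated by a random multiple of 90 degrees, and the rotation index becomes the self-supervised label a network would be trained to predict. The function name, batch shapes, and use of NumPy here are illustrative assumptions, not the project's actual code.

```python
import numpy as np

def make_rotation_batch(images, rng):
    """Rotate each image (N, H, W) by a random multiple of 90 degrees.

    Returns the rotated images and the rotation index (0-3), which
    serves as the label for the self-supervised pretext task.
    """
    labels = rng.integers(0, 4, size=len(images))
    rotated = np.stack([np.rot90(img, k) for img, k in zip(images, labels)])
    return rotated, labels

# Toy usage: 8 random 32x32 "images" stand in for natural images.
rng = np.random.default_rng(0)
imgs = rng.random((8, 32, 32))
rot, y = make_rotation_batch(imgs, rng)
print(rot.shape, y)
```

A network trained to predict `y` from `rot` never sees a semantic label, yet must learn image structure (orientation cues such as sky, ground, and object shape) to succeed; those features are what transfer when fine-tuning on labelled data.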

Publications


Studentship Projects

Project Reference: EP/S021566/1
Start: 01/04/2019  End: 30/09/2027

Project Reference: 2250955
Relationship: Studentship
Related To: EP/S021566/1
Start: 23/09/2019  End: 22/09/2023
Student Name: Augustine Mavor-Parker