Universal Learning across Tasks, Domains and Modalities

Lead Research Organisation: University College London
Department Name: Computer Science

Abstract

Research Questions: This project aims to investigate how different machine learning tasks can be integrated into a single predictive model, a problem known as multi-task learning, or universal learning when the tasks involve different types of data. Most of the tasks considered will be supervised, i.e. models will be trained on combinations of input data and ground-truth annotations, such as identifying labelled objects in images or generating captions for videos. The project will address the task interference problem that arises when handling diverse tasks, and will investigate whether including tasks that span different modalities, such as text, images and video, can produce models that perform well on tasks for which little data is available.
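To make the shared-model setting concrete, the following is a minimal sketch of hard parameter sharing across tasks; the module names, feature sizes and the two example tasks are illustrative assumptions, not the project's actual architecture.

```python
# Minimal sketch of hard parameter sharing for multi-task learning
# (illustrative only; names and dimensions are assumptions).
import torch
import torch.nn as nn

class MultiTaskModel(nn.Module):
    def __init__(self, in_dim=2048, hidden=512, n_classes=80, caption_vocab=10000):
        super().__init__()
        # Shared representation reused by every task.
        self.backbone = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # One lightweight head per task; real tasks may differ in modality.
        self.heads = nn.ModuleDict({
            "classification": nn.Linear(hidden, n_classes),
            "captioning": nn.Linear(hidden, caption_vocab),  # stand-in for a decoder
        })

    def forward(self, x, task):
        return self.heads[task](self.backbone(x))

model = MultiTaskModel()
features = torch.randn(4, 2048)            # e.g. pooled image features
logits = model(features, task="classification")
print(logits.shape)                         # torch.Size([4, 80])
```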

Approach: The project will approach these problems by studying models that cascade the responses to different tasks, incorporating the coarse-to-fine computation paradigm into multi-task inference. We will explore variants in which the sequence of task execution is determined dynamically so as to maximize the performance of a given task, and consider extensions to the multi-modality problem. Evaluation will focus on low-shot task learning, measuring the reduction in learning complexity attained by exploiting standard tasks for which large-scale annotation exists (e.g. object classification and detection) to address tasks with minimal or no supervision (e.g. monocular 3D object reconstruction).
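As a rough illustration of cascading task responses, the sketch below feeds a coarse task's prediction into a finer task's head, in the spirit of coarse-to-fine computation; the particular tasks, dimensions and wiring are assumed for illustration only.

```python
# Sketch of cascaded multi-task inference: the coarse task's response is
# concatenated with the shared features and consumed by the fine task.
# All names and sizes here are illustrative assumptions.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Linear(2048, 512), nn.ReLU())
coarse_head = nn.Linear(512, 80)           # e.g. object classification
fine_head = nn.Linear(512 + 80, 9)         # e.g. a pose-like task conditioned on the class

x = torch.randn(4, 2048)
h = backbone(x)
coarse_pred = coarse_head(h).softmax(dim=-1)
# The fine task consumes both the shared features and the coarse response.
fine_pred = fine_head(torch.cat([h, coarse_pred], dim=-1))
print(coarse_pred.shape, fine_pred.shape)   # torch.Size([4, 80]) torch.Size([4, 9])
```

A dynamic variant would choose the order of the heads at inference time rather than fixing coarse-before-fine as above.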

Novel Engineering Content: The focus of the project will primarily be on applying neural-network-based models to tasks involving images, video, or text, so it will integrate elements of computer vision and natural language processing research. Given the disconnect between the theoretical benefits and the empirical results of models trained on multiple tasks, there will also be an element of learning theory: this will explore whether certain features of deep learning models, such as the gradient computation or the measure of task similarity, can be modified to reduce issues like task interference.
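One existing example of modifying the gradient computation to reduce task interference is gradient projection in the style of PCGrad (Yu et al., 2020): when two tasks' gradients conflict, the conflicting component is removed. The sketch below is a generic illustration of that idea, not the project's proposed method.

```python
# Sketch of gradient-level interference reduction: if two tasks' gradients
# have a negative dot product, project one onto the normal plane of the other
# (in the spirit of PCGrad, Yu et al. 2020). Illustrative only.
import numpy as np

def project_conflicting(g_a, g_b):
    """Remove from g_a the component that conflicts with g_b."""
    dot = np.dot(g_a, g_b)
    if dot < 0:  # gradients point in opposing directions -> interference
        g_a = g_a - dot / (np.dot(g_b, g_b) + 1e-12) * g_b
    return g_a

g_task1 = np.array([1.0, -2.0, 0.5])
g_task2 = np.array([-0.5, 1.0, 1.0])
combined = project_conflicting(g_task1, g_task2) + project_conflicting(g_task2, g_task1)
print(combined)  # update direction with the conflicting components removed
```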

Publications


Studentship Projects

Project Reference | Relationship | Related To   | Start      | End        | Student Name
EP/S021566/1      |              |              | 01/04/2019 | 30/09/2027 |
2250981           | Studentship  | EP/S021566/1 | 23/09/2019 | 22/09/2023 | Dan Stoddart