Optimisation of Probabilistic Deep Learning Approaches for Hardware Acceleration

Lead Research Organisation: Imperial College London
Department Name: Electrical and Electronic Engineering

Abstract

The focus of this project is to investigate and develop methodologies and approximation schemes for the efficient mapping of probabilistic models in deep learning, such as Bayesian Neural Networks, onto low-power embedded devices such as CPUs (Central Processing Units), NPUs (Neural Processing Units), and FPGAs (Field Programmable Gate Arrays).

Despite significant progress in neural network acceleration, it is well known that conventional neural networks can be prone to overfitting and poor generalisation, i.e. the model fails to generalise from the training data to unseen test data. Typical deep neural network models also do not provide reliable estimates of uncertainty alongside their predictions: they can be overconfident on unseen data. Robust uncertainty estimates are important in real-life scenarios such as autonomous driving and healthcare, where decisions may be made with uncertainty metrics as a basis. There exists a plethora of research, often under a probabilistic framework, aiming to tackle this problem in deep learning, e.g. Bayesian Neural Networks and Deep Ensembles. However, these approaches are often computationally cumbersome. The goal of this research is to improve our understanding of the computational aspects of such models and to explore optimisation strategies across the full stack (from model to hardware) for their efficient implementation on low-power embedded platforms such as CPUs and NPUs.
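As a concrete illustration of why such probabilistic approaches are computationally cumbersome, the following is a minimal sketch (not the project's method) of Deep Ensemble prediction: each of N ensemble members runs a full forward pass, the per-member class probabilities are averaged, and the entropy of the averaged prediction serves as a common uncertainty metric. The logit values and member count here are hypothetical; a real ensemble would multiply inference cost by N, which is exactly the overhead this project targets.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def ensemble_predict(logits_per_member):
    """Average the softmax outputs of all ensemble members.

    Each member requires its own forward pass, so compute and
    memory scale linearly with the ensemble size N.
    """
    probs = [softmax(l) for l in logits_per_member]
    n = len(probs)
    return [sum(p[i] for p in probs) / n for i in range(len(probs[0]))]

def predictive_entropy(probs):
    """Entropy of the averaged prediction: higher means more uncertain."""
    return -sum(p * math.log(p) for p in probs if p > 0.0)

# Hypothetical logits from three ensemble members on one input.
# When members disagree, the averaged distribution flattens and
# the predictive entropy (uncertainty) rises.
agreeing = [[4.0, 0.0, 0.0], [3.5, 0.1, 0.0], [4.2, 0.0, 0.2]]
disagreeing = [[4.0, 0.0, 0.0], [0.0, 4.0, 0.0], [0.0, 0.0, 4.0]]

low_u = predictive_entropy(ensemble_predict(agreeing))
high_u = predictive_entropy(ensemble_predict(disagreeing))
```

The same N-forward-pass structure appears in Monte Carlo approximations of Bayesian Neural Networks, which is why sampling-based uncertainty estimation maps poorly onto low-power hardware without approximation.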

Area: Artificial intelligence technologies

Publications


Studentship Projects

Project Reference: EP/W522004/1 | Start: 01/10/2021 | End: 30/09/2026
Project Reference: 2621264 | Relationship: Studentship | Related To: EP/W522004/1 | Start: 02/10/2021 | End: 30/09/2025 | Student Name: Guoxuan Xia