Detecting when deep learning goes wrong in medical image analysis

Lead Research Organisation: Aberystwyth University
Department Name: Computer Science

Abstract

Deep learning has made huge progress in both general and medical image analysis in recent years. Nevertheless, deep learning algorithms have been shown to fail silently on certain images: for example, a classifier may seriously misclassify an image while still reporting high confidence in the (incorrect) result, in the sense of a high softmax probability. Such errors could clearly be catastrophic for patient outcomes. This project would investigate the identification of medical images likely to prove problematic for a deep learning system. Previous work on general image classification has used a variety of approaches, such as: estimating the uncertainty in the input directly; using a reject function (i.e. training a separate good/bad image classifier); detecting anomalies relative to the training dataset (e.g. by finding nearest neighbours, or by reconstructing the image with an autoencoder); or identifying new classes not present in the original dataset. Recent work has developed a novel evaluation approach and concluded that current methods are unreliable in practice. In this PhD, existing approaches would be investigated in the context of medical image analysis - focussing on mammography and prostate MRI - and novel algorithms would be explored.
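As an illustration of the kind of baseline the abstract refers to, the sketch below flags predictions whose maximum softmax probability falls below a threshold so they can be referred for human review. This is a minimal, hedged example rather than the project's method: the logits, class meanings and threshold value are illustrative assumptions.

    import torch
    import torch.nn.functional as F

    def flag_uncertain(logits: torch.Tensor, threshold: float = 0.9):
        """Return predicted class, max softmax probability, and a 'refer for review' mask."""
        probs = F.softmax(logits, dim=1)           # per-image class probabilities
        confidence, predicted = probs.max(dim=1)   # highest probability and its class index
        refer = confidence < threshold             # True where the model looks unsure
        return predicted, confidence, refer

    if __name__ == "__main__":
        # Fake logits for a batch of 4 images and 2 classes (e.g. normal / suspicious); illustrative only.
        logits = torch.tensor([[4.0, 0.1],
                               [0.2, 0.3],
                               [2.5, 2.4],
                               [0.0, 5.0]])
        pred, conf, refer = flag_uncertain(logits)
        for i in range(len(pred)):
            print(f"image {i}: class={pred[i].item()} confidence={conf[i].item():.2f} refer={bool(refer[i])}")

The abstract also mentions detecting anomalous inputs by reconstructing them with an autoencoder. The second sketch scores images by reconstruction error, on the assumption that inputs unlike the data the autoencoder was trained on reconstruct poorly. The architecture, image size and cut-off are again assumptions for illustration; in practice the model would first be trained on in-distribution images.

    import torch
    import torch.nn as nn

    class TinyAutoencoder(nn.Module):
        """Deliberately small fully connected autoencoder; a real system would likely be convolutional."""
        def __init__(self, image_dim: int = 64 * 64):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(image_dim, 128), nn.ReLU(), nn.Linear(128, 32))
            self.decoder = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, image_dim))

        def forward(self, x):
            return self.decoder(self.encoder(x))

    def reconstruction_error(model: nn.Module, images: torch.Tensor) -> torch.Tensor:
        """Per-image mean squared reconstruction error; higher values suggest an unfamiliar input."""
        with torch.no_grad():
            flat = images.view(images.size(0), -1)
            return ((model(flat) - flat) ** 2).mean(dim=1)

    if __name__ == "__main__":
        model = TinyAutoencoder()
        batch = torch.rand(8, 64 * 64)               # stand-in for flattened greyscale scans
        errors = reconstruction_error(model, batch)
        cut_off = errors.mean() + 2 * errors.std()   # simple illustrative threshold
        print(errors, errors > cut_off)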

Studentship Projects

Project Reference | Relationship | Related To   | Start      | End        | Student Name
EP/S023992/1      |              |              | 01/04/2019 | 30/09/2027 |
2431364           | Studentship  | EP/S023992/1 | 01/10/2020 | 30/09/2024 | William George Robinson