Counterfactual Visual Explanations in Ophthalmic Imaging

Lead Research Organisation: University College London
Department Name: Institute of Health Informatics

Abstract

Neural networks, a type of machine learning model, can achieve above-human performance in classifying, or 'diagnosing', medical images. However, the inner workings of these networks are not readily interpretable to humans. Understanding why a neural network makes a particular classification decision is an important step towards getting machine learning systems deployed in the clinic. This project focuses on building counterfactual images, which in essence allow the user to ask 'what if' questions of a neural network in order to better understand its output.
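The core idea can be illustrated with a minimal sketch: starting from an input the model classifies one way, search for the smallest perturbation that pushes the prediction towards a different answer. The sketch below is purely illustrative and is not the project's method: it uses a toy logistic classifier with hypothetical weights in place of a real neural network, and a simple gradient-descent counterfactual search with a distance penalty.

```python
import numpy as np

# Toy stand-in for a trained classifier: a logistic model mapping a
# feature vector x to a 'disease' probability. Weights are hypothetical.
w = np.array([1.5, -2.0, 0.5])
b = -0.2

def predict(x):
    """Probability of the 'disease' class for input features x."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def counterfactual(x, target=0.4, lr=0.1, lam=0.1, steps=500):
    """Search for x' near x whose prediction approaches `target`.

    Minimises (f(x') - target)^2 + lam * ||x' - x||^2 by gradient
    descent, i.e. the smallest change that flips the decision.
    """
    xc = x.copy()
    for _ in range(steps):
        p = predict(xc)
        # gradient of the squared prediction error through the logistic
        grad_pred = 2.0 * (p - target) * p * (1.0 - p) * w
        # gradient of the proximity penalty keeping xc close to x
        grad_dist = 2.0 * lam * (xc - x)
        xc -= lr * (grad_pred + grad_dist)
    return xc

x = np.array([1.0, 0.2, 0.3])       # originally classified as 'disease'
xc = counterfactual(x)              # minimally edited counterfactual
```

Comparing `x` and `xc` then shows which features had to change, and by how much, to alter the model's decision; applied to images rather than feature vectors, the same principle yields a counterfactual image the clinician can inspect.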

The project nicely matches the stated aims of the centre:

1) Extracting more information from patient data to accelerate diagnosis:
The large collection of OCT data from patients at Moorfields is a rich resource with already demonstrated potential to accelerate diagnosis and improve its accuracy; it provides an ideal exemplar for maximising the potential of human-AI partnership.
2) Creating adaptive and flexible systems that improve the operation of healthcare organisations:
The human-AI partnership paradigm that motivates this project addresses this aim directly and has much wider potential beyond its demonstration in ophthalmology.
3) Delivering personalised and targeted treatments for patients:
Better diagnostic and referral decisions directly enable personalised treatment and care design.

Supervisors: Pearse Keane, Danny Alexander


Studentship Projects

Project Reference  Relationship  Related To    Start       End         Student Name
NE/W502716/1                                   01/04/2021  31/03/2022
2245851            Studentship   NE/W502716/1  01/10/2019  30/12/2023  Peter Woodward-Court