Explainable AI for diagnosing and treating cardiovascular disease

Lead Research Organisation: King's College London
Department Name: Imaging & Biomedical Engineering

Abstract

Aim of the PhD Project:

Heart disease is the number one killer worldwide.
AI models can automatically diagnose disease, but they lack explanatory power.
This project aims to develop an AI tool for diagnosis and treatment planning in cardiology that can explain its decisions to cardiologists.

Project Description / Background:

The use of artificial intelligence (AI), and specifically deep learning, for diagnosis and treatment planning in cardiology is an active research area [1,2]. However, whilst deep learning techniques have produced impressive results, a significant problem remains. The techniques that produce the most accurate results often lack a feature that is essential for the clinical acceptance of new technology: explanatory power. Put simply, most deep learning models can make predictions but cannot explain in human-interpretable terms how a prediction was arrived at. Without such explanations, in many applications clinicians will be reluctant to base clinical decisions upon recommendations from such "black-box" models.

In this project we focus on deep learning models that take images (and possibly other clinical data) as input. Producing explanations from image-based deep learning models is a significant challenge. In the literature, most attempts at "interpretable machine learning" or "explainable AI" have followed one of two approaches: (1) visualising the inside of the "black box", e.g. using "saliency maps" that highlight the areas of the input image that were most important to the decision, or (2) training a simpler model that may be more readily interpretable. Both approaches are likely to be inadequate in many medical applications. For example, in cardiology, which is our focus in this project, an "explanation" that will be acceptable to a cardiologist is likely to require information about pathological processes and/or concepts such as tissue properties and electrical/mechanical activation patterns.
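As a concrete illustration of approach (1), a basic gradient-based saliency map can be computed as in the sketch below. This is a minimal example only, assuming a trained PyTorch image classifier and a single input image tensor of shape (1, channels, height, width); the names model and image are placeholders and this is not code from the project itself.

import torch

def saliency_map(model: torch.nn.Module, image: torch.Tensor) -> torch.Tensor:
    """Absolute input gradient for the top predicted class (a basic saliency map)."""
    model.eval()
    image = image.detach().clone().requires_grad_(True)  # track gradients w.r.t. the input pixels
    logits = model(image)                                # forward pass, shape (1, num_classes)
    score = logits[0, logits.argmax(dim=1)]              # score of the predicted class
    score.backward()                                     # back-propagate the class score to the input
    # Pixel-wise importance: gradient magnitude, reduced over the channel dimension
    return image.grad.abs().max(dim=1).values.squeeze(0)

The resulting map can be overlaid on the input image to show which regions drove the prediction, although, as noted above, such maps rarely amount to a clinically meaningful explanation on their own.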

A key challenge will be to find ways of linking the model's automated decision with "higher level" human-interpretable concepts. In this project, we will investigate ways of making these links in the application of patient diagnosis, stratification and treatment planning in heart failure. One promising area that has recently emerged from the computer vision literature is the investigation of ways of querying the importance of human-interpretable concepts to deep learning models [3]. We have recently started to apply these methods in cardiology with highly promising initial results [4]. Other interesting avenues for exploration include methods that incorporate explanations into the training objective [5], as well as ways of putting humans (i.e. clinicians) "in the loop" of the training of deep learning models [6]. This type of approach could be used to encourage the deep learning model to learn features that are clinically meaningful, effectively creating a dialogue between clinicians and deep learning models.
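To make the concept-querying idea of [3] concrete, the sketch below follows the general recipe of concept activation vectors (CAVs): a linear classifier is fitted to separate intermediate-layer activations of images containing a human-interpretable concept from activations of random images, and the sensitivity of a class score to that concept is the directional derivative along the resulting vector. The inputs (concept_acts, random_acts, score_grad) are assumed to have been extracted from a trained network beforehand; this is an illustrative outline under those assumptions, not the project's implementation.

import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_cav(concept_acts: np.ndarray, random_acts: np.ndarray) -> np.ndarray:
    """Fit a linear classifier separating 'concept' from 'random' activations at a
    chosen layer; the normalised normal of its decision boundary is the CAV."""
    X = np.concatenate([concept_acts, random_acts])
    y = np.concatenate([np.ones(len(concept_acts)), np.zeros(len(random_acts))])
    cav = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
    return cav / np.linalg.norm(cav)

def concept_sensitivity(score_grad: np.ndarray, cav: np.ndarray) -> float:
    """Directional derivative of the class score along the CAV; a positive value
    means the concept pushes the model towards that class for this example."""
    return float(np.dot(score_grad, cav))

Aggregated over a test set, the fraction of examples with positive sensitivity gives a per-concept importance score, which is the kind of higher-level, clinician-facing quantity that this project aims to build on.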

There are many intriguing directions to explore in this field that remain relatively untouched in the medical domain, and the potential for novelty is high. Our ultimate aim is to produce a computer-aided decision-support tool to assist cardiologists in stratifying patients with heart failure and planning their treatment. The tool would act like a "trusted colleague" or "second reader" that the cardiologist could consult for its opinion on difficult cases, as well as the reasoning behind that opinion. This is a highly ambitious aim and this project represents the first part of that journey, but if successful the impact could be substantial.

Publications


Studentship Projects

Project Reference   Relationship   Related To     Start        End          Student Name
EP/S022104/1                                      01/10/2019   31/03/2028
2442177             Studentship    EP/S022104/1   01/10/2020   30/09/2024   Robin Andlauer