Concept-based explanations for medical imaging

Lead Research Organisation: University of Oxford
Department Name: Health Data Science CDT


The black box nature of deep learning models reduces their potential for clinical use: it is difficult to validate, and therefore trust, deep learning model predictions. Interpretability methods aim to address this by explaining or presenting model predictions in terms understandable to a human. Explanations of erroneous predictions can suggest improvements and aid in algorithm debugging. They also provide a way to determine whether a model conforms to ethical standards, by making it possible to check for bias towards concepts such as race.

Recent work in machine learning and computer vision has used concept-based explanations, where concepts are defined in the activation space of the neural network. Model sensitivity to those concepts is explored, rather than to individual pixels in the input space as in common techniques such as saliency maps. However, to date these techniques have not been widely applied to medical imaging, where, particularly in medical ultrasound, semantically different concepts can be similar in appearance. By providing interpretable deep learning we aim to increase the uptake of deep learning in medical imaging, with clinicians being more likely to trust and use the technology.
In this doctoral research, we propose to use concept-based interpretability methods, such as testing with concept activation vectors (TCAV), to explore which concepts a deep convolutional neural network uses in its predictions. Using an initial exemplar problem of gestational age (GA) estimation of a fetus from ultrasound scans, we aim to explore whether the presence and shape of specific structures in the fetal brain are important for predicting GA in ultrasound, as these are concepts a clinician would use. Several large datasets are available for this research, including the INTERGROWTH-21st dataset, containing 121,000 images approximately uniformly distributed across 13-42 weeks' gestation. First, we will use TCAV to determine whether a GA prediction model uses concepts similar to those a clinician would use, providing trust in its use. We plan to explore different methods of concept discovery and validation to make them suitable for this imaging modality. The research may require us to develop new metrics that measure how faithful the method is to the model's underlying behaviour and how well the concepts are represented within the network. New metrics may give us an increased understanding of when and where these methods succeed and fail, and can drive further method development. Additional image- and video-based prediction tasks may be considered as the research progresses to expand the "vocabulary" of explanations that can be offered.
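To make the TCAV idea concrete, the following is a minimal sketch of its core computation, not the implementation used in this project: a concept activation vector (CAV) is a direction in a layer's activation space separating concept examples from random examples (here approximated by a difference of means rather than the linear classifier of the original method), and the TCAV score is the fraction of inputs whose class prediction increases along that direction. All arrays below are synthetic stand-ins for real network activations and gradients.

```python
import numpy as np

def compute_cav(concept_acts, random_acts):
    """Concept Activation Vector: a unit direction in activation
    space pointing from random examples towards concept examples.
    A difference of means stands in for the usual linear classifier."""
    cav = concept_acts.mean(axis=0) - random_acts.mean(axis=0)
    return cav / np.linalg.norm(cav)

def tcav_score(gradients, cav):
    """Fraction of inputs whose class logit increases when the
    layer activations move in the concept direction, i.e. whose
    directional derivative along the CAV is positive."""
    directional_derivs = gradients @ cav
    return float(np.mean(directional_derivs > 0))

# Synthetic illustration: 50 concept and 50 random activations in a
# 128-dimensional layer, plus per-input gradients of a class logit
# with respect to that layer (all values made up for demonstration).
rng = np.random.default_rng(0)
concept_acts = rng.normal(1.0, 1.0, size=(50, 128))
random_acts = rng.normal(0.0, 1.0, size=(50, 128))
grads = rng.normal(0.2, 1.0, size=(200, 128))

cav = compute_cav(concept_acts, random_acts)
score = tcav_score(grads, cav)  # near 1.0 suggests sensitivity to the concept
print(f"TCAV score: {score:.2f}")
```

In practice the score is compared against scores computed from many random CAVs, with a statistical test, to rule out concepts the model is not genuinely sensitive to.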
The project will develop and improve the application and comparison of concept-based deep learning interpretability methods for medical imaging. These methods will provide a useful way to explore a model's behaviour and will be a valuable tool in increasing the uptake of deep learning image- and video-based prediction models in clinical care.
This project falls within the EPSRC Healthcare Technologies, ICT, and Artificial Intelligence and Robotics research areas.



Studentship Projects

Project Reference: EP/S02428X/1; Start: 01/04/2019; End: 30/09/2027
Project Reference: 2432736; Relationship: Studentship; Related To: EP/S02428X/1; Start: 01/10/2020; End: 30/09/2024; Student Name: Angus James Nicolson