Explainable deep learning for advanced medical image analysis in computer-aided diagnosis

Lead Research Organisation: University of Strathclyde
Department Name: Electronic and Electrical Engineering

Abstract

For effective modelling of the ventricles and myocardium from CMR images, accurate extraction of the myocardium and quantitative pathological measurements from multi-sequence images is crucial, especially for patients suffering from myocardial infarction (MI). For example, cine images can be used to delineate ventricular boundaries within one cardiac cycle, T2-mapping can be used to estimate the area at risk, T1-mapping can provide quantitative measurements of extracellular volume or fibrosis, and Late Gadolinium Enhanced (LGE) images provide essential information on infarcted myocardium. Sample images are provided at the end.

Although many advanced methods have been developed for segmenting cine images, such as active contours, watershed, level set, graph cut, and deep-learning approaches, the segmentation of LGE images remains challenging and is not well studied because of low resolution, poor signal-to-noise ratio, heterogeneous intensity, and unclear boundaries caused by partial volume effects. As a result, extraction of infarcted myocardium is usually done manually by experienced experts, which is time-consuming, tedious and subject to inter- and intra-observer variation. For this reason, MICCAI is organising a challenge on LGE segmentation this year, the MS-CMRSeg competition. Moreover, integrating LGE images with cine images is not trivial because of different imaging times and motion artefacts, yet it is a necessary step in modelling MI patients.
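As a hedged illustration of the classical segmentation family mentioned above, the sketch below applies a region-based level-set variant (morphological Chan-Vese from scikit-image) to a single short-axis cine slice. The file name, iteration count and smoothing value are assumptions for illustration only, not part of the project.

```python
import numpy as np
from skimage import io, img_as_float
from skimage.segmentation import morphological_chan_vese

# Hypothetical 2D short-axis cine frame (any grayscale slice would do).
cine_slice = img_as_float(io.imread("cine_slice.png", as_gray=True))

# Region-based level set: the contour evolves towards intensity-homogeneous
# regions, which works on high-contrast cine but tends to struggle on LGE
# images with faint boundaries and heterogeneous intensity.
# The second argument is the iteration count (passed positionally because its
# keyword name differs across scikit-image versions).
mask = morphological_chan_vese(cine_slice, 100,
                               init_level_set="checkerboard", smoothing=3)

print("segmented foreground pixels:", int(np.count_nonzero(mask)))
```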

In existing modelling frameworks, both cine and LGE images are manually delineated, and the integration of LGE into cine is based on a simple affine transformation. This time-consuming process limits studies on large cohorts of MI patients, and the large inter-observer variation further reduces the predictive capability of the resulting models. Therefore, accurately delineating the ventricles and infarcted myocardium from multi-sequence CMR images is very important, and can remarkably enhance the subsequent modelling and other diagnosis tasks.
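To make the affine integration step concrete, here is a minimal sketch of aligning an LGE image to a cine reference image with SimpleITK, using a mutual-information metric because the two sequences have different intensity characteristics. The file names are placeholders and the optimiser settings are assumptions; this is not the registration pipeline of the existing frameworks.

```python
import SimpleITK as sitk

# Placeholder file names; in practice these would be the cine reference frame
# and the corresponding LGE acquisition for one patient.
fixed = sitk.ReadImage("cine_reference.nii.gz", sitk.sitkFloat32)
moving = sitk.ReadImage("lge_image.nii.gz", sitk.sitkFloat32)

# Initialise an affine transform by aligning the geometric centres.
initial = sitk.CenteredTransformInitializer(
    fixed, moving, sitk.AffineTransform(fixed.GetDimension()),
    sitk.CenteredTransformInitializerFilter.GEOMETRY)

reg = sitk.ImageRegistrationMethod()
# Mattes mutual information copes with the different intensity distributions
# of cine and LGE (a mono-modal metric such as mean squares would not).
reg.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
reg.SetMetricSamplingStrategy(reg.RANDOM)
reg.SetMetricSamplingPercentage(0.2)
reg.SetInterpolator(sitk.sitkLinear)
reg.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=200)
reg.SetOptimizerScalesFromPhysicalShift()
reg.SetInitialTransform(initial, inPlace=False)

affine = reg.Execute(fixed, moving)

# Resample the LGE image onto the cine grid so that infarct information can
# be carried over into the cine-based model.
aligned = sitk.Resample(moving, fixed, affine, sitk.sitkLinear, 0.0,
                        moving.GetPixelID())
sitk.WriteImage(aligned, "lge_aligned_to_cine.nii.gz")
```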

The project will exploit more prior knowledge, either by considering style transfer between data of different modalities or by extracting more contextual features through additional constraints, e.g., shape or contour. In practical applications, it is relatively straightforward to collect cine images and their annotations. To this end, an efficient style transfer method, for example a Generative Adversarial Network (GAN) or a Variational Autoencoder (VAE), can generate a realistic synthetic LGE training set by combining the annotated cine images and unannotated LGE images. The generated data can enlarge the training set and improve the generalisation ability of the trained model. To further improve efficiency and interpret the designed network more effectively, this project will also incorporate more contextual features by designing new regularisation terms such as shape or contour constraints. By introducing knowledge learnt from conventional methods and the medical community, the designed network can be further refined for increased robustness and feasibility.
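The two ideas above can be sketched in code. First, a hedged, minimal example of the style-transfer idea: a toy GAN in plain PyTorch (not the project's actual architecture) that maps cine images towards LGE appearance, so that the existing cine annotations can be reused as labels for synthetic LGE training data. Network sizes, image shapes and the random tensors standing in for data are assumptions; a practical system would use a CycleGAN/VAE-style model with additional consistency losses.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Tiny cine -> synthetic-LGE translator (placeholder architecture)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(32, 1, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Patch-level real/fake classifier for LGE appearance."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, stride=2, padding=1),
        )
    def forward(self, x):
        return self.net(x)

gen, disc = Generator(), Discriminator()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4, betas=(0.5, 0.999))
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4, betas=(0.5, 0.999))
adv_loss = nn.BCEWithLogitsLoss()

# Random tensors standing in for one annotated cine batch and one
# unannotated LGE batch (unpaired).
cine = torch.rand(4, 1, 128, 128)
real_lge = torch.rand(4, 1, 128, 128)

# --- Discriminator step: real LGE vs. synthetic LGE generated from cine ---
fake_lge = gen(cine).detach()
real_logits, fake_logits = disc(real_lge), disc(fake_lge)
d_loss = adv_loss(real_logits, torch.ones_like(real_logits)) + \
         adv_loss(fake_logits, torch.zeros_like(fake_logits))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# --- Generator step: make synthetic LGE indistinguishable from real LGE ---
# The cine ground-truth masks are carried over to the synthetic LGE images,
# enlarging the annotated LGE training set.
fake_logits = disc(gen(cine))
g_loss = adv_loss(fake_logits, torch.ones_like(fake_logits))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(f"d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")
```

Second, a hedged sketch of adding a shape/contour regularisation term to a standard segmentation loss: a soft Dice data term plus a total-variation-style penalty that approximates contour length and discourages ragged, anatomically implausible boundaries. The weighting and the exact form of the penalty are illustrative assumptions, not the project's final regularisers.

```python
import torch

def soft_dice_loss(prob, target, eps=1e-6):
    """Soft Dice loss on predicted probabilities (overlap-based data term)."""
    inter = (prob * target).sum(dim=(2, 3))
    union = prob.sum(dim=(2, 3)) + target.sum(dim=(2, 3))
    return 1.0 - ((2.0 * inter + eps) / (union + eps)).mean()

def contour_penalty(prob):
    """Total-variation-style term approximating the length of the predicted
    contour; penalising it discourages ragged, implausible boundaries."""
    dh = torch.abs(prob[:, :, 1:, :] - prob[:, :, :-1, :]).mean()
    dw = torch.abs(prob[:, :, :, 1:] - prob[:, :, :, :-1]).mean()
    return dh + dw

def regularised_seg_loss(logits, target, lam=0.1):
    prob = torch.sigmoid(logits)
    return soft_dice_loss(prob, target) + lam * contour_penalty(prob)

# Toy usage: random tensors standing in for network output and labels.
logits = torch.randn(2, 1, 128, 128, requires_grad=True)
masks = (torch.rand(2, 1, 128, 128) > 0.5).float()
loss = regularised_seg_loss(logits, masks)
loss.backward()
print(f"loss={loss.item():.3f}")
```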

Publications


Studentship Projects

Project Reference: EP/T517938/1; Start: 01/10/2020; End: 30/09/2025
Project Reference: 2431268; Relationship: Studentship; Related To: EP/T517938/1; Start: 01/10/2020; End: 31/03/2024; Student Name: Alexander Ulrichsen