Learning-based multi-modal registration and data fusion for robust cancer tissue classification

Lead Research Organisation: University of Lincoln
Department Name: School of Computer Science

Abstract

This proposal aims to develop novel multi-modal image registration methods based on unsupervised learning and convolutional neural networks (CNNs). In contrast to classical methods, which run an iterative optimisation for each new pair of images, CNNs learn a registration function that computes the optimal transformation in a single pass through a heavily parallelised neural network, taking a fraction of a second on a GPU or a few seconds on a CPU (Fu et al. 2019). This speed-up is especially important given the tens to hundreds of sections that must be aligned in 3D studies. The project is broadly organised into the following three work packages:

1. Unsupervised registration for artefact-aware 3D histology reconstruction
The first challenge to address is intra-modality registration: the alignment of serial histology sections into a reconstructed 3D volume for subsequent visualisation. We will develop methods based on spatial transformer networks to perform sequential registration of serial histological sections, and CNNs to detect artefacts introduced by the sectioning process, such as tissue tearing, folding, compression, inconsistent staining and missing tissue. This information will be integrated into the registration workflow to flag regions of high deformability.
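The two ingredients of this work package can be sketched in NumPy: the resampling step that a spatial transformer network performs (here with a fixed affine map standing in for a network-predicted transformation), and a similarity term that down-weights pixels an artefact detector has flagged. Function names and the simple sum-of-squared-differences loss are illustrative, not the final design.

```python
import numpy as np

def warp_affine(image, A, b):
    """Warp a 2D image by sampling at affine-mapped coordinates x -> A @ x + b
    (the resampling step of a spatial transformer), using bilinear
    interpolation; out-of-bounds samples are filled with zeros."""
    h, w = image.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    tgt = np.stack([ys.ravel(), xs.ravel()]).astype(float)  # output coords
    y, x = A @ tgt + b[:, None]                             # source coords
    y0, x0 = np.floor(y).astype(int), np.floor(x).astype(int)
    dy, dx = y - y0, x - x0
    out = np.zeros(h * w)
    corners = [(0, 0, (1 - dy) * (1 - dx)), (0, 1, (1 - dy) * dx),
               (1, 0, dy * (1 - dx)), (1, 1, dy * dx)]
    for oy, ox, wgt in corners:
        yy, xx = y0 + oy, x0 + ox
        ok = (yy >= 0) & (yy < h) & (xx >= 0) & (xx < w)
        out[ok] += wgt[ok] * image[yy[ok], xx[ok]]
    return out.reshape(h, w)

def artefact_weighted_ssd(fixed, warped, artefact_mask):
    """Similarity term that down-weights pixels the artefact detector flags.
    Mask values lie in [0, 1]; 1 = confident artefact (tear, fold, ...)."""
    weights = 1.0 - artefact_mask
    return float(np.sum(weights * (fixed - warped) ** 2) / (np.sum(weights) + 1e-8))
```

In a learned workflow, `A` and `b` (or a dense deformation field) would come from the registration network, and the artefact mask from the detection CNN, so that damaged regions contribute little to the alignment objective.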

2. Representation learning for multi-modal image registration
Neural network architectures will be developed for unsupervised representation learning from different styles of data (histology, MSI, MRI), optimising for features that enable accurate multi-modal registration (based on known transformations) between the different imaging formats. Understanding which features are common across modalities may clarify which aspects of the data can be reliably compared, and could also lead to simpler models and registration workflows. The possibility of developing a modality-independent, learning-based registration workflow will also be explored. This will combine the outcomes of the innovations described above, with the aim of providing histology, MSI and MRI data from the same sample to the workflow and obtaining a set of transformations mapping each image into a common 3D space.
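One hedged sketch of how known transformations could supervise such features: embeddings of corresponding locations in two modalities (brought into correspondence via the known ground-truth transform) are pulled together, while mismatched locations are pushed apart. This toy hinge loss is purely illustrative of the idea; the actual encoders and objective are subjects of the research.

```python
import numpy as np

def correspondence_hinge_loss(feat_a, feat_b, margin=1.0):
    """Toy objective for registration-friendly multi-modal features.
    Rows of feat_a and feat_b are embeddings of the SAME locations in two
    modalities, aligned via a known ground-truth transformation.
    Matched pairs are pulled together; mismatched pairs (a simple roll of
    the rows) are pushed at least `margin` apart.
    feat_a, feat_b: (n_locations, n_features) arrays."""
    pos = np.sum((feat_a - feat_b) ** 2, axis=1)           # matched locations
    neg = np.sum((feat_a - np.roll(feat_b, 1, axis=0)) ** 2, axis=1)
    return float(np.mean(pos + np.maximum(0.0, margin - neg)))
```

Minimising such a loss over paired histology/MSI/MRI data would encourage each modality's encoder to map shared tissue structure to a common feature space, which is exactly what a multi-modal similarity metric needs.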

3. Comparison with state-of-the-art and release
We will evaluate the performance of the proposed method against classical iterative methods (elastix, NiftyReg, ANTs), as well as recent learning-based registration frameworks such as VoxelMorph (Balakrishnan et al. 2018). We envisage that the proposed methods will attract interest among biomedical researchers working on pre-clinical or post-surgical cancer studies, where ad hoc methods are frequently adopted due to the nuances of the study design. The registration method will also be integrated into an existing open-source framework for multi-modal image analysis (https://github.com/AlanRace/SpectralAnalysis) and released to the scientific community. This will further increase the impact of the developed methods by making them available in free, easy-to-use software, reducing the barrier to entry.
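One evaluation measure shared by essentially all of the frameworks above is the Dice overlap of anatomical labels propagated by each method's transformation; a minimal NumPy version (function name illustrative):

```python
import numpy as np

def dice_score(seg_a, seg_b):
    """Dice overlap between two binary segmentations, a standard way to
    compare how well different registration methods align labelled anatomy.
    Returns 1.0 for identical masks, 0.0 for disjoint ones."""
    a, b = np.asarray(seg_a, bool), np.asarray(seg_b, bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else float(2.0 * np.logical_and(a, b).sum() / denom)
```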

Publications


Studentship Projects

Project Reference  Relationship  Related To    Start       End         Student Name
EP/T518177/1                                   01/10/2020  30/04/2026
2604830            Studentship   EP/T518177/1  01/10/2021  30/04/2025  Kimberley Bird