Machine Learning and Medical Image Analysis for Point-of-Care Ultrasound Systems

Lead Research Organisation: University of Oxford
Department Name: Engineering Science

Abstract

Access to diagnostic ultrasound (US) in low- and middle-income countries (LMICs) is impeded by a lack of experienced sonographers and of the expertise required to scan well. Furthermore, conventional US systems carry high equipment costs, reducing their feasibility for use in such environments. Worldwide, 99% of maternal deaths occur in LMICs, indicating an important unmet clinical need for US-based diagnosis [1]. Recent state-of-the-art advances have produced low-cost US devices that show high potential for use in point-of-care (POC) scenarios. Similarly, advances in machine learning architectures now offer superior performance over predicate solutions and open new possibilities in computer pattern recognition of 2D US images and video. There is real potential for automated computer analysis of US to be deployed within POC US systems in countries such as India, as well as in Africa, thus addressing the skills crisis and diagnostic need.

The proposed doctoral research aims to develop and evaluate a novel automated image analysis framework that utilises a simplified US scanning protocol of multiple scanning sweeps to provide a clinical decision support tool for healthcare workers unfamiliar with ultrasound. 2D US images and video are rich in spatio-temporal features and acoustic patterns, and the research will consider how these can be extracted and combined within machine learning architectures to reveal important pieces of clinical information. A first key challenge of this project will be to translate the clinical criteria, obtained from the literature and clinical collaborators, into a machine learning framework. A second challenge will be to design and implement an appropriate machine learning architecture for the chosen clinical tasks; a deep learning approach is the most likely feasible method. Finally, feasibility studies will be performed in collaboration with clinical partners in the UK and India to evaluate the developed methods and consider their potential usability in practice.
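As a purely illustrative sketch of what such a deep learning architecture could look like (this is not the project's method; the class labels, input size, and network shape below are all assumptions), a small convolutional network classifying individual frames of a video sweep might be written as follows:

```python
# Illustrative sketch only: a simple frame-level CNN classifier for ultrasound
# sweep frames. The class labels and input size are hypothetical assumptions,
# not the architecture used in the project.
import torch
import torch.nn as nn

class SweepFrameClassifier(nn.Module):
    def __init__(self, num_classes: int = 3):  # e.g. background / placenta / bladder (assumed labels)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, H, W) grayscale ultrasound frames
        z = self.features(x).flatten(1)
        return self.classifier(z)

# Usage on a dummy batch of 224x224 frames
model = SweepFrameClassifier()
logits = model(torch.randn(4, 1, 224, 224))  # (4, num_classes)
```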

The research falls within the EPSRC's 'Healthcare Technologies' and 'Engineering' research themes. In particular, the research develops work within 'Image and Vision Computing', 'Human-Computer Interaction', and 'Medical Imaging' sub-themes. The research also fits into the EPSRC Grand Challenges through 'Transforming Community Health and Care' and 'Optimising Treatment'.

The doctoral research will be conducted in association with the GCRF-funded CALOPUS Project (Computer-Assisted Low-cost Point-of-Care Ultrasound: EP/R013853/1), which is a joint collaboration between the Institute of Biomedical Engineering (IBME) and the Nuffield Department of Women's and Reproductive Health, University of Oxford, and the Translational Health Science and Technology Institute (THSTI), Faridabad, India.

Publications


Studentship Projects

Project Reference  Relationship  Related To    Start       End         Student Name
EP/R513295/1                                   01/10/2018  30/09/2023
2288295            Studentship   EP/R513295/1  01/10/2019  31/03/2023  Alexander Gleed
 
Description We have explored ways in which a user unfamiliar with ultrasound can assess the placenta location. We have developed an ultrasound image analysis algorithm that uses machine learning to identify and label the placenta and maternal bladder in ultrasound video. The video is obtained by a user who performs a simple U-shaped sweep low across the maternal abdomen. The algorithm automatically provides an assistive video overlay that aids the user in assessing the placenta location by highlighting key anatomical structures and landmarks. This may help a user to risk-stratify low-lying placentae, enabling women at risk to seek the appropriate level of care. In the process of building this algorithm, we have explored the 2D shape of the placenta and its implications for the U-shaped video sweep and the generated assistive video overlay.
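The sketch below illustrates, in general terms, how per-frame segmentation masks could be blended onto grayscale ultrasound frames to produce an assistive overlay of this kind; the label convention, colours, and blending weight are assumptions for illustration rather than the project's actual implementation:

```python
# Illustrative sketch only: overlaying per-frame segmentation masks
# (e.g. placenta and bladder) onto ultrasound frames to form an assistive
# video overlay. The colours, label values and blending weight are
# assumptions, not the project's actual implementation.
import numpy as np

# Assumed label convention for a per-pixel mask:
# 0 = background, 1 = placenta, 2 = maternal bladder.
LABEL_COLOURS = {1: (255, 0, 0), 2: (0, 0, 255)}  # red, blue (RGB)

def overlay_mask(frame_gray: np.ndarray, mask: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """Blend coloured label regions into a grayscale ultrasound frame."""
    rgb = np.stack([frame_gray] * 3, axis=-1).astype(np.float32)
    for label, colour in LABEL_COLOURS.items():
        region = mask == label
        rgb[region] = (1 - alpha) * rgb[region] + alpha * np.array(colour, dtype=np.float32)
    return rgb.astype(np.uint8)

# Usage: one 256x256 frame with a dummy mask in place of a real model output.
frame = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
mask = np.zeros((256, 256), dtype=np.uint8)
mask[100:150, 60:200] = 1   # stand-in placenta region
mask[180:220, 90:160] = 2   # stand-in bladder region
overlay = overlay_mask(frame, mask)  # (256, 256, 3) RGB frame for display
```

In practice the dummy mask would be replaced by the per-frame output of a trained segmentation model applied to each frame of the U-shaped sweep.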

We have also investigated novel techniques to effectively combine multiple ultrasound videos. The videos are obtained from simple ultrasound video sweeps on subjects. Anatomical structures are seen across multiple video sweeps, and a key challenge is how to combine this information without additional constraints, such as the position of the ultrasound transducer (which is out of scope for this project). This early work has been successful so far: we have developed relational models (graphs) with machine learning that learn how to interpret multiple video signals for clinical and computational benefit.
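As a rough illustration of a relational (graph) model over multiple sweeps (not the method developed in the project; the embedding size, adjacency, and pooling are assumed), nodes could hold frame embeddings drawn from several sweeps, with a single round of message passing pooling them into one combined representation:

```python
# Illustrative sketch only: a minimal relational (graph) model that combines
# frame embeddings from multiple ultrasound video sweeps with one round of
# message passing. The embedding size, adjacency and pooling are assumptions,
# not the method developed in the project.
import torch
import torch.nn as nn

class SweepGraphPool(nn.Module):
    def __init__(self, dim: int = 128):
        super().__init__()
        self.message = nn.Linear(dim, dim)     # transform neighbour features
        self.update = nn.Linear(2 * dim, dim)  # combine node + aggregated messages

    def forward(self, node_feats: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # node_feats: (N, dim) embeddings of frames/clips drawn from several sweeps
        # adj: (N, N) binary adjacency linking related frames across sweeps
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        msgs = (adj @ self.message(node_feats)) / deg      # mean over neighbours
        nodes = torch.relu(self.update(torch.cat([node_feats, msgs], dim=-1)))
        return nodes.mean(dim=0)                           # pooled, sweep-level representation

# Usage: 10 frame embeddings connected by a dummy fully connected graph.
feats = torch.randn(10, 128)
adj = torch.ones(10, 10) - torch.eye(10)
summary = SweepGraphPool()(feats, adj)  # (128,) combined representation
```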

The award objectives were to explore and develop ways in which ultrasound can be simplified and understood, with the aid of clinical ultrasound video sweep protocols and machine learning algorithms. Our work has been successful thus far, resulting in the two research projects described above and a doctoral thesis that will be complete by the end of this year. This fund has contributed to the doctoral training of the future research workforce.
Exploitation Route Our work is of interest to any group seeking to simplify the use of ultrasound. For example, several academic research groups worldwide are investigating how to simplify the use of ultrasound, often in partnership with industrial groups that provide ultrasound products. A number of industry groups also design and sell products that simplify the use of ultrasound.
Sectors Digital/Communication/Information Technologies (including Software), Healthcare

 
Description Interdisciplinary collaboration with the Translational Health Science and Technology Institute, India through the CALOPUS project 
Organisation Translational Health Science and Technology Institute
Country India 
Sector Public 
PI Contribution I am a team member in the interdisciplinary CALOPUS project, which is a partnership between the University of Oxford, UK (Professor J. Alison Noble and Professor Aris T. Papageorghiou) and the Translational Health Science and Technology Institute, India (Professor Shinjini Bhatnagar, Dr Ramachandran Thiruvengadam, Dr Bapu Koundinya Desiraju). As a doctoral research student, I have investigated several research topics within this project. I have developed an ultrasound image analysis algorithm that provides automatic image guidance to a user to assess the placenta location from a simple U-shaped ultrasound video sweep. I have also explored ways in which multiple ultrasound video sweeps can be combined to achieve computational understanding of ultrasound video, and I have participated in a series of international workshops organised by our collaboration around the themes of AI and maternal-child health.
Collaborator Contribution The Translational Health Science and Technology Institute partners have provided cross-mentoring on healthcare issues related to AI and maternal-child health. They have facilitated UK team members' participation in a series of international workshops. They have externally validated several ultrasound image analysis algorithms developed in the UK, using the data science expertise at THSTI.
Impact Several publications have resulted from this collaboration.
Start Year 2019