Dynamic Peri-operative Cerenkov Luminescence Imaging for Robotic Assisted Surgery (EDCLIRS)

Lead Research Organisation: University College London
Department Name: Computer Science

Abstract

Prostate cancer affects about one in seven men and is fatal in about 20% of cases, making it the second most common cause of cancer death in men after lung cancer. Surgical intervention seeks to remove sufficient malignant tissue without leaving residual disease that can lead to recurrence. At the same time, new surgical techniques are emerging that minimise the impact on healthy tissue, preserving nerves and quality of life. Currently, however, the assessment of whether all cancerous tissue has been removed depends on evaluation in a pathology laboratory after the operation, by which point the surgery may not have been curative or may need to be repeated. It is therefore crucial to develop technology that allows the surgeon to make decisions during surgery that reduce the chance of recurrent disease. One well-established way of identifying cancerous tissue is to inject a radioactively labelled tracer that accumulates differently in malignant and normal tissue. This tracer can be imaged using positron emission tomography, but that technology cannot be used within a surgical setting. Recently, a new methodology has been developed that allows the radioactive tracer to be imaged with ordinary cameras, by exploiting the Cerenkov light, in the visible spectrum, emitted by the radioactive particles. This phenomenon opens the possibility of placing cameras on endoscopes and combining them with existing methods for robotic-assisted surgery. In this project we will pursue this idea: we will track the movement of the cameras and the patient, incorporate these motions into a model of light emission and detection, and realise a real-time dynamic imaging system that assists the surgeon in excising all cancerous tissue while preserving as much healthy tissue as possible.
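
To make the intended pipeline concrete, the following is a minimal sketch, assuming a simple linear light-emission/detection model and known (tracked) camera poses; the array sizes, the placeholder projection_for_pose model and the gradient update are illustrative assumptions, not the system proposed here.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_pixels, n_frames = 200, 50, 30
true_emission = np.maximum(rng.normal(size=n_voxels), 0.0)  # non-negative tracer distribution

def projection_for_pose(frame_idx):
    """Hypothetical stand-in for a Cerenkov light emission/detection model
    evaluated at the tracked camera and patient pose for this frame."""
    return np.abs(rng.normal(size=(n_pixels, n_voxels))) / n_voxels

estimate = np.zeros(n_voxels)
for t in range(n_frames):
    A_t = projection_for_pose(t)                                    # pose-dependent forward model
    y_t = A_t @ true_emission + 0.01 * rng.normal(size=n_pixels)    # simulated camera frame
    residual = y_t - A_t @ estimate
    estimate += A_t.T @ residual / np.linalg.norm(A_t, ord=2) ** 2  # gradient step on the data fit
    estimate = np.maximum(estimate, 0.0)                            # tracer activity is non-negative

print("relative error:", np.linalg.norm(estimate - true_emission) / np.linalg.norm(true_emission))
```

In the real system the forward model would come from the physics of Cerenkov light emission and detection, and the poses from tracking of the endoscopic cameras and the patient.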

Planned Impact

Impact on Healthcare

The primary intention of the grant and the proposed Translational Alliance is to enable a real-time imaging technique as an adjunct to existing peri-operative technology, with the potential to reduce the expensive and invasive need for readmission to surgery, as well as disease recurrence, co-morbidity and death. We expect this objective, if achieved, to have a significant impact on reducing mortality in the 41,736 people diagnosed with prostate cancer every year in the UK (Cancer Research UK, 2011). Beyond this ambitious aim, the long-term impact of the technology we develop has much wider potential, with possible applications across a range of major cancer indications. These can only be targeted through a long-term collaboration between the project partners, facilitated through an EPSRC Translational Alliance.

Impact on Medical Technology

The systems we are developing are at the cutting edge of medical imaging technology. Through the alliance with LightPoint Medical we will have a much faster translational path than in typical research projects, and we expect to collaborate closely with this company to realise next-generation peri-operative imaging technology. The proposed technology directly addresses the EPSRC grand challenges in Healthcare Technologies by developing new ways to enhance efficacy and reduce risk to patients during surgery. Our cross-cutting technology aims to deliver new capabilities to the surgeon, allowing repeatable precision with minimal invasiveness through novel imaging technologies, by enabling imaging of dynamic tissues without signal loss through spatiotemporal low-rank plus sparse plus motion modelling. The project and alliance will benefit from our recent EPSRC Capital Equipment Award in Robotics and Autonomous Systems, which will provide the robotic architectures and laboratory platforms for developing the proposed computational imaging research.
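
As an illustration of the low-rank plus sparse idea referred to above, the following is a minimal numpy sketch assuming a pixels-by-time data matrix and simple alternating thresholding; the thresholds and toy data are arbitrary choices for the example, not the modelling developed in the project.

```python
import numpy as np

def low_rank_plus_sparse(D, lam=0.1, mu=1.0, n_iter=50):
    """Split a data matrix D (pixels x time) into L (low-rank background)
    plus S (sparse dynamic changes) by alternating thresholding; an
    illustrative scheme only."""
    L = np.zeros_like(D)
    S = np.zeros_like(D)
    for _ in range(n_iter):
        # Singular-value thresholding of the residual gives the low-rank part.
        U, sv, Vt = np.linalg.svd(D - S, full_matrices=False)
        L = (U * np.maximum(sv - mu, 0.0)) @ Vt
        # Soft thresholding of the remaining residual gives the sparse part.
        R = D - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S

# Toy example: a static background plus a brief transient signal.
rng = np.random.default_rng(1)
background = np.outer(rng.random(64), np.ones(40))   # rank-1 "anatomy"
events = np.zeros((64, 40))
events[10, 5:10] = 2.0                               # short-lived dynamic signal
L, S = low_rank_plus_sparse(background + events)
print("recovered dynamic energy:", np.abs(S[10, 5:10]).sum())
```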

Academic Impact

The incorporation of computer vision techniques within real-time surgical procedures is a highly active and challenging area that is strongly represented at international conferences such as MICCAI and IPCAI. Including tomographic medical imaging modalities through registration and augmented-reality visualisation is a more difficult problem, but notable advances have been made in recent years. Active imaging and new modalities for molecular imaging during surgery are exciting new developments in the field that can have a disruptive impact. To achieve this, the acceleration of image reconstruction through spatiotemporal subsampling is a topic of strong interest, with connections to the very active area of Compressed Sensing. What we are proposing is an intersection of these areas, incorporating highly novel non-linear motion models, coupled imaging physics, and high-performance model-based reconstruction algorithms supported by a robotic platform. We will disseminate these ideas through publication in high-impact journals and presentation at the leading conferences in the field.
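
A minimal sketch of the Compressed Sensing connection, under the assumption of a random sensing operator and a sparse unknown (both chosen purely for illustration): the iterative soft-thresholding algorithm (ISTA) recovers a signal from far fewer measurements than unknowns.

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, k = 400, 120, 10                      # unknowns, measurements, non-zero coefficients

x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
A = rng.normal(size=(m, n)) / np.sqrt(m)    # random subsampling/sensing operator
y = A @ x_true                              # undersampled measurements

# ISTA: gradient step on the data term, then soft thresholding to promote sparsity.
lam = 0.01
step = 1.0 / np.linalg.norm(A, ord=2) ** 2
x = np.zeros(n)
for _ in range(500):
    x = x + step * A.T @ (y - A @ x)
    x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```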

Impact on wider research community

The cross-cutting combination of robotics and imaging methodologies, through i) data fusion, ii) multimodality systems, and iii) coupled-physics imaging, is recognised as a highly active topic, not only in medical imaging but also more widely, e.g. in seismology and non-destructive testing. Through the UCL Centre for Medical Image Computing and the UCL Centre for Inverse Problems we are active in encouraging cross-fertilisation between such different application areas. Furthermore, the recently initiated Alan Turing Institute, which will be located in close proximity to UCL, will allow participation in wider programmes of research in large-scale data processing, combining state-of-the-art theoretical developments in mathematics, statistics and machine learning. This will provide a rapid and effective mechanism for presenting our results to the wider UK community and internationally.

Publications

Arjas A (2022) Neural Network Kalman Filtering for 3-D Object Tracking From Linear Array Ultrasound Data. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control
Arridge S (2019) Solving inverse problems using data-driven models. Acta Numerica
Arridge S (2020) Networks for Nonlinear Diffusion Problems in Imaging. Journal of Mathematical Imaging and Vision
Davis SPX (2018) Slice-illuminated optical projection tomography. Optics Letters
Di Sciacca G (2021) Enhanced diffuse optical tomographic reconstruction using concurrent ultrasound information. Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences
Hauptmann A (2020) Multi-Scale Learned Iterative Reconstruction. IEEE Transactions on Computational Imaging
Jones G (2017) Bayesian Estimation of Intrinsic Tissue Oxygenation and Perfusion From RGB Images. IEEE Transactions on Medical Imaging
Lunz S (2021) On Learned Operator Correction in Inverse Problems. SIAM Journal on Imaging Sciences
Maneas E (2022) Deep Learning for Instrumented Ultrasonic Tracking: From Synthetic Training Data to In Vivo Application. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control
Nikitichev DI (2017) Medical-grade Sterilizable Target for Fluid-immersed Fetoscope Optical Distortion Calibration. Journal of Visualized Experiments (JoVE)
Pachtrachai K (2018) CHESS: Calibrating the Hand-Eye Matrix With Screw Constraints and Synchronization. IEEE Robotics and Automation Letters
Pachtrachai K (2019) Hand-Eye Calibration With a Remote Centre of Motion. IEEE Robotics and Automation Letters
Robu MR (2017) Intelligent viewpoint selection for efficient CT to video registration in laparoscopic liver surgery. International Journal of Computer Assisted Radiology and Surgery

 
Description We investigated a range of computer vision techniques to track and localise imaging probes in the surgical video field; these have shown promise for enhancing surgical navigation.

We developed deep-learning-based methods for spatiotemporal image processing, which allow accelerated acquisition from undersampled data.
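
A schematic sketch of learned iterative reconstruction of undersampled data, in the spirit described above; the learned_update function is a stand-in for a trained network and the linear forward operator is assumed, so this illustrates the structure of such methods rather than the specific methods developed in the project.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 256, 64                              # image size, number of undersampled measurements
A = rng.normal(size=(m, n)) / np.sqrt(m)    # assumed linear, undersampled forward operator
x_true = np.convolve(rng.normal(size=n), np.ones(9) / 9, mode="same")  # smooth toy "image"
y = A @ x_true

def learned_update(x, grad):
    """Placeholder for a trained network mapping (estimate, gradient) to an
    updated estimate; here just a damped gradient step plus light smoothing."""
    step = 0.8 / np.linalg.norm(A, ord=2) ** 2
    x_new = x - step * grad
    return np.convolve(x_new, np.ones(3) / 3, mode="same")  # crude stand-in for a learned prior

x = np.zeros(n)
for _ in range(100):
    grad = A.T @ (A @ x - y)                # physics-based data-consistency gradient
    x = learned_update(x, grad)

print("relative error:", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```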

3D beta-tomosynthesis was investigated but was found not to yield sufficient signal for tumour localisation within a solid organ.
Exploitation Route Methods developed for spatiotemporal analysis of dynamic data have wide application in other imaging modalities such as MRI, CT and ultrasound.
The use of beta probes is a promising commercial direction for robotic-assisted surgery.
Sectors Healthcare, Pharmaceuticals and Medical Biotechnology

 
Description The possibility of tomosynthesis of beta-emitting sources was investigated. The industrial partner (LightPoint) has produced a CE-approved imaging probe for intra-operative use, which is now on the market and is being tested in clinical trials in the UK and the EU. The partner has secured additional funding from Innovate UK and the EC to conduct trials and to develop further image-navigation technology for the new device.
First Year Of Impact 2019
Sector Healthcare
Impact Types Economic

 
Description Actuated Robotic Imaging Skins
Amount £2,780,000 (GBP)
Organisation Royal Academy of Engineering 
Sector Charity/Non Profit
Country United Kingdom
Start 10/2019 
End 09/2029