Image guided surgery through spatio-temporal signal amplification

Lead Research Organisation: University College London
Department Name: Computer Science

Abstract

Through advances in instrumentation and high-resolution digital video, surgical techniques are becoming increasingly minimally invasive. Reducing the access trauma of surgery has many advantages for the patient, such as reduced hospitalisation, scarring, co-morbidity and post-operative pain. However, limiting the surgeon's access to the surgical site inevitably increases the complexity of operations. Clinically, it is crucial to enhance visualisation during minimally invasive surgery, and in particular to enable the surgeon to see structures underneath the exposed organ surface and to observe the functional characteristics of tissues. The availability of this information in real time during surgery can help the surgeon prevent damage to critical anatomical structures and preserve the viability of healthy tissues.

Information about the location of blood vessels is inherently embedded in the endoscopic video signal from minimally invasive surgery in the form of motion. This motion is easy to observe when the vessel is large and near the tissue surface. However, when the vessel is small and embedded within the tissue, the motion may be very subtle and invisible to the naked eye, because the human visual system is tuned to specific frequencies and motion amplitudes. Similar variations are present in the radiometric channels of endoscopic video, where colour fluctuations are surrogate measures of changes in tissue perfusion linked to the cardiac cycle. These subtle spatio-temporal video variations can be computationally detected and measured, and this constitutes the focus of the proposed project.
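
As a minimal sketch of this kind of amplification, assuming a temporally aligned clip stored as a NumPy array, the following Python fragment band-pass filters each pixel's intensity time series around a presumed cardiac band and adds the amplified result back onto the input. The band edges, filter order and gain are illustrative placeholders, not the project's actual parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def amplify_cardiac_band(frames, fps, f_lo=0.8, f_hi=2.0, gain=20.0):
    """Amplify subtle temporal variations in a cardiac frequency band.

    frames: (T, H, W) float array holding a temporally aligned video clip;
    fps: frame rate in Hz. f_lo/f_hi (here 48-120 beats per minute) and
    gain are illustrative choices.
    """
    # Temporal band-pass filter isolating the cardiac frequency range.
    b, a = butter(2, [f_lo, f_hi], btype="bandpass", fs=fps)
    # Zero-phase filter each pixel's time series (time is axis 0).
    pulsatile = filtfilt(b, a, frames, axis=0)
    # Superimpose the amplified pulsatile component onto the input.
    return frames + gain * pulsatile
```

In practice the clip must be long enough relative to the filter order, and colour channels can be processed in the same way to expose the perfusion-linked fluctuations mentioned above.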

The difficulty in exposing subtle variations in endoscopic video is that the surgical site is highly deformable and dynamic, which typically obscures the location of sub-surface vessels. To compensate for large scene dynamics, we will use a combination of registration and tracking techniques that temporally align specific regions of tissue. Once small variations have been identified, observations from different angles, acquired either by a stereo endoscope or by a moving device, will facilitate the localisation of vessels under the tissue. Mathematically, the theory of sparsity and non-smooth regularisation will be exploited to solve for the vessel position, leveraging existing research in tomographic imaging, e.g. non-convex penalties and adaptive algorithms.
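
To make the flavour of this inverse problem concrete, here is a minimal iterative shrinkage-thresholding (ISTA) sketch for the convex L1-penalised case, min_x 0.5||Ax - y||^2 + lam||x||_1. The forward operator A, which would map a sparse vessel map to the observed surface variations, is a generic placeholder, and the non-convex penalties mentioned above would replace the soft-thresholding step.

```python
import numpy as np

def ista(A, y, lam=0.1, n_iter=200):
    """Solve min_x 0.5 * ||A @ x - y||**2 + lam * ||x||_1 by ISTA.

    A: (m, n) placeholder forward operator; y: (m,) observed variations;
    the sparse solution x stands in for an unknown vessel indicator map.
    """
    # Step size 1/L, with L an upper bound on the Lipschitz constant of
    # the gradient: the squared spectral norm of A.
    L = np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # Gradient step on the quadratic data-fidelity term.
        x = x - (A.T @ (A @ x - y)) / L
        # Soft-thresholding: the proximal map of the L1 penalty.
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)
    return x
```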

The computational techniques to be developed, in the form of inverse problems for recovering and localising the source of subtle motion or colour variations, have wide applicability to many image and vision computing problems, in particular those where large dynamic effects need to be modelled and removed a priori, and where the problem is underdetermined due to the complexity of the structures under investigation.

Planned Impact

Integration of the proposed research methods into clinical practice can lead to significant patient benefits by providing more effective and less traumatic minimally invasive surgical procedures with reduced complication rates. Increasing the uptake and efficacy of minimally invasive surgery can have dramatic advantages for healthcare delivery and related social and economic factors. Enabling patients to return to normal living faster due to minimised trauma reduces the costs of medical treatment and the overall long-term effect of interventional healthcare.

For example, in laparoscopic liver resection, where the target organ is highly vascular but the surgeon has a very limited view of the embedded blood vessels, highlighting the location of vessels can shorten procedure time and reduce the risk of vessel rupture. Amplifying subtle colour variations in the video signal can additionally provide information about organ perfusion and the viability of an anastomosis or of organ function post-revascularisation. This clinical application will be evaluated during the proposed project in collaboration with Professor Brian Davidson at the Royal Free Hospital and his newly established perfusion and imaging laboratory.

The proposed research could also facilitate the advancement of new surgical techniques. For example, in new surgical specialisations such as fetal therapy for twin-to-twin transfusion syndrome, the location of blood vessels beneath the surface of the placenta is critical information for ensuring the viability of laser ablation and the overall success of the intervention. Additional important clinical applications will emerge when the developed techniques are transferred to other intra-operative imaging modalities, such as fluoroscopy and ultrasound.

The project is well aligned with the EPSRC grand challenges in Healthcare Technologies, in particular developing new ways to enhance efficacy, minimise costs and reduce risk to patients during surgery, and pushing the frontiers of physical interventions to achieve repeatable high precision with minimal invasiveness. The cross-cutting capabilities developed in the project to target these grand challenges include disruptive technologies for sensing and analysis; novel computational and mathematical sciences; and novel imaging technologies. The project addresses the latter theme specifically, developing higher-performance, lower-cost imaging with techniques for image reconstruction and high-throughput, real-time imaging at the point of care.

Publications

 
Description Working on the project, we identified physiological signal characteristics that can be used to amplify motion within endoscopic video. We can select specific frequencies from the cardiac cycle and rely on its characteristic form to process the video and amplify otherwise invisible motion around blood vessels. By using only specific signal components we reduce the effect of other motion components that make the amplified signal difficult to interpret; a minimal sketch of this frequency selection is given after this entry.
Exploitation Route The work can be used within a clinical trial to understand the human factors of how such augmented video would be perceived and used by surgeons.
Sectors Healthcare, Pharmaceuticals and Medical Biotechnology
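
As a rough illustration of how such a frequency could be selected, the following Python sketch estimates the dominant cardiac frequency from the mean intensity trace of a tracked tissue patch. The function name and the heart-rate search range are illustrative assumptions, not the project's actual implementation.

```python
import numpy as np

def dominant_cardiac_frequency(trace, fps, f_min=0.7, f_max=3.0):
    """Estimate the heart-rate frequency from a mean-intensity time series.

    trace: 1-D array, e.g. the mean intensity of a temporally tracked
    tissue patch; f_min/f_max bracket a plausible heart-rate range in Hz
    (illustrative values, roughly 42-180 beats per minute).
    """
    trace = trace - trace.mean()                   # remove the DC component
    spectrum = np.abs(np.fft.rfft(trace))          # magnitude spectrum
    freqs = np.fft.rfftfreq(trace.size, d=1.0 / fps)
    band = (freqs >= f_min) & (freqs <= f_max)     # restrict to cardiac range
    return freqs[band][np.argmax(spectrum[band])]  # spectral peak in the band
```

The returned peak frequency can then seed the band edges of the amplification filter described in the abstract.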

 
Description We have demonstrated the use of motion magnification during robotic nerve-sparing prostate cancer surgery. The algorithms, which rely only on keyhole surgery video, can be used to identify motion due to blood vessels embedded within the tissue and potentially prevent inadvertent injury. We demonstrated the technology in the clinical setting at University College Hospital at Westmoreland Street. The underpinning technology was subsequently used as the basis for developing new solutions for endoscopic procedures. These ultimately led to the formation of a UCL spin-out company, Odin Vision Ltd, which has developed AI-assisted products for lower and upper GI procedures. These have been used to assist endoscopic diagnosis in hundreds of patients within the NHS.
First Year Of Impact 2018
Sector Healthcare, Pharmaceuticals and Medical Biotechnology
Impact Types Societal, Economic

 
Description Actuated Robotic Imaging Skins
Amount £2,780,000 (GBP)
Organisation Royal Academy of Engineering 
Sector Charity/Non Profit
Country United Kingdom
Start 10/2019 
End 09/2029
 
Description DTG Scholarship
Amount £96,000 (GBP)
Organisation University of Leeds 
Department Faculty of Engineering
Sector Academic/University
Country United Kingdom
Start 05/2016 
End 05/2020
 
Description EPSRC UK IMAGE-GUIDED THERAPIES NETWORK+
Amount £463,769 (GBP)
Funding ID EP/N027078/2 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 06/2018 
End 07/2019
 
Description Early Career Fellowship
Amount £1,239,250 (GBP)
Funding ID EP/P012841/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 04/2017 
End 03/2022
 
Description EndoMapper: Real-time mapping from endoscopic video
Amount € 3,697,227 (EUR)
Organisation European Commission 
Sector Public
Country European Union (EU)
Start 12/2019 
End 11/2023
 
Description Self-guided Microrobotics for Automated Brain Dissection
Amount £508,091 (GBP)
Funding ID ES/T011866/1 
Organisation Economic and Social Research Council 
Sector Public
Country United Kingdom
Start 02/2020 
End 01/2023
 
Description p-Tentacle: Pneumatically Attachable Flexible Rails For Surgical Applications
Amount £99,000 (GBP)
Funding ID EP/R511638/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 04/2019 
End 03/2020
 
Title Endoscopic Vision Challenge 
Description This is the challenge design document for the "Endoscopic Vision Challenge", accepted for MICCAI 2020. Minimally invasive surgery using cameras to observe the internal anatomy is the preferred approach for many surgical procedures, and other surgical disciplines rely on microscopic images. As a result, endoscopic and microscopic image processing and surgical vision are evolving as techniques needed to facilitate computer-assisted interventions (CAI). Algorithms that have been reported for such images include 3D surface reconstruction, salient feature motion tracking, instrument detection and activity recognition. What has been missing so far, however, are common datasets for consistent evaluation and benchmarking of algorithms against each other. As a vision CAI challenge at MICCAI, our aim is to provide a formal framework for evaluating the current state of the art, to gather researchers in the field, and to provide high-quality data with protocols for validating endoscopic vision algorithms.

Sub-Challenge 1: CATARACTS - Surgical Workflow Analysis. Surgical microscopes and endoscopes are commonly used to observe the anatomy during surgery, and analysing the video signals issued from these tools is evolving as a technique to empower computer-assisted interventions (CAI). A fundamental building block for such capabilities is the ability to automatically understand what the surgeon is performing throughout the surgery. In other words, recognising the surgical activities being performed and segmenting videos into semantic labels that differentiate and localise tissue types and instruments can be deemed essential steps towards CAI. The main motivation for these tasks is to design efficient solutions for surgical workflow analysis, with potential applications in post-operative analysis of the surgical intervention, surgical training and real-time decision support. Our application domain is cataract surgery. As a challenge, our aim is to provide a formal framework for evaluating new and current state-of-the-art methods and to gather researchers in the field of surgical workflow analysis. Analysing the surgical workflow is a prerequisite for many applications in CAI, such as real-time decision support, surgeon skill evaluation and report generation; one crucial step is to recognise the activities being performed by the surgeon throughout the surgery. Visual features have proven their efficiency in such tasks in recent years, so a dataset of cataract surgery videos is used for this task. We have defined twenty surgical activities for cataract procedures. The task consists of identifying the activity at time t using solely visual information from the cataract videos. In particular, it focuses on online workflow analysis of cataract surgery, where the algorithm estimates the surgical phase at time t without seeing any future information.

Sub-Challenge 2: CATARACTS - Semantic Segmentation. Video processing and understanding can be used to empower CAI as well as the development of detailed post-operative analysis of the surgical intervention. A fundamental building block for such capabilities is the ability to understand and segment video frames into semantic labels that differentiate and localise tissue types and instruments. Deep learning has advanced semantic segmentation techniques dramatically in recent years, and different papers have proposed and studied deep learning models for segmenting colour images into body organs and instruments. These studies were, however, performed on different datasets and at different levels of granularity, such as instrument vs. background, instrument category vs. background, and instrument category vs. body organs. In this challenge, we create a fine-grained annotated dataset in which all anatomical structures and instruments are labelled, to allow a standard evaluation of models using the same data at different granularities. We introduce a high-quality dataset for semantic segmentation in cataract surgery, generated from the publicly available CATARACTS challenge dataset. To the best of our knowledge, this dataset has the highest-quality annotation in surgical data to date.

Sub-Challenge 3: MIcro-Surgical Anastomose Workflow recognition on training sessions. Automatic and online recognition of surgical workflow is mandatory to bring computer-assisted surgery (CAS) applications inside the operating room. Depending on the type of surgery, different modalities can be used for workflow recognition. Where the addition of multiple sensors is not possible, the information available for manual surgery is generally restricted to video only; in robotic-assisted surgery, kinematic information is also available. Multimodal data is expected to make automatic recognition easier. The "MIcro-Surgical Anastomose Workflow recognition" (MISAW) sub-challenge provides a unique dataset for online automatic recognition of surgical workflow using both kinematic and stereoscopic video information on a micro-anastomosis training task. Participants are challenged to recognise the surgical workflow online at different granularity levels (phases, steps and activities) by taking advantage of both available modalities. Participants can submit results for one or several granularity levels; in the latter case, they are encouraged (but not required) to submit the result of multi-granularity workflow recognition, i.e. to recognise the different granularity levels with a single model.

Sub-Challenge 4: SurgVisDom - Surgical visual domain adaptation: from virtual reality to real, clinical environments. Surgical data science is revolutionising minimally invasive surgery: by developing algorithms to be context-aware, exciting applications to augment surgeons are becoming possible. However, there are many sensitivities around the surgical data (and health data more generally) needed to develop context-aware models. This challenge explores the potential of visual domain adaptation in surgery to overcome data privacy concerns. In particular, we propose to use video of clinical-like tasks from virtual reality simulation to develop algorithms that recognise activities, and then to test these algorithms on videos of the same tasks in a clinical setting (i.e., a porcine model).
Type Of Material Database/Collection of data 
Year Produced 2020 
Provided To Others? Yes  
Impact The EndoVis challenge series is now running yearly at MICCAI. It has led to the establishment of new data repositories and challenge benchmarking guidelines, and to significant growth in algorithm development for surgical and endoscopic applications. 
URL https://zenodo.org/record/3715645
 
Title Surgical instrument database of video 
Description Database of videos of surgical instruments in a lab setting, useful for computer vision and image analysis development. 
Type Of Material Database/Collection of data 
Year Produced 2016 
Provided To Others? Yes  
Impact There is now an active challenge at MICCAI running on this topic. It is organised by the student, Max Allen, who developed the initial concepts. 
URL http://www.surgicalvision.cs.ucl.ac.uk/resources/benchmarking/#home
 
Description Intuitive Surgical CA 
Organisation Intuitive Surgical Inc
Country United States 
Sector Private 
PI Contribution Development of algorithms and AI models to process video and other robotic data from the Intuitive Surgical system in clinical environments.
Collaborator Contribution Access to data from the Intuitive Surgical system.
Impact N/A
Start Year 2013
 
Description da Vinci Research Kit Consortium partnership 
Organisation Intuitive Surgical Inc
Country United States 
Sector Private 
PI Contribution We have contributed to the development of computer vision software and algorithms for stereoscopic endoscope video.
Collaborator Contribution Donation of equipment and exchange of knowledge and access to data.
Impact The partnership and consortium have resulted in funding from NIH, multiple papers, knowledge exchange and student engagement as well as dissemination activities.
Start Year 2016
 
Title Method And Apparatus For Estimating The Value Of A Physical Parameter In A Biological Tissue 
Description A method and apparatus are provided for estimating the value of a physical parameter of biological tissue. The method comprises acquiring a colour image of the biological tissue from a single image capture device; extracting from the colour image at least two images in respective optical wavebands having a different spectral sensitivity from one another, whereby a given location in the biological tissue is present in each of the extracted images; providing a physical model of the optical properties of the biological tissue, wherein the optical properties of the biological tissue are sensitive to the value of said physical parameter; and estimating the value of the physical parameter at said given location based on an intensity value at that location for each extracted image. The estimating utilises the physical model of the optical properties of the biological tissue and the spectral sensitivity for each respective waveband. 
IP Reference US2019320875 
Protection Patent granted
Year Protection Granted 2019
Licensed No
Impact No impacts at present.
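
For illustration only, and emphatically not the patented method itself, the following Python sketch shows the general idea of inverting a physical optical model from two waveband images: a modified Beer-Lambert model with placeholder extinction coefficients is solved per pixel for haemoglobin concentrations, from which an oxygen saturation estimate follows. All constants, names and inputs are hypothetical.

```python
import numpy as np

# Placeholder extinction coefficients (arbitrary units) for oxy- and
# deoxy-haemoglobin in two wavebands; real values depend on the chosen
# wavebands and the sensor's spectral sensitivities.
EPS = np.array([[1.0, 3.0],   # waveband 1: [HbO2, Hb]
                [2.5, 1.2]])  # waveband 2: [HbO2, Hb]

def estimate_saturation(band1, band2, ref1=1.0, ref2=1.0):
    """Estimate oxygen saturation per pixel from two waveband images.

    Assumes a modified Beer-Lambert model, -log(I / I_ref) = EPS @ c,
    with c the [HbO2, Hb] concentrations; band1/band2 are positive
    intensity images and ref1/ref2 reference intensities (placeholders).
    """
    # Per-pixel absorbance in each waveband, stacked as a (2, H, W) array.
    absorb = np.stack([-np.log(band1 / ref1), -np.log(band2 / ref2)])
    # Invert the 2x2 linear model at every pixel simultaneously.
    conc = np.linalg.solve(EPS, absorb.reshape(2, -1))
    hbo2, hb = conc[0], conc[1]
    sat = hbo2 / np.maximum(hbo2 + hb, 1e-9)  # oxygen saturation fraction
    return sat.reshape(band1.shape)
```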
 
Title Surgical robot arm visualization software 
Description Cinder application designed to enable visualisation of robotic arms in a given configuration overlaid on a camera feed. This is particularly useful for surgical robotics, where we can estimate arm position from joint kinematic data and then render the joints in the camera's field of view; the underlying geometry is sketched after this entry. 
Type Of Technology Software 
Year Produced 2016 
Open Source License? Yes  
Impact Multiple research sites have used it for development and testing of surgical vision algorithms involving robotic instruments. 
URL http://www.surgicalvision.cs.ucl.ac.uk/resources/code/
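
The core geometry behind such an overlay can be sketched in a few lines (a Python sketch rather than the actual Cinder/C++ code): joint positions obtained from forward kinematics are mapped through a hand-eye calibration transform and a pinhole camera model. All names and inputs here are assumptions for illustration.

```python
import numpy as np

def project_joints(joints_3d, K, T_cam_base):
    """Project 3-D robot joint positions into camera pixel coordinates.

    joints_3d: (N, 3) joint positions in the robot base frame, e.g. from
    forward kinematics on joint encoder readings; K: 3x3 camera intrinsic
    matrix; T_cam_base: 4x4 base-to-camera transform from hand-eye
    calibration. All inputs are placeholders for illustration.
    """
    # Homogeneous coordinates, mapped into the camera frame.
    pts = np.hstack([joints_3d, np.ones((len(joints_3d), 1))])
    cam = (T_cam_base @ pts.T)[:3]  # (3, N) points in the camera frame
    # Pinhole projection with perspective division.
    uv = K @ cam
    return (uv[:2] / uv[2]).T       # (N, 2) pixel coordinates for overlay
```

Rendering these pixel coordinates over the live endoscope feed gives the overlay described above.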
 
Company Name ODIN MEDICAL LIMITED 
Description Odin specialises in developing computer aided diagnostic solutions for endoscopic diagnostic and therapeutic procedures. 
Year Established 2018 
Impact Too early to report key achievements.
 
Description Interview for Korean national newspaper 
Form Of Engagement Activity A magazine, newsletter or online publication
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact Dr. Danail Stoyanov was interviewed by one of the largest Korean newspapers on the role of robotics in surgery and healthcare.
Year(s) Of Engagement Activity 2017
URL http://mnews.joins.com/article/2200741
 
Description Keynote talk at CARE Workshop, MICCAI 2017, Quebec, Canada 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Dr. Stoyanov gave a keynote talk at the Computer Assisted and Robotic Endoscopy (CARE) workshop held at MICCAI 2017 in Quebec. The talk was followed by a lively discussion focusing on the future impact of endoscopic computer vision in surgery.
Year(s) Of Engagement Activity 2017
 
Description London Open House @ Here East 
Form Of Engagement Activity Participation in an open day or visit at my research institution
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact London Open House @ Here East
Year(s) Of Engagement Activity 2018
 
Description Science Museum Half Term 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Public/other audiences
Results and Impact This event was a follow-on from Science Museum Lates, talking to families about robotics and imaging in healthcare under the theme of 'Medicine' in the new 'Medicine: The Wellcome Galleries'.
Year(s) Of Engagement Activity 2020
 
Description Science Museum Lates 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Public/other audiences
Results and Impact Two teams at WEISS took part in the Science Museum Lates, talking to over 600 curious adults about robotics and imaging in healthcare under the theme of 'Medicine'. This was followed up with family half term sessions and a lunch event for older people, all in their new Medicine: The Wellcome Galleries.
Year(s) Of Engagement Activity 2020
URL https://www.ucl.ac.uk/interventional-surgical-sciences/news/2020/feb/weiss-features-science-museum-l...
 
Description Science Museum Older People Event 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact 62 people attended the lunch event on robotics and imaging in healthcare under the theme of 'Medicine', in the new 'Medicine: The Wellcome Galleries'.
Year(s) Of Engagement Activity 2020
 
Description Science of Surgery 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact Science of Surgery welcomed over 300 public visitors to WEISS on 12 April 2019 to take part in interactive activities exploring how research across the centre is advancing modern surgery and interventions.
Year(s) Of Engagement Activity 2019
URL https://www.ucl.ac.uk/interventional-surgical-sciences/science-surgery
 
Description Surgical Stand-Off 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact The first of two public digital events pitted six WEISS researchers against each other to pitch their research to 38 members of the public and a panel of six esteemed judges, in an attempt to win the first Surgical Science Stand-Off. The second event attracted 54 members of the public.
Year(s) Of Engagement Activity 2020
URL https://www.ucl.ac.uk/interventional-surgical-sciences/news/2020/nov/surgical-science-stand-ii
 
Description Talk at the Winter Meeting of the European Association of Endoscopic Surgeons (EAES) 2018 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Dr. Danail Stoyanov gave a talk on the application of AI in endoscopic imaging, which prompted an interesting discussion on translation and potential clinical opportunities.
Year(s) Of Engagement Activity 2018
 
Description UCL Being Human Event 'Exploring Under the Skin' 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Other audiences
Results and Impact Talks, an exhibition and an interactive exploration of how the humanities, arts and sciences have, separately and collectively, come together in the endeavour of exploring under our skin, and of our own reactions to this: from artworks to the curiosity cabinets of the 1700s, through to modern ways in which medicine and the humanities collide.
Year(s) Of Engagement Activity 2018
URL https://beinghumanfestival.org/event/exploring-under-the-skin/