Multispectral polarization-resolved endoscopy and vision for intraoperative imaging of tissue microstructure and function

Lead Research Organisation: University College London
Department Name: Computer Science

Abstract

Abstracts are not currently available in GtR for all funded research. This is normally because the abstract was not required at the time of proposal submission, but may be because it included sensitive information such as personal details.

Publications

Ahmad O (2020) Barriers and pitfalls for artificial intelligence in gastroenterology: Ethical and regulatory issues. in Techniques and Innovations in Gastrointestinal Endoscopy

Alabi O (2022) Robust fetoscopic mosaicking from deep learned flow fields. in International Journal of Computer Assisted Radiology and Surgery

Bano S (2020) FetNet: a recurrent convolutional network for occlusion identification in fetoscopic videos. in International Journal of Computer Assisted Radiology and Surgery

Bano S (2020) Deep learning-based fetoscopic mosaicking for field-of-view expansion. in International Journal of Computer Assisted Radiology and Surgery

 
Description The research has led to the development of a new imaging system for use during surgery. The system has now been feasibility-tested in one human patient, and surrogate systems have been used to image several patients at University College Hospital.
Exploitation Route It is too early to say, but the research could lead to new medical device development. Exploration is under way to establish whether the IP developed through this award can lead to commercialisation.
Sectors Digital/Communication/Information Technologies (including Software), Healthcare

 
Description Our research has fed into public engagement events and dissemination in the academic domain. One related patent has been granted and will be used to engage industry stakeholders once the option of a UCL spin-out company has been fully explored.
First Year Of Impact 2021
Sector Digital/Communication/Information Technologies (including Software), Electronics, Healthcare
Impact Types Societal, Economic

 
Description Actuated Robotic Imaging Skins
Amount £2,780,000 (GBP)
Organisation Royal Academy of Engineering 
Sector Charity/Non Profit
Country United Kingdom
Start 10/2019 
End 09/2029
 
Description CADDIE - Computer Aided Detection and Diagnosis for Intelligent Endoscopy
Amount £584,000 (GBP)
Organisation Innovate UK 
Sector Public
Country United Kingdom
Start 05/2019 
End 04/2021
 
Description EARTH SCAN
Amount £1,200,000 (GBP)
Organisation European Space Agency 
Sector Public
Country France
Start 11/2019 
End 10/2022
 
Description EndoMapper: Real-time mapping from endoscopic video
Amount € 3,697,227 (EUR)
Organisation European Commission 
Sector Public
Country European Union (EU)
Start 12/2019 
End 11/2023
 
Title Depth from Endoscopy 
Description The database contains a set of paired images from a colonoscopy/endoscopy simulator that generates both RGB images and corresponding depth images. The data can be used to train models to infer depth from RGB, and can be extended to infer the motion of the camera and a map of the endoluminal environment (a minimal loading sketch is given after this entry). 
Type Of Material Database/Collection of data 
Year Produced 2019 
Provided To Others? Yes  
Impact We are seeing groups worldwide beginning to use the dataset, both in academia and in industry (e.g. a recent Google Health publication). 
URL http://cmic.cs.ucl.ac.uk/ColonoscopyDepth/
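As a rough illustration of how paired RGB/depth data of this kind is typically consumed, the sketch below loads each RGB frame together with its depth map as a PyTorch dataset. The directory layout, file names and 16-bit depth encoding are assumptions made for illustration, not the dataset's documented format.

    # Minimal sketch of a paired RGB/depth loader for data of this kind.
    # Layout assumed (hypothetical): <root>/rgb/*.png and <root>/depth/*.png,
    # with depth stored as 16-bit PNGs. Adjust to the dataset's real format.
    from pathlib import Path

    import numpy as np
    from PIL import Image
    from torch.utils.data import Dataset

    class RGBDepthPairs(Dataset):
        """Yields (rgb, depth) arrays from two parallel image folders."""

        def __init__(self, root):
            root = Path(root)
            self.rgb_paths = sorted((root / "rgb").glob("*.png"))
            self.depth_paths = sorted((root / "depth").glob("*.png"))
            assert len(self.rgb_paths) == len(self.depth_paths)

        def __len__(self):
            return len(self.rgb_paths)

        def __getitem__(self, i):
            rgb = np.asarray(Image.open(self.rgb_paths[i]).convert("RGB"))
            depth = np.asarray(Image.open(self.depth_paths[i]))
            rgb = rgb.astype(np.float32).transpose(2, 0, 1) / 255.0  # CxHxW, [0, 1]
            depth = depth.astype(np.float32)[None] / 65535.0         # 1xHxW, [0, 1]
            return rgb, depth

A torch DataLoader wrapped around this class would batch the pairs for training a depth-regression network; the normalisation constants would of course follow the simulator's actual depth encoding.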
 
Title Endoscopic Vision Challenge 
Description This is the challenge design document for the "Endoscopic Vision Challenge", accepted for MICCAI 2020. Minimally invasive surgery using cameras to observe the internal anatomy is the preferred approach for many surgical procedures, and other surgical disciplines rely on microscopic images. Endoscopic and microscopic image processing, and surgical vision more broadly, are therefore evolving as techniques needed to facilitate computer-assisted interventions (CAI). Algorithms reported for such images include 3D surface reconstruction, salient feature motion tracking, instrument detection and activity recognition. What is missing so far, however, are common datasets for consistent evaluation and benchmarking of algorithms against each other. As a vision CAI challenge at MICCAI, our aim is to provide a formal framework for evaluating the current state of the art, to gather researchers in the field, and to provide high-quality data with protocols for validating endoscopic vision algorithms.
Sub-Challenge 1: CATARACTS - Surgical Workflow Analysis. Analyzing the surgical workflow is a prerequisite for many CAI applications, such as real-time decision support, surgeon skill evaluation, surgical training, report generation and post-operative analysis of the intervention. A crucial step is recognizing the activities being performed by the surgeon throughout the surgery, and visual features have proven efficient for this task in recent years. This sub-challenge therefore uses a dataset of cataract surgery videos for which twenty surgical activities have been defined. The task consists of identifying the activity at time t using solely visual information from the cataract videos; in particular, it focuses on online workflow analysis, where the algorithm estimates the surgical phase at time t without seeing any future information (an illustrative sketch of such an online model is given after this entry).
Sub-Challenge 2: CATARACTS - Semantic Segmentation. A fundamental building block for CAI and for detailed post-operative analysis is the ability to segment video frames into semantic labels that differentiate and localize tissue types and instruments. Deep learning has advanced semantic segmentation dramatically in recent years, and several papers have proposed and studied deep learning models for segmenting color images into body organs and instruments. These studies are, however, performed on different datasets and at different levels of granularity, such as instrument vs. background, instrument category vs. background, and instrument category vs. body organs. In this challenge we created a fine-grained annotated dataset in which all anatomical structures and instruments are labelled, allowing a standard evaluation of models using the same data at different granularities. The dataset was generated from the publicly available CATARACTS challenge dataset and, to the best of our knowledge, has the highest-quality annotation in surgical data to date.
Sub-Challenge 3: MISAW - MIcro-Surgical Anastomose Workflow recognition on training sessions. Automatic and online recognition of surgical workflows is mandatory to bring computer-assisted surgery (CAS) applications inside the operating room. Depending on the type of surgery, different modalities can be used for workflow recognition: where additional sensors cannot be added, the information available for manual surgery is generally restricted to video only, whereas robotic-assisted surgery also provides kinematic information, and multimodal data is expected to make automatic recognition easier. The MISAW sub-challenge provides a unique dataset for online automatic recognition of surgical workflow using both kinematic and stereoscopic video information from a micro-anastomosis training task. Participants are challenged to recognize the surgical workflow online at different granularity levels (phases, steps and activities) by taking advantage of both available modalities. Results may be submitted for one or several granularity levels; in the latter case, participants are encouraged (but not required) to submit a multi-granularity workflow recognition, i.e. to recognize the different granularity levels with a single model.
Sub-Challenge 4: SurgVisDom - Surgical visual domain adaptation: from virtual reality to real, clinical environments. Surgical data science is revolutionizing minimally invasive surgery: by developing algorithms to be context-aware, exciting applications to augment surgeons become possible. However, there are many sensitivities around the surgical data (or health data more generally) needed to develop context-aware models. This challenge explores the potential of visual domain adaptation in surgery to overcome data privacy concerns: we propose to use video from virtual reality simulations of clinical-like tasks to develop activity recognition algorithms, and then to test these algorithms on videos of the same tasks in a clinical setting (i.e. a porcine model). 
Type Of Material Database/Collection of data 
Year Produced 2020 
Provided To Others? Yes  
Impact The EndoVis challenge series is now running yearly at MICCAI. It has led to the establishment of new data repositories, challenge benchmarking guidelines and significant growth in algorithm development for surgical and endoscopic applications. 
URL https://zenodo.org/record/3715645
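To make the online constraint of Sub-Challenge 1 concrete (estimating the phase at time t without seeing future frames), the sketch below combines a per-frame CNN with a unidirectional recurrent layer, so each prediction depends only on frames up to t. The architecture, sizes and backbone choice are generic illustrations, not the challenge baseline.

    # Sketch of an online (causal) surgical phase recognizer: a per-frame
    # CNN backbone feeds a unidirectional GRU, so the logits at step t use
    # no future frames. All design choices here are illustrative only.
    import torch.nn as nn
    from torchvision.models import resnet18

    class OnlinePhaseRecognizer(nn.Module):
        def __init__(self, num_phases=20):  # e.g. the twenty cataract activities
            super().__init__()
            backbone = resnet18(weights=None)
            backbone.fc = nn.Identity()       # expose 512-d per-frame features
            self.backbone = backbone
            self.temporal = nn.GRU(512, 256, batch_first=True)  # unidirectional
            self.head = nn.Linear(256, num_phases)

        def forward(self, frames):            # frames: (B, T, 3, H, W)
            b, t = frames.shape[:2]
            feats = self.backbone(frames.flatten(0, 1)).view(b, t, -1)
            hidden, _ = self.temporal(feats)  # causal: no future context
            return self.head(hidden)          # (B, T, num_phases) logits

The unidirectional GRU (rather than a bidirectional layer or full-sequence attention) is what enforces the online behaviour the task description requires; offline post-operative analysis could relax this constraint.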
 
Title Surgical Visual Domain Adaptation: Results from the MICCAI 2020 SurgVisDom Challenge 
Description Surgical data science is revolutionizing minimally invasive surgery by enabling context-aware applications. However, many challenges exist around the surgical data (and health data, more generally) needed to develop context-aware models. This work - presented as part of the Endoscopic Vision (EndoVis) challenge at the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2020 conference - seeks to explore the potential for visual domain adaptation in surgery to overcome data privacy concerns. In particular, we propose to use video from virtual reality (VR) simulations of surgical exercises in robotic-assisted surgery to develop algorithms to recognize tasks in a clinical-like setting. We present the performance of the different approaches to visual domain adaptation developed by challenge participants. Our analysis shows that the presented models were unable to learn meaningful motion-based features from VR data alone, but did significantly better when a small amount of clinical-like data was also made available. Based on these results, we discuss promising methods and further work to address the problem of visual domain adaptation in surgical data science. We also release the challenge dataset publicly at https://www.synapse.org/surgvisdom2020. 
Type Of Material Database/Collection of data 
Year Produced 2021 
Provided To Others? Yes  
Impact Domain adaptation can support the 3Rs by allowing AI models to be developed on simulation datasets.
URL https://arxiv.org/abs/2102.13644
 
Description Collaboration with Odin Vision Ltd 
Organisation Odin Vision
Sector Private 
PI Contribution Research and supervision of PhD students to develop novel AI technologies for endoscopy.
Collaborator Contribution Provision of infrastructure for video data processing, storage and labelling to develop AI models. Funding for PhD studentships.
Impact No outputs yet as the collaboration has just begun.
Start Year 2020
 
Description Intuitive Surgical CA 
Organisation Intuitive Surgical Inc
Country United States 
Sector Private 
PI Contribution Development of algorithms and AI models to process video and other robotic data from the Intuitive Surgical system in clinical environments.
Collaborator Contribution Access to data from the Intuitive Surgical system.
Impact N/A
Start Year 2013
 
Title Method And Apparatus For Estimating The Value Of A Physical Parameter In A Biological Tissue 
Description A method and apparatus are provided for estimating the value of a physical parameter of biological tissue. The method comprises acquiring a colour image of the biological tissue from a single image capture device; extracting from the colour image at least two images in respective optical wavebands having a different spectral sensitivity from one another, whereby a given location in the biological tissue is present in each of the extracted images; providing a physical model of the optical properties of the biological tissue, wherein the optical properties of the biological tissue are sensitive to the value of said physical parameter; and estimating the value of the physical parameter at said given location based on an intensity value at that location for each extracted image. The estimating utilises the physical model of the optical properties of the biological tissue and the spectral sensitivity for each respective waveband. 
IP Reference US2019320875 
Protection Patent granted
Year Protection Granted 2019
Licensed No
Impact No impacts at present.
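The patent above describes estimating a physical parameter by fitting a physical model of tissue optics to intensities extracted at one location in several wavebands. The sketch below illustrates that idea with a deliberately simple Beer-Lambert model of two chromophores (oxy- and deoxy-haemoglobin) fitted by least squares; the extinction values are placeholders and the whole model is an assumption for illustration, far simpler than a real device's calibrated optics.

    # Toy illustration of model-based parameter estimation from multispectral
    # intensities. EPS_* are placeholder extinction coefficients for three
    # hypothetical wavebands, not measured values.
    import numpy as np
    from scipy.optimize import least_squares

    EPS_HBO2 = np.array([0.6, 1.2, 0.9])  # oxyhaemoglobin, per waveband
    EPS_HB = np.array([1.1, 0.7, 1.0])    # deoxyhaemoglobin, per waveband

    def predicted_intensity(params, i0=1.0):
        """Beer-Lambert attenuation for concentrations (c_hbo2, c_hb)."""
        c_hbo2, c_hb = params
        absorbance = EPS_HBO2 * c_hbo2 + EPS_HB * c_hb
        return i0 * np.exp(-absorbance)

    def estimate_saturation(intensities):
        """Estimate the oxygen-saturation fraction at one image location."""
        fit = least_squares(
            lambda p: predicted_intensity(p) - intensities,
            x0=[0.5, 0.5],
            bounds=([0.0, 0.0], [np.inf, np.inf]),
        )
        c_hbo2, c_hb = fit.x
        return c_hbo2 / (c_hbo2 + c_hb)

For example, estimate_saturation(np.array([0.4, 0.5, 0.45])) returns the fitted saturation at one location; a full implementation would repeat this per pixel across the waveband images extracted from the single colour capture.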
 
Title CADDIE - Computer Aided Detection and Diagnosis for Intelligent Endoscopy 
Description CADDIE - Computer Aided Detection and Diagnosis for Intelligent Endoscopy - is an Innovate UK-funded project to put technology from Odin Vision Ltd (a UCL spin-out) into a clinical trial at UCLH. The company's first product uses AI to assist the detection and characterisation of polyps during video colonoscopy. 
Type Support Tool - For Medical Intervention
Current Stage Of Development Early clinical assessment
Year Development Stage Completed 2019
Development Status Under active development/distribution
Impact Similar technology is being developed to support interventions in upper GI endoscopy. 
 
Company Name ODIN MEDICAL LIMITED 
Description Odin specialises in developing computer-aided diagnostic solutions for endoscopic diagnostic and therapeutic procedures. 
Year Established 2018 
Impact Too early to report key achievements.
 
Description Endoscopic Imaging and AI - Artificial Intelligence in Medicine, Enterprise Network Europe, London, UK 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact Dr. Stoyanov gave a talk at an event organised to facilitate exchange and new business opportunities between the UK and Austria, co-organised with the Austrian Trade Commission. The event was well attended and involved interesting discussion of how AI will influence clinical decision making.
Year(s) Of Engagement Activity 2018
 
Description Science Museum Half Term 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Public/other audiences
Results and Impact This event was a follow-on from the Science Museum Lates: talking to families about robotics and imaging in healthcare under the theme of 'Medicine' in the new 'Medicine: The Wellcome Galleries'.
Year(s) Of Engagement Activity 2020
 
Description Science Museum Lates 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Public/other audiences
Results and Impact Two teams at WEISS took part in the Science Museum Lates, talking to over 600 curious adults about robotics and imaging in healthcare under the theme of 'Medicine'. This was followed up with family half-term sessions and a lunch event for older people, all in the new 'Medicine: The Wellcome Galleries'.
Year(s) Of Engagement Activity 2020
URL https://www.ucl.ac.uk/interventional-surgical-sciences/news/2020/feb/weiss-features-science-museum-l...
 
Description Science Museum Older People Event 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact 62 people attended the lunch event on robotics and imaging in healthcare under the theme of 'Medicine', in the new 'Medicine: The Wellcome Galleries'.
Year(s) Of Engagement Activity 2020
 
Description Science of Surgery 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact Science of Surgery welcomed over 300 public visitors to WEISS on 12 April 2019 to take part in interactive activities exploring how research across the centre is advancing modern surgery and interventions.
Year(s) Of Engagement Activity 2019
URL https://www.ucl.ac.uk/interventional-surgical-sciences/science-surgery
 
Description Surgical Stand-Off 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact The first of two public digital events pitted six WEISS researchers against one another, pitching their research to 38 members of the public and a panel of six esteemed judges in an attempt to win the first Surgical Science Stand-Off. The second event attracted 54 members of the public.
Year(s) Of Engagement Activity 2020
URL https://www.ucl.ac.uk/interventional-surgical-sciences/news/2020/nov/surgical-science-stand-ii
 
Description UCL Being Human Event 'Exploring Under the Skin' 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Other audiences
Results and Impact Talks, an exhibition and an interactive exploration of how the humanities, arts and sciences have separately and collectively come together in the endeavour of exploring under our skin, and of our own reactions to this: from artworks, to the curiosity cabinets of the 1700s, through to the modern ways that medicine and the humanities collide.
Year(s) Of Engagement Activity 2018
URL https://beinghumanfestival.org/event/exploring-under-the-skin/
 
Description UCLH Research Open Day 2018 & 2019 
Form Of Engagement Activity Participation in an open day or visit at my research institution
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact An open day allowing attendees to walk through the last 70 years of NHS stories and research, with talks by doctors and scientists about the work they are doing to improve the healthcare of patients both at UCLH and nationally.
Year(s) Of Engagement Activity 2018,2019
 
Description WEISS Science of Surgery Event 
Form Of Engagement Activity Participation in an open day or visit at my research institution
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Public/other audiences
Results and Impact An open day event taking place at the WEISS research centre in Charles Bell House, 43-45 Foley Street, Fitzrovia, London: hands-on activities (such as give-it-a-go surgery, magic tricks and creating your own soundtrack using nano-sensors) exploring how science and technology are shaping our lives through health and medicine, from medical imaging to robots and sensors.
Year(s) Of Engagement Activity 2019,2022