Robotic Assisted Imaging

Lead Research Organisation: University College London
Department Name: Computer Science

Abstract

The paradigm of modern surgical treatment is to reduce the invasive trauma of procedures by using small keyhole ports to enter the body. Robotic assistant systems provide tele-manipulated instruments that facilitate minimally invasive surgery by improving on the ergonomics, dexterity and precision of manually controlled keyhole surgery instruments. Robotic surgery is now common for minimally invasive prostate and renal cancer procedures. But imaging inside the body is currently restricted by the access port and only provides information at visible organ surfaces, which is often insufficient for easy localisation within the anatomy and for avoiding inadvertent damage to healthy tissues.

This project will develop robotic assisted imaging which will exploit the autonomy and actuation capabilities provided by robotic platforms, to optimise the images that can be acquired by current surgical imaging modalities. In the context of robotic assisted surgery, now an established surgical discipline, advanced imaging can help the surgeon to operate more safely and efficiently by allowing the identification of structures that need to be preserved while guiding the surgeon to anatomical targets that need to be removed. Providing better imaging and integration with the robotic system will result in multiple patient benefits by ensuring safe, accurate surgical actions that lead to improved outcomes.

To deliver this functionality, new theory, computing, control algorithms and real-time implementations are needed to underpin the integration of imaging and robotic systems within dynamic environments. Information observed by the imaging sensor needs to feed back into the robotic control loop to guide automatic sensor positioning and movement that maintains the alignment of the sensor to moving organs and structures. This level of automation is largely unexplored in robotic assisted surgery at present because it involves multiple challenges: visual inference, reconstruction and tracking; calibration and re-calibration of sensors and various robot kinematic strategies; and integration with surgical workflow and user studies. Combined with the use of pre-procedural planning, robotic assisted imaging can lead to pre-planned imaging choices that are motivated by different clinical needs.
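Concretely, closing the loop means mapping an error measured in the image to a motion command for the imaging sensor. A minimal sketch of one image-based visual servoing step is given below (Python; the point-feature interaction matrix is the textbook formulation, while the feature tracker, depth estimates and robot velocity interface are assumed, hypothetical inputs rather than anything specified by the project):

    # Minimal image-based visual servoing step (illustrative sketch).
    # Feature positions, depths and the robot velocity interface are
    # hypothetical inputs; only the interaction matrix is standard.
    import numpy as np

    def interaction_matrix(u, v, z, f=1.0):
        # Image Jacobian for one point feature at image coordinates (u, v)
        # with depth z and focal length f.
        return np.array([
            [-f / z, 0.0, u / z, u * v / f, -(f + u * u / f), v],
            [0.0, -f / z, v / z, f + v * v / f, -u * v / f, -u],
        ])

    def servo_step(features, targets, depths, gain=0.5):
        # Proportional IBVS: returns a 6-DoF camera velocity twist that
        # drives tracked features towards their desired image positions.
        L = np.vstack([interaction_matrix(u, v, z)
                       for (u, v), z in zip(features, depths)])
        error = (np.asarray(features) - np.asarray(targets)).reshape(-1)
        return -gain * np.linalg.pinv(L) @ error

The twist returned by servo_step would then be sent to the robot controller at each frame, which is exactly the sensor-to-control feedback path described above.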

As well as having direct applications in surgery, the robotic assisted imaging paradigm will be applicable to many other sectors transformed by robotics, for example manufacturing or inspection, especially when working within non-rigid environments. To achieve this cross-sector impact, the project will build the deep theoretical foundations and robust software platforms that are ideally suited to foundational fellowship support.

Planned Impact

The impact of the proposed fellowship research will be widespread, with cross-disciplinary potential. Given the predicted uptake of robotic systems in the near future, the research has applications in multiple sectors:

Healthcare: The focus for the end-point outcomes of the fellowship project is to improve real-time imaging during surgery by using the current systems for robotic prostate and kidney surgery, as well as the currently available intra-operative imaging modalities, and by developing computational support and capabilities. This has the potential to reduce surgical complications, patient readmission, disease-specific complications such as kidney ischemia damage during vessel clamping, and operative risks generally, while increasing the success of surgery. These objectives, if achieved, will have a significant measurable impact on reducing mortality and increasing post-surgical quality of life, specifically for this fellowship in kidney and prostate cancers, but with potential applications in a wide range of clinical indications. Such impact would have obvious societal and economic measures.

Impact on Medical Technology: Facilitating new imaging during surgery has important synergies with surgical instrumentation, especially with emerging robotic systems. Multinational corporations in this space have recently made significant investments and acquisitions (Google, Johnson & Johnson - Ethicon, Medtronic, Stryker, and others) which are likely to result in a surge in surgical robot devices in the next five years. This will be a fertile ground for commercial exploitation of project outputs. The industrial partner, Intuitive Surgical, also provides a clear translational path within their platform, which is already used in over 500k procedures per year in different surgical specialisations. Advisory group steering will also help with links to medical device exploitation pathways.

Academic Impact: Computer vision for real-time surgical procedures is a highly active and challenging area that is strongly represented at international conferences such as MICCAI, IPCAI and, more recently, ICRA and IROS through links to robotics. Additional dissemination will happen at the leading vision meetings (CVPR, ECCV and ICCV) for theoretical work on fundamental vision problems in both geometry (calibration and reconstruction) and inference (detection, tracking). The intersection of these areas, incorporating sensor modelling, error propagation handling, highly novel nonlinear motion modelling, visual servoing and control system integration, will lead to publications in such high-profile meetings as well as in high-impact journals (see Pathways to Impact and Cost of Support).

Impact on the Wider Research Community: The combination of imaging methodologies and robotics through i) data fusion, ii) multimodality systems, and iii) modelling and inverse problems is a recognised set of active topics, not only in surgical robotics and medical imaging but more widely, e.g. in seismology, non-destructive testing, nuclear maintenance and decommissioning, manufacturing and aerospace. Cross-fertilisation between such different application areas is naturally stimulated through UCL Robotics, CMIC and the various other application-focused vehicles throughout UCL. The Alan Turing Institute, in close proximity to UCL, will allow participation in wider programmes of research on large-scale data processing, exploiting theoretical developments in mathematics, signal processing and machine learning. This, together with the EPSRC Networks, will provide a rapid and effective mechanism for presenting project results.

Impact on teaching: Software developed throughout the project within structured frameworks will contribute to toolbox software used within new and established programmes such as the MSc in Robotics and Computation and the MRes in Robotics.

Publications


Camboni D (2021) Endoscopic Tactile Capsule for Non-Polypoid Colorectal Tumour Detection in IEEE Transactions on Medical Robotics and Bionics

Chadebecq F (2023) Artificial intelligence and automation in endoscopy and surgery. in Nature reviews. Gastroenterology & hepatology

Chadebecq F (2020) Refractive Two-View Reconstruction for Underwater 3D Vision. in International journal of computer vision

Cheung S (2020) Comparison of manual versus robot-assisted contralateral gate cannulation in patients undergoing endovascular aneurysm repair in International Journal of Computer Assisted Radiology and Surgery

Clancy NT (2018) Spectral Imaging Of Thermal Damage Induced During Microwave Ablation In The Liver. in Annual International Conference of the IEEE Engineering in Medicine and Biology Society. IEEE Engineering in Medicine and Biology Society. Annual International Conference

Colleoni E (2021) Robotic Instrument Segmentation With Image-to-Image Translation in IEEE Robotics and Automation Letters

Colleoni E (2022) SSIS-Seg: Simulation-Supervised Image Synthesis for Surgical Instrument Segmentation. in IEEE transactions on medical imaging

 
Description Key research findings so far include new algorithms for calibrating between camera and robotic coordinate frames (a generic sketch of this hand-eye problem is given below), a new design of soft-robotic rails that can be used to support robotic assisted imaging in surgery, algorithms for ultra-fast reconstruction of the surgical site from video, and algorithms for understanding the site through high-level segmentation, instrument tracking and video phase identification. These all build towards a framework for better robotic assistance during surgery, combining imaging, robot control and interfaces to the surgeon.
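The camera-to-robot calibration mentioned above is an instance of the classical hand-eye problem (solving AX = XB from paired robot and camera poses). As a generic illustration only, not the new algorithms developed in the project, OpenCV's standard solver can be used like this:

    # Generic hand-eye calibration illustration (OpenCV's standard solver,
    # not the project's own algorithm): recovers the fixed camera-to-gripper
    # transform X from paired robot and calibration-target poses.
    import cv2
    import numpy as np

    def hand_eye(robot_poses, target_poses):
        # robot_poses: 4x4 gripper-to-base transforms from robot kinematics.
        # target_poses: 4x4 target-to-camera transforms, e.g. from chessboard
        # pose estimation captured at the same instants.
        R_g2b = [T[:3, :3] for T in robot_poses]
        t_g2b = [T[:3, 3] for T in robot_poses]
        R_t2c = [T[:3, :3] for T in target_poses]
        t_t2c = [T[:3, 3] for T in target_poses]
        R, t = cv2.calibrateHandEye(R_g2b, t_g2b, R_t2c, t_t2c,
                                    method=cv2.CALIB_HAND_EYE_TSAI)
        X = np.eye(4)
        X[:3, :3], X[:3, 3] = R, t.ravel()
        return X  # camera pose expressed in the gripper frame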
Exploitation Route The work has led to a continuing collaboration with Intuitive Surgical Inc. (CA, USA), who support us with the da Vinci surgical system and access to its data, as well as the dVRK platforms. Algorithms are typically made open source and disseminated through GitHub, and we continually release open datasets whenever possible for use by the community.
Sectors Healthcare

 
Description The research has led to multiple public speaking opportunities and public engagement events. We have published extensively to disseminate the work, and several patents have been filed. Findings on extracting information from surgical video have been used to drive product development at UK SMEs such as Digital Surgery Ltd (acquired by Medtronic plc in 2020). Products powered by algorithms that detect and track instrumentation and segment video into sub-components (phases) are now available on the international market.
First Year Of Impact 2020
Sector Digital/Communication/Information Technologies (including Software),Healthcare
Impact Types Economic,Policy & public services

 
Description Actuated Robotic Imaging Skins
Amount £2,780,000 (GBP)
Organisation Royal Academy of Engineering 
Sector Charity/Non Profit
Country United Kingdom
Start 10/2019 
End 09/2029
 
Description CADDIE - Computer Aided Detection and Diagnosis for Intelligent Endoscopy
Amount £584,000 (GBP)
Organisation Innovate UK 
Sector Public
Country United Kingdom
Start 05/2019 
End 04/2021
 
Description EARTH SCAN
Amount £1,200,000 (GBP)
Organisation European Space Agency 
Sector Public
Country France
Start 11/2019 
End 10/2022
 
Description EndoMapper: Real-time mapping from endoscopic video
Amount € 3,697,227 (EUR)
Organisation European Commission 
Sector Public
Country European Union (EU)
Start 12/2019 
End 11/2023
 
Description IGT Network+
Amount £50,000 (GBP)
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 06/2017 
End 01/2018
 
Description MedCity Collaborate to Innovate
Amount £100,000 (GBP)
Organisation MedCity 
Sector Public
Country United Kingdom
Start 04/2017 
End 04/2018
 
Description NIHR i4i - Image-Guided Micro-Precise Flexible Robotic Tools for Retinal Therapeutics Delivery
Amount £1,017,363 (GBP)
Funding ID II-LB-0716-20002 
Organisation National Institute for Health Research 
Department NIHR i4i Invention for Innovation (i4i) Programme
Sector Public
Country United Kingdom
Start 08/2017 
End 08/2020
 
Description NIHR i4i Programme - A multi-modality, surgical planning and guidance system to improve the up-take of laparoscopic liver resection
Amount £1,300,000 (GBP)
Funding ID II-LA-1116-20005 
Organisation National Institute for Health Research 
Department NIHR i4i Invention for Innovation (i4i) Programme
Sector Public
Country United Kingdom
Start 01/2018 
End 01/2021
 
Description Simulation of Complex Off-Road Environments for Autonomous Vehicle Development
Amount £929,000 (GBP)
Organisation Innovate UK 
Sector Public
Country United Kingdom
Start 04/2019 
End 03/2021
 
Description UKRI AI Centre for Doctoral Training in Foundational Artificial Intelligence
Amount £6,443,206 (GBP)
Funding ID EP/S021566/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 03/2019 
End 09/2027
 
Description p-Tentacle: Pneumatically Attachable Flexible Rails For Surgical Applications
Amount £99,000 (GBP)
Funding ID EP/R511638/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 04/2019 
End 03/2020
 
Title 2020 CATARACTS Semantic Segmentation Challenge 
Description Surgical scene segmentation is essential for anatomy and instrument localization which can be further used to assess tissue-instrument interactions during a surgical procedure. In 2017, the Challenge on Automatic Tool Annotation for cataRACT Surgery (CATARACTS) released 50 cataract surgery videos accompanied by instrument usage annotations. These annotations included frame-level instrument presence information. In 2020, we released pixel-wise semantic annotations for anatomy and instruments for 4670 images sampled from 25 videos of the CATARACTS training set. The 2020 CATARACTS Semantic Segmentation Challenge, which was a sub-challenge of the 2020 MICCAI Endoscopic Vision (EndoVis) Challenge, presented three sub-tasks to assess participating solutions on anatomical structure and instrument segmentation. Their performance was assessed on a hidden test set of 531 images from 10 videos of the CATARACTS test set. 
Type Of Material Database/Collection of data 
Year Produced 2021 
Provided To Others? Yes  
Impact The dataset is at the moment the largest surgical video (microsurgical) dataset with labels for semantic segmentation. It was used in the EndoVis challenge at MICCAI, which received sponsorship from Medtronic plc. Since then it has stimulated algorithm and model development based on the data. 
URL https://arxiv.org/abs/2110.10965
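Challenges of this kind are typically scored with per-class overlap metrics on the hidden test set. A minimal mean-IoU sketch is shown below (an illustrative metric implementation, not the official challenge evaluation code):

    # Illustrative per-class IoU for semantic segmentation label maps;
    # not the official CATARACTS challenge evaluation code.
    import numpy as np

    def mean_iou(pred, gt, num_classes):
        # pred, gt: integer label maps of identical shape.
        ious = []
        for c in range(num_classes):
            p, g = pred == c, gt == c
            union = np.logical_or(p, g).sum()
            if union == 0:
                continue  # class absent from both maps; do not score it
            ious.append(np.logical_and(p, g).sum() / union)
        return float(np.mean(ious))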
 
Title Deep Placental Vessel Segmentation for Fetoscopic Mosaicking 
Description During fetoscopic laser photocoagulation, a treatment for twin-to-twin transfusion syndrome (TTTS), the clinician first identifies abnormal placental vascular connections and laser ablates them to regulate blood flow in both fetuses. The procedure is challenging due to the mobility of the environment, poor visibility in amniotic fluid, occasional bleeding, and limitations in the fetoscopic field of view and image quality. Ideally, anastomotic placental vessels would be automatically identified, segmented and registered to create expanded vessel maps to guide laser ablation; however, such methods have yet to be clinically adopted. We propose a solution utilising the U-Net architecture for placental vessel segmentation in fetoscopic videos. The obtained vessel probability maps provide sufficient cues for mosaicking alignment by registering consecutive vessel maps using a direct intensity-based technique. Experiments on 6 different in vivo fetoscopic videos demonstrate that vessel intensity-based registration outperformed image intensity-based approaches, showing better robustness in qualitative and quantitative comparisons. We additionally reduce drift accumulation to negligible levels, even for sequences with up to 400 frames, and we incorporate a scheme for quantifying drift error in the absence of ground truth. Our paper provides a benchmark for fetoscopic placental vessel segmentation and registration by contributing the first in vivo vessel segmentation and fetoscopic video dataset. 
Type Of Material Database/Collection of data 
Year Produced 2020 
Provided To Others? Yes  
Impact First fetoscopic dataset to be released for open research. Led to the development of the enhanced FetReg dataset released in 2021. 
URL https://arxiv.org/abs/2007.04349
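The mosaicking alignment described above can be illustrated with a standard direct intensity-based registration of consecutive vessel probability maps; the sketch below uses OpenCV's ECC maximisation as a stand-in, not the paper's exact implementation:

    # Direct intensity-based alignment of consecutive vessel probability
    # maps via ECC maximisation (illustrative stand-in for the paper's method).
    import cv2
    import numpy as np

    def register_vessel_maps(prev_map, next_map):
        # prev_map, next_map: float32 vessel probability maps in [0, 1].
        warp = np.eye(2, 3, dtype=np.float32)
        criteria = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 200, 1e-6)
        _, warp = cv2.findTransformECC(prev_map, next_map, warp,
                                       cv2.MOTION_AFFINE, criteria)
        return warp  # 2x3 affine aligning next_map onto prev_map

Chaining these pairwise warps frame to frame produces the expanded mosaic; drift can then be quantified by composing the warps over a sequence and measuring the displacement of the image corners.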
 
Title Depth from Endoscopy 
Description The database contains a set of paired images from a colonoscopy/endoscopy simulator that generates RGB images and corresponding depth images. The data can be used to train models to infer depth from RGB, and can be extended to infer the motion of the camera and a map of the endoluminal environment. 
Type Of Material Database/Collection of data 
Year Produced 2019 
Provided To Others? Yes  
Impact We are seeing groups worldwide beginning to use the dataset, both in academia and in industry (e.g. a recent Google Health publication). 
URL http://cmic.cs.ucl.ac.uk/ColonoscopyDepth/
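A minimal sketch of how such paired RGB/depth data can supervise a monocular depth model is shown below (PyTorch; the toy network and L1 loss are illustrative assumptions, not the method used to produce or validate the dataset):

    # Toy supervised depth-from-RGB training step on paired simulator data.
    # Network architecture and loss are illustrative choices only.
    import torch
    import torch.nn as nn

    class TinyDepthNet(nn.Module):
        # Small fully convolutional network predicting one depth per pixel.
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 1, 3, padding=1),
            )

        def forward(self, x):
            return self.net(x)

    def train_step(model, optimiser, rgb, depth):
        # rgb: (B,3,H,W) images; depth: (B,1,H,W) simulator ground truth.
        optimiser.zero_grad()
        loss = nn.functional.l1_loss(model(rgb), depth)
        loss.backward()
        optimiser.step()
        return loss.item()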
 
Title Endoscopic Vision Challenge 
Description This is the challenge design document for the "Endoscopic Vision Challenge", accepted for MICCAI 2020. Minimally invasive surgery using cameras to observe the internal anatomy is the preferred approach to many surgical procedures, and other surgical disciplines rely on microscopic images. As a result, endoscopic and microscopic image processing as well as surgical vision are evolving as techniques needed to facilitate computer assisted interventions (CAI). Algorithms that have been reported for such images include 3D surface reconstruction, salient feature motion tracking, instrument detection and activity recognition. However, what is missing so far are common datasets for consistent evaluation and benchmarking of algorithms against each other. As a vision CAI challenge at MICCAI, our aim is to provide a formal framework for evaluating the current state of the art, gather researchers in the field and provide high-quality data with protocols for validating endoscopic vision algorithms.

Sub-Challenge 1: CATARACTS - Surgical Workflow Analysis. Surgical microscopes or endoscopes are commonly used to observe the anatomy of the organs in surgery, and analyzing the video signals from these tools is evolving as a technique needed to empower computer-assisted interventions (CAI). A fundamental building block for such capabilities is the ability to automatically understand what the surgeon is performing throughout the surgery. In other words, recognizing the surgical activities being performed and segmenting videos into semantic labels that differentiate and localize tissue types and different instruments can be deemed essential steps toward CAI. The main motivation for these tasks is to design efficient solutions for surgical workflow analysis, with potential applications in post-operative analysis of the surgical intervention, surgical training and real-time decision support. Our application domain is cataract surgery. As a challenge, our aim is to provide a formal framework for evaluating new and current state-of-the-art methods and to gather researchers in the field of surgical workflow analysis. Analyzing the surgical workflow is a prerequisite for many applications in CAI, such as real-time decision support, surgeon skill evaluation and report generation, and one crucial step is to recognize the activities being performed by the surgeon throughout the surgery. Visual features have proven their efficiency in such tasks in recent years, so a dataset of cataract surgery videos is used for this task. We have defined twenty surgical activities for cataract procedures. The task consists of identifying the activity at time t using solely visual information from the cataract videos. In particular, it focuses on online workflow analysis of cataract surgery, where the algorithm estimates the surgical phase at time t without seeing any future information.

Sub-Challenge 2: CATARACTS - Semantic Segmentation. Video processing and understanding can be used to empower CAI as well as detailed post-operative analysis of the surgical intervention. A fundamental building block for such capabilities is the ability to understand and segment video frames into semantic labels that differentiate and localize tissue types and different instruments. Deep learning has advanced semantic segmentation techniques dramatically in recent years, and different papers have proposed and studied deep learning models for the task of segmenting color images into body organs and instruments. These studies are, however, performed on different datasets and at different levels of granularity, such as instrument vs. background, instrument category vs. background, and instrument category vs. body organs. In this challenge, we create a fine-grained annotated dataset in which all anatomical structures and instruments are labelled to allow for a standard evaluation of models using the same data at different granularities. We introduce a high-quality dataset for semantic segmentation in cataract surgery, generated from the publicly available CATARACTS challenge dataset. To the best of our knowledge, this dataset has the highest-quality annotation in surgical data to date.

Sub-Challenge 3: MIcro-Surgical Anastomose Workflow recognition on training sessions. Automatic and online recognition of surgical workflows is mandatory to bring computer assisted surgery (CAS) applications inside the operating room. Depending on the type of surgery, different modalities can be used for workflow recognition. Where the addition of multiple sensors is not possible, the information available for manual surgery is generally restricted to video only; in robotic-assisted surgery, kinematic information is also available. It is expected that multimodal data will make automatic recognition easier. The "MIcro-Surgical Anastomose Workflow recognition" (MISAW) sub-challenge provides a unique dataset for online automatic recognition of surgical workflow using both kinematic and stereoscopic video information on a micro-anastomosis training task. Participants are challenged to recognize the surgical workflow online at different granularity levels (phases, steps and activities) by taking advantage of both available modalities. Participants can submit results for the recognition of one or several granularity levels; in the case of several granularities, they are encouraged (but not required) to submit the result of multi-granularity workflow recognition, i.e. recognizing the different granularity levels with a single model.

Sub-Challenge 4: SurgVisDom - Surgical visual domain adaptation: from virtual reality to real, clinical environments. Surgical data science is revolutionizing minimally invasive surgery. By developing algorithms to be context-aware, exciting applications to augment surgeons are becoming possible. However, there are many sensitivities around surgical data (or health data more generally) needed to develop context-aware models. This challenge explores the potential for visual domain adaptation in surgery to overcome data privacy concerns. In particular, we propose to use video from virtual reality simulations of clinical-like tasks to develop algorithms to recognize activities, and then to test these algorithms on videos of the same task in a clinical setting (i.e., a porcine model). 
Type Of Material Database/Collection of data 
Year Produced 2020 
Provided To Others? Yes  
Impact The EndoVis challenge series is now running yearly at MICCAI. It has led to the establishment of new data repositories, challenge benchmarking guidelines and significant growth in algorithm development for surgical and endoscopic applications. 
URL https://zenodo.org/record/3715645
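For the workflow-analysis sub-challenges, the online constraint means each frame must be labelled using only past information. The sketch below shows a minimal causal smoothing of per-frame class probabilities (illustrative only; actual challenge entries used learned temporal models):

    # Online (causal) phase labelling: smooth per-frame class probabilities
    # using only past frames. Illustrative baseline, not a challenge entry.
    import numpy as np

    def online_phases(frame_probs, alpha=0.9):
        # frame_probs: (T, C) per-frame class probabilities in temporal order.
        state = np.zeros(frame_probs.shape[1])
        labels = []
        for p in frame_probs:
            state = alpha * state + (1 - alpha) * p  # causal running average
            labels.append(int(state.argmax()))
        return labels  # one phase label per frame, no future information used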
 
Title FetReg: Placental Vessel Segmentation and Registration in Fetoscopy Challenge Dataset 
Description Fetoscopic laser photocoagulation is a widely used procedure for the treatment of Twin-to-Twin Transfusion Syndrome (TTTS), which occurs in monochorionic multiple pregnancies due to placental vascular anastomoses. This procedure is particularly challenging due to the limited field of view, poor manoeuvrability of the fetoscope, poor visibility due to fluid turbidity, variability in the light source, and the unusual position of the placenta. This may lead to increased procedural time and incomplete ablation, resulting in persistent TTTS. Computer-assisted intervention may help overcome these challenges by expanding the fetoscopic field of view through video mosaicking and providing better visualization of the vessel network. However, research and development in this domain remain limited due to the unavailability of high-quality data that encode the intra- and inter-procedure variability. Through the Fetoscopic Placental Vessel Segmentation and Registration (FetReg) challenge, we present a large-scale multi-centre dataset for the development of generalized and robust semantic segmentation and video mosaicking algorithms for the fetal environment, with a focus on creating drift-free mosaics from long-duration fetoscopy videos. In this paper, we provide an overview of the FetReg dataset, challenge tasks, evaluation metrics and baseline methods for both segmentation and registration. Baseline results on the FetReg dataset show that it poses interesting challenges, offering a large opportunity for the creation of novel methods and models through a community effort guided by the FetReg challenge. 
Type Of Material Database/Collection of data 
Year Produced 2021 
Provided To Others? Yes  
Impact This is the first such dataset to be made available for research. It is a multi-centre resource. It is supporting AI model development for fetoscopic applications. 
URL https://arxiv.org/abs/2106.05923
 
Title Surgical Visual Domain Adaptation: Results from the MICCAI 2020 SurgVisDom Challenge 
Description Surgical data science is revolutionizing minimally invasive surgery by enabling context-aware applications. However, many challenges exist around surgical data (and health data, more generally) needed to develop context-aware models. This work - presented as part of the Endoscopic Vision (EndoVis) challenge at the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2020 conference - seeks to explore the potential for visual domain adaptation in surgery to overcome data privacy concerns. In particular, we propose to use video from virtual reality (VR) simulations of surgical exercises in robotic-assisted surgery to develop algorithms to recognize tasks in a clinical-like setting. We present the performance of the different approaches to visual domain adaptation developed by challenge participants. Our analysis shows that the presented models were unable to learn meaningful motion-based features from VR data alone, but did significantly better when a small amount of clinical-like data was also made available. Based on these results, we discuss promising methods and further work to address the problem of visual domain adaptation in surgical data science. We also release the challenge dataset publicly at https://www.synapse.org/surgvisdom2020. 
Type Of Material Database/Collection of data 
Year Produced 2021 
Provided To Others? Yes  
Impact Domain adaptation can result in supporting the 3Rs by allowing AI model development on simulation datasets. 
URL https://arxiv.org/abs/2102.13644
 
Title Surgical instrument database of video 
Description Database of videos with surgical instruments within a lab setting. Useful for computer vision and image analysis development. 
Type Of Material Database/Collection of data 
Year Produced 2016 
Provided To Others? Yes  
Impact There is now an active challenge at MICCAI running on this topic. It is organised by the student, Max Allen, who developed the initial concepts. 
URL http://www.surgicalvision.cs.ucl.ac.uk/resources/benchmarking/#home
 
Description Collaboration with DeepMind Technologies - Data Efficient Reinforcement Learning - PhD Studentship - Dhruva Tirumala 
Organisation DeepMind Technologies Limited
Country United Kingdom 
Sector Private 
PI Contribution Development of RL algorithms with priors.
Collaborator Contribution Development of RL algorithms with priors.
Impact Too early.
Start Year 2020
 
Description Collaboration with Medtronic plc - Surgical Activity Recognition PhD Project - Matt Lee 
Organisation Medtronic
Department Medtronic Ltd
Country United Kingdom 
Sector Private 
PI Contribution Development of AI algorithms to understand surgical process.
Collaborator Contribution Development of data structures and pipelines for underpinning AI algorithms to understand surgical process.
Impact Too early.
Start Year 2021
 
Description Collaboration with Odin Vision Ltd 
Organisation Odin Vision
Sector Private 
PI Contribution Research and supervision of PhD students to develop novel AI technologies for endoscopy.
Collaborator Contribution Provision of infrastructure for video data processing, storage and labelling to develop AI models. Funding for PhD studentships.
Impact No outputs yet as the collaboration has just begun.
Start Year 2020
 
Description Collaboration with University of Toronto - Funded by ESRC "Self-guided Microrobotics for Automated Brain Dissection" 
Organisation University of Toronto
Country Canada 
Sector Academic/University 
PI Contribution We are developing AI systems to study microscopic images and drive micro-scale robot control for cell harvesting.
Collaborator Contribution UoT develop the optical system and the harvesting robot capabilities. They are also leading the clinical data management and translational pathway.
Impact Multi-disciplinary collaboration between computer science, engineering, chemistry and neuroscience.
Start Year 2020
 
Description Intuitive Surgical CA 
Organisation Intuitive Surgical Inc
Country United States 
Sector Private 
PI Contribution Development of algorithms and AI models to process video and other robotic data from the Intuitive Surgical system in clinical environments.
Collaborator Contribution Access to data from the Intuitive Surgical system.
Impact N/A
Start Year 2013
 
Description da Vinci Research Kit Consortium partnership 
Organisation Intuitive Surgical Inc
Country United States 
Sector Private 
PI Contribution We have contributed in the development of computer vision software and algorithms for stereoscopic endoscope video.
Collaborator Contribution Donation of equipment and exchange of knowledge and access to data.
Impact The partnership and consortium have resulted in funding from NIH, multiple papers, knowledge exchange and student engagement as well as dissemination activities.
Start Year 2016
 
Title CAPSULE ENDOSCOPY 
Description An apparatus for capsule endoscopy, the apparatus comprising a capsule that comprises: at least one inflatable bladder configured to form a toroid having a hole and an outer periphery when inflated; a plurality of continuous tracks, each extending through the hole and around the outer periphery of the at least one inflatable bladder; and a propulsion system configured to drive the continuous tracks; wherein the capsule is configured such that the continuous tracks slip over the at least one inflatable bladder when driven by the propulsion system. 
IP Reference US2021204801 
Protection Patent application published
Year Protection Granted 2021
Licensed Commercial In Confidence
Impact Too early to state.
 
Title Method And Apparatus For Estimating The Value Of A Physical Parameter In A Biological Tissue 
Description A method and apparatus are provided for estimating the value of a physical parameter of biological tissue. The method comprises acquiring a colour image of the biological tissue from a single image capture device; extracting from the colour image at least two images in respective optical wavebands having a different spectral sensitivity from one another, whereby a given location in the biological tissue is present in each of the extracted images; providing a physical model of the optical properties of the biological tissue, wherein the optical properties of the biological tissue are sensitive to the value of said physical parameter; and estimating the value of the physical parameter at said given location based on an intensity value at that location for each extracted image. The estimating utilises the physical model of the optical properties of the biological tissue and the spectral sensitivity for each respective waveband. 
IP Reference US2019320875 
Protection Patent granted
Year Protection Granted 2019
Licensed No
Impact No impacts at present.
 
Title CADDIE - Computer Aided Detection and Diagnosis for Intelligent Endoscopy 
Description CADDIE - Computer Aided Detection and Diagnosis for Intelligent Endoscopy - is an Innovate UK-funded project to put technology from Odin Vision Ltd (a UCL spin-out) into a clinical trial at UCLH. The company's first product utilises AI to assist the detection and characterisation of polyps during video colonoscopy. 
Type Support Tool - For Medical Intervention
Current Stage Of Development Early clinical assessment
Year Development Stage Completed 2019
Development Status Under active development/distribution
Impact Similar technology is being developed to support interventions in upper GI endoscopy. 
 
Title Surgical robot arm visualization software 
Description Cinder application designed to enable visualisation of robotic arms at a given configuration overlaid on a camera feed. This is particularly useful for surgical robotics, where we can estimate arm position from joint kinematic data and then render the joints in the camera's field of view. 
Type Of Technology Software 
Year Produced 2016 
Open Source License? Yes  
Impact Multiple research sites have used it for development and testing of surgical vision algorithms involving robotic instruments. 
URL http://www.surgicalvision.cs.ucl.ac.uk/resources/code/
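The core of the overlay is projecting kinematically estimated 3D joint positions into the calibrated camera view. A minimal Python sketch of that projection step follows (illustrative only; the released tool itself is a Cinder/C++ application):

    # Project 3D joint positions (already expressed in the camera frame,
    # e.g. via hand-eye calibration) onto the video feed. Illustrative sketch.
    import cv2
    import numpy as np

    def overlay_joints(image, joints_cam, K, dist=None):
        # joints_cam: (N,3) joint positions in the camera frame; K: 3x3 intrinsics.
        rvec = tvec = np.zeros(3)  # identity pose: points already in camera frame
        dist = np.zeros(5) if dist is None else dist
        pts, _ = cv2.projectPoints(joints_cam.astype(np.float64), rvec, tvec, K, dist)
        for u, v in pts.reshape(-1, 2):
            cv2.circle(image, (int(round(u)), int(round(v))), 4, (0, 255, 0), -1)
        return image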
 
Company Name ODIN MEDICAL LIMITED 
Description Odin specialises in developing computer aided diagnostic solutions for endoscopic diagnostic and therapeutic procedures. 
Year Established 2018 
Impact Too early to report key achievements.
 
Description Endoscopic Imaging and AI - Artificial Intelligence in Medicine, Enterprise Network Europe, London, UK 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact Dr. Stoyanov gave a talk at an event, co-organised with the Austrian Trade Commission, to facilitate exchange and new business opportunities between the UK and Austria. The event was well attended and involved interesting discussion on how AI will influence clinical decision making.
Year(s) Of Engagement Activity 2018
 
Description Interview for Korean national newspaper 
Form Of Engagement Activity A magazine, newsletter or online publication
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact Dr. Danail Stoyanov was interviewed by one of the largest Korean newspapers on the role of robotics in surgery and healthcare.
Year(s) Of Engagement Activity 2017
URL http://mnews.joins.com/article/2200741
 
Description Keynote talk at CARE Workshop, MICCAI 2017, Quebec, Canada 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Dr. Stoyanov gave a keynote talk at the Computer Assisted and Robotic Endoscopy (CARE) workshop held at MICCAI 2017. The talk prompted a lively discussion afterwards focusing on the future impact of endoscopic computer vision in surgery.
Year(s) Of Engagement Activity 2017
 
Description London Open House @ Here East 
Form Of Engagement Activity Participation in an open day or visit at my research institution
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact London Open House @ Here East
Year(s) Of Engagement Activity 2018
 
Description Science Museum Half Term 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Public/other audiences
Results and Impact This event was a follow-on from Science Museum Lates: talking to families about robotics and imaging in healthcare under the theme of 'Medicine' in the new 'Medicine: The Wellcome Galleries'.
Year(s) Of Engagement Activity 2020
 
Description Science Museum Lates 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Public/other audiences
Results and Impact Two teams at WEISS took part in the Science Museum Lates, talking to over 600 curious adults about robotics and imaging in healthcare under the theme of 'Medicine'. This was followed up with family half term sessions and a lunch event for older people, all in their new Medicine: The Wellcome Galleries.
Year(s) Of Engagement Activity 2020
URL https://www.ucl.ac.uk/interventional-surgical-sciences/news/2020/feb/weiss-features-science-museum-l...
 
Description Science Museum Older People Event 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact 62 people attended the lunch event on robotics and imaging in healthcare under the theme of 'Medicine', in the new 'Medicine: The Wellcome Galleries'.
Year(s) Of Engagement Activity 2020
 
Description Science of Surgery 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact Science of Surgery welcomed over 300 public visitors to WEISS on 12 April 2019 to take part in interactive activities exploring how research across the centre is advancing modern surgery and interventions.
Year(s) Of Engagement Activity 2019
URL https://www.ucl.ac.uk/interventional-surgical-sciences/science-surgery
 
Description Surgical Stand-Off 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact The first of two public digital events pitted six WEISS researchers against each other to pitch their research to 38 members of the public and a panel of six esteemed judges in an attempt to win the first Surgical Science Stand Off. The second event attracted 54 members of the public.
Year(s) Of Engagement Activity 2020
URL https://www.ucl.ac.uk/interventional-surgical-sciences/news/2020/nov/surgical-science-stand-ii
 
Description Talk at the Winter Meeting of the European Association of Endoscopic Surgeons (EAES) 2018 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Dr. Danail Stoyanov gave a talk on the application of AI in endoscopic imaging, which prompted an interesting discussion on translation and potential clinical opportunities.
Year(s) Of Engagement Activity 2018
 
Description UCL Being Human Event 'Exploring Under the Skin' 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Other audiences
Results and Impact Talks, an exhibition and an interactive exploration of how the humanities, arts and sciences have, separately and collectively, come together in the endeavour of exploring under our skin, and of our own reactions to this: from artworks, to the curiosity cabinets of the 1700s, through to the modern ways that medicine and the humanities collide.
Year(s) Of Engagement Activity 2018
URL https://beinghumanfestival.org/event/exploring-under-the-skin/