Context Aware Augmented Reality for Endonasal Endoscopic Surgery

Lead Research Organisation: University College London
Department Name: Medical Physics and Biomedical Engineering

Abstract

This project aims to develop tools to guide a surgeon during surgery to remove cancers of the pituitary gland.

Access to the pituitary gland is difficult, and one current approach is the endonasal approach, through the nose. While this approach is minimally invasive, which is better for the patient, it is technically challenging for the surgeon: it is difficult to manoeuvre the tools, and it is also difficult to maintain contextual awareness, remember the location of critical structures, and identify them.

One proposed solution is to use pre-operative scan data, such as Magnetic Resonance Imaging (MRI) or Computed Tomography (CT) scans, in conjunction with the endoscopic video. Typically, engineers have proposed "Augmented Reality", in which information from the MRI/CT scans is simply overlaid on top of the endoscopic video. However, this approach has not found favour with clinical teams, as the result is often confusing and difficult to use.

In this project we have assembled a team of surgeons and engineers to re-think the Augmented Reality paradigm from the ground up. First, the aim is to identify the most relevant information to display on-screen at each stage of the operation. Machine learning will then be used to analyse the endoscopic video and automatically identify which stage of the procedure the surgeon is working on. The guidance system will switch modes automatically, providing the most useful information for that stage. Finally, we will use machine learning techniques to automate the alignment of pre-operative data to the endoscopic video.
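To illustrate the phase-aware mode switching described above, the following is a minimal Python sketch of the guidance loop. The phase labels, overlay names, and the classify_phase stub are hypothetical placeholders, not taken from the project; a real system would replace the stub with a trained model that analyses the endoscopic video frames.

import random
from typing import List

# Hypothetical phases of an endonasal pituitary procedure (illustrative only).
PHASES = ["nasal_access", "sphenoidotomy", "sellar_exposure", "tumour_resection", "closure"]

# Hypothetical mapping from the recognised phase to the overlay shown to the surgeon.
OVERLAY_FOR_PHASE = {
    "nasal_access": "none",
    "sphenoidotomy": "carotid_arteries",
    "sellar_exposure": "carotid_arteries+optic_nerves",
    "tumour_resection": "tumour_boundary",
    "closure": "none",
}

def classify_phase(frame) -> str:
    """Placeholder for a learned phase-recognition model over endoscopic frames."""
    return random.choice(PHASES)  # stand-in for model inference

def guidance_loop(frames: List[object]) -> None:
    """Switch the displayed overlay automatically whenever the recognised phase changes."""
    current_overlay = None
    for frame in frames:
        phase = classify_phase(frame)
        overlay = OVERLAY_FOR_PHASE[phase]
        if overlay != current_overlay:
            current_overlay = overlay
            print(f"Phase '{phase}' detected -> showing overlay: {overlay}")

if __name__ == "__main__":
    guidance_loop(frames=[None] * 10)  # dummy frames for illustration

The point of the sketch is the separation of concerns: the recognised phase drives which overlay is displayed, so the recognition model and the phase-to-overlay mapping can each be refined without changing the display loop.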

The end result should be more accurate and more clinically relevant than current state-of-the-art methods, representing a genuine step change in performance for image guidance during skull-base procedures.
