
AID-PitSurg: AI-enabled Decision support in Pituitary Surgery to reduce complications

Lead Research Organisation: UNIVERSITY COLLEGE LONDON
Department Name: Computer Science

Abstract

The pituitary is a small gland at the base of the brain that produces hormones controlling several important bodily functions. Pituitary tumours are among the most common types of brain tumour; a symptomatic tumour can cause hormonal imbalances and other health problems. Transsphenoidal surgery is the gold-standard treatment for most symptomatic pituitary tumours. It is a minimally invasive procedure, performed through the nostrils and nasal sinuses, and leaves no visible scars.

Transsphenoidal surgery is challenging and high risk due to the narrow approach and the proximity of critical neurovascular structures such as the optic nerves and carotid arteries, resulting in a relatively high rate of complications. The most common complications requiring medical or surgical treatment are dysnatraemia (related to pituitary dysfunction) and post-operative cerebrospinal fluid (CSF) rhinorrhoea (related to insufficient repair of the skull base). These complications lead to increased hospitalisation and longer recovery times, and carry a risk of life-threatening conditions.

To reduce the risk of these complications, this research project aims to develop a real-time Artificial Intelligence (AI) assisted decision support framework that can understand the surgical procedure, predict surgical errors and identify intraoperative causes of complications. The AI model will recognise surgical steps, detect surgical instruments, and identify specific instrument-tissue interactions during the sellar phase (for dysnatraemia) and closure phase (for CSF rhinorrhoea) of the surgery. The framework will use multimodal data, including pre- and post-operative clinical data and surgical scene perception, to predict and alert the surgeon of any surgical errors and potential post-operative complications in real-time.

By developing this framework, the project aims to improve surgical outcomes by reducing the frequency of post-operative complications, shortening the length of hospital stays, and improving patients' recovery.

Publications

 
Description The project has just entered its sixth month of funding. The main objective of our project is to improve surgical outcomes through the use of AI in minimally invasive brain tumour surgery.
Our clinical team is establishing a vocabulary for annotating surgical videos for surgeon skill analysis. Our hypothesis is that this will enable indication of potential errors during the most critical parts of brain tumour surgery.
Our engineering team is working closely with the industrial partners to establish a prototype framework for real-time integration of AI algorithms into surgical settings. We are also actively developing state-of-the-art AI algorithms for surgical context understanding.
Exploitation Route With the development of AI algorithms for complete surgical context understanding and their integration into surgical settings, surgical outcomes can be improved, reducing the cognitive load on the surgeon during surgery and significantly reducing patients' post-surgical care costs. Through this funding, we will develop this AI system and demonstrate its use through a pre-clinical trial. Beyond the scope of this project, we are establishing connections with multiple neurosurgical centres worldwide to gather data and achieve model generalisation. Our ambition is to integrate AI into surgical settings within the next 3-4 years, for which this project is of significant importance in enabling the development of the proof-of-concept AI system.
Sectors Healthcare

 
Title Endoscopic Pituitary Surgery on a High-fidelity Bench-top Phantom 
Description The first public dataset containing both instrument and surgical skill assessment annotations in a high-fidelity bench-top phantom (www.store.upsurgeon.com/products/tnsbox/) of the nasal phase of the endoscopic TransSphenoidal Approach (eTSA). The dataset includes 15 videos ({video_number}.mp4), the corresponding mOSATS scores with level of surgical expertise (mOSATS.csv), and instrument segmentation annotations (annotations.csv). The companion paper with baseline results is titled: "Automated Surgical Skill Assessment in Endoscopic Pituitary Surgery using Real-time Instrument Tracking on a High-fidelity Bench-top Phantom" (Adrito Das et al., in press). Please cite this paper if you have used this dataset. 
Type Of Material Database/Collection of data 
Year Produced 2024 
Provided To Others? Yes  
Impact We are the first to release a public dataset on this topic, creating an opportunity for the research community to overcome the challenges in this research field. 
URL https://rdr.ucl.ac.uk/articles/dataset/Endoscopic_Pituitary_Surgery_on_a_High-fidelity_Bench-top_Pha...
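Loading the dataset's CSV annotations could look like the following minimal Python sketch. The file name mOSATS.csv follows the naming scheme described above, but the column names here are assumptions, illustrated on an inline sample rather than the real file.

```python
import csv
import io

# Hypothetical excerpt of mOSATS.csv: one row per video with an overall
# skill score and the annotated level of surgical expertise.
# The column names are assumptions based on the dataset description.
SAMPLE_MOSATS = """video_number,mosats_score,expertise_level
01,27,novice
02,41,expert
"""

def load_mosats(text):
    """Parse mOSATS rows into {video_number: (score, expertise)}."""
    reader = csv.DictReader(io.StringIO(text))
    return {
        row["video_number"]: (int(row["mosats_score"]), row["expertise_level"])
        for row in reader
    }

scores = load_mosats(SAMPLE_MOSATS)
print(scores["02"])  # (41, 'expert')
```

The same pattern would apply to annotations.csv, swapping in whatever segmentation columns the release actually uses.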
 
 
Title PitSurgRT: real-time localization of critical anatomical structures in endoscopic pituitary surgery 
Description Endoscopic pituitary surgery entails navigating through the nasal cavity and sphenoid sinus to access the sella using an endoscope. This procedure is intricate due to the proximity of crucial anatomical structures (e.g. the carotid arteries and optic nerves) to pituitary tumours, and any unintended damage can lead to severe complications including blindness and death. Intraoperative guidance during this surgery could support improved localization of the critical structures, thereby reducing the risk of complications. A deep learning network, PitSurgRT, is proposed for real-time localization of critical structures in endoscopic pituitary surgery. The network uses a high-resolution net (HRNet) as a backbone with a multi-head design for jointly localizing critical anatomical structures while simultaneously segmenting larger structures. Moreover, the trained model is optimized and accelerated using TensorRT. Finally, the model predictions are shown to neurosurgeons to test their guidance capabilities. Compared with the state-of-the-art method, our model significantly reduces the mean error in landmark detection of the critical structures from 138.76 to 54.40 pixels in a 1280 x 720-pixel image. Furthermore, the semantic segmentation of the most critical structure, the sella, is improved by 4.39% IoU. The inference speed of the accelerated model reaches 298 frames per second with floating-point-16 precision. In a study of 15 neurosurgeons, 88.67% of predictions were considered accurate enough for real-time guidance. 
Type Of Material Computer model/algorithm 
Year Produced 2024 
Provided To Others? Yes  
Impact The proposed method is highly promising in providing real-time intraoperative guidance of the critical anatomical structures in endoscopic pituitary surgery. 
URL https://link.springer.com/article/10.1007/s11548-024-03094-2
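The headline metric above (mean landmark-detection error in pixels) can be illustrated with a toy sketch: decode each predicted heatmap by taking its peak response and average the Euclidean distance to the ground-truth landmark. The data here is synthetic, and the argmax decoding is a simplification of whatever post-processing the paper's pipeline actually uses.

```python
import math

def decode_heatmap(heatmap):
    """Return (x, y) of the peak response in a 2D heatmap (list of rows)."""
    best, best_xy = float("-inf"), (0, 0)
    for y, row in enumerate(heatmap):
        for x, v in enumerate(row):
            if v > best:
                best, best_xy = v, (x, y)
    return best_xy

def mean_pixel_error(heatmaps, ground_truth):
    """Average Euclidean distance between decoded peaks and true landmarks."""
    errors = []
    for hm, (gx, gy) in zip(heatmaps, ground_truth):
        px, py = decode_heatmap(hm)
        errors.append(math.hypot(px - gx, py - gy))
    return sum(errors) / len(errors)

# Two tiny synthetic heatmaps with peaks at (2, 1) and (0, 0).
hms = [
    [[0.1, 0.2, 0.1],
     [0.0, 0.3, 0.9],
     [0.1, 0.1, 0.2]],
    [[0.8, 0.1],
     [0.2, 0.1]],
]
truth = [(2, 1), (1, 0)]  # second landmark is one pixel off the peak
print(mean_pixel_error(hms, truth))  # 0.5
```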
 
Title PitVQA: A Dataset of Visual Question Answering in Pituitary Surgery 
Description Visual Question Answering (VQA) within the surgical domain, utilising Large Language Models (LLMs), offers a distinct opportunity to improve intra-operative decision-making and facilitate intuitive surgeon-AI interaction. However, the development of LLMs for surgical VQA is hindered by the scarcity of diverse and extensive datasets with complex reasoning tasks. Moreover, contextual fusion of the image and text modalities remains an open research challenge due to the inherent differences between these two types of information and the complexity involved in aligning them. This paper introduces PitVQA, a novel dataset specifically designed for VQA in endonasal pituitary surgery, and PitVQA-Net, an adaptation of GPT-2 with a novel image-grounded text embedding for surgical VQA. PitVQA comprises 25 procedural videos and a rich collection of question-answer pairs spanning crucial surgical aspects such as phase and step recognition, context understanding, tool detection and localization, and tool-tissue interactions. PitVQA-Net consists of a novel image-grounded text embedding that projects image and text features into a shared embedding space, and a GPT-2 backbone with an excitation-block classification head to generate contextually relevant answers within the complex domain of endonasal pituitary surgery. Our image-grounded text embedding leverages joint embedding, cross-attention and contextual representation to understand the contextual relationship between questions and surgical images. We demonstrate the effectiveness of PitVQA-Net on both the PitVQA and the publicly available EndoVis18-VQA dataset, achieving improvements in balanced accuracy of 8% and 9% over the most recent baselines, respectively. Our PitVQA dataset comprises 25 videos of endoscopic pituitary surgeries from the National Hospital of Neurology and Neurosurgery in London, United Kingdom, similar to the dataset used in the MICCAI PitVis challenge. 
All patients provided informed consent, and the study was registered with the local governance committee. The surgeries were recorded using a high-definition endoscope (Karl Storz Endoscopy) with a resolution of 720p and stored as MP4 files. All videos were annotated for the surgical phases, steps, instruments present and operation notes guided by a standardised annotation framework, which was derived from a preceding international consensus study on pituitary surgery workflow. Annotation was performed collaboratively by 2 neurosurgical residents with operative pituitary experience and checked by an attending neurosurgeon. We extracted image frames from each video at 1 fps and removed any frames that were blurred or occluded. Ultimately, we obtained a total of 109,173 frames, with the videos of minimum and maximum length yielding 2,443 and 7,179 frames, respectively. We acquired frame-wise question-answer pairs for all the categories of the annotation. Overall, there are 884,242 question-answer pairs from 109,173 frames, which is around 8 pairs for each frame. There are 59 classes overall, including 4 phases, 15 steps, 18 instruments, 3 variations of instruments present in a frame, 5 positions of the instruments, and 14 operation notes in the annotation classes. The length of the questions ranges from a minimum of 7 words to a maximum of 12 words. 
Type Of Material Database/Collection of data 
Year Produced 2024 
Provided To Others? Yes  
Impact We are the first to release a public dataset on this topic, creating an opportunity for the research community to overcome the challenges in this research field. 
URL https://rdr.ucl.ac.uk/articles/dataset/PitVQA_A_Dataset_of_Visual_Question_Answering_in_Pituitary_Su...
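The 1 fps frame-sampling step described above reduces to simple index arithmetic. A minimal sketch, assuming a 25 fps source recording (an assumption for illustration; the description does not state the capture frame rate), together with a sanity check on the quoted dataset statistics:

```python
def one_fps_indices(total_frames, source_fps):
    """Indices of the frames to keep when sampling a video at 1 fps."""
    return list(range(0, total_frames, source_fps))

# e.g. a 10-second clip at an assumed 25 fps -> 10 sampled frames
indices = one_fps_indices(total_frames=250, source_fps=25)
print(len(indices))   # 10
print(indices[:3])    # [0, 25, 50]

# Sanity check against the statistics quoted above: ~884,242
# question-answer pairs over 109,173 frames is about 8 per frame.
pairs_per_frame = 884_242 / 109_173
print(round(pairs_per_frame, 1))  # 8.1
```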
 
 
 
Title PitVis-2023 Challenge: Endoscopic Pituitary Surgery videos 
Description The first public dataset containing both step and instrument annotations of the endoscopic TransSphenoidal Approach (eTSA). The dataset includes 25 videos (video_{video_number}.mp4) and the corresponding step and instrument annotations (annotations_{video_number}.csv). Annotation metadata mapping each numerical value to its formal description is provided (map_steps.csv and map_instrument.csv), as well as video metadata (video_encoder_details.txt). Helpful scripts and baseline models can be found on: https://github.com/dreets/pitvis. This dataset is released as part of the PitVis Challenge, a sub-challenge of the EndoVis Challenge hosted at the annual MICCAI conference (Vancouver, Canada on 06-Oct-2024). More details about the challenge can be found on the challenge website: https://www.synapse.org/Synapse:syn51232283/wiki/621581. The companion paper with comparative models is titled: "PitVis-2023 Challenge: Workflow Recognition in videos of Endoscopic Pituitary Surgery" (Adrito Das et al.). Please cite this paper if you have used this dataset: https://arxiv.org/abs/2409.01184. 
Type Of Material Database/Collection of data 
Year Produced 2024 
Provided To Others? Yes  
Impact We are the first to release a public dataset on this topic, creating an opportunity for the research community to overcome the challenges in this research field. 
URL https://rdr.ucl.ac.uk/articles/dataset/PitVis_Challenge_Endoscopic_Pituitary_Surgery_videos/26531686...
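Joining the per-frame numeric annotations with the map files described above might look like the following sketch. The file names (map_steps.csv, annotations_{video_number}.csv) come from the dataset description, but the column names and step labels below are assumptions, shown on inline samples.

```python
import csv
import io

# Hypothetical excerpts of map_steps.csv and annotations_{video_number}.csv.
# Column names and step names are assumed for illustration only.
MAP_STEPS = """step_id,step_name
0,nasal_corridor_creation
1,anterior_sphenoidotomy
"""
ANNOTATIONS = """frame,step_id
0,0
1,0
2,1
"""

def load_step_map(text):
    """Map numeric step IDs to their formal descriptions."""
    reader = csv.DictReader(io.StringIO(text))
    return {int(r["step_id"]): r["step_name"] for r in reader}

def label_frames(annotations_text, step_map):
    """Replace each frame's numeric step ID with its readable name."""
    reader = csv.DictReader(io.StringIO(annotations_text))
    return {int(r["frame"]): step_map[int(r["step_id"])] for r in reader}

steps = load_step_map(MAP_STEPS)
frames = label_frames(ANNOTATIONS, steps)
print(frames[2])  # anterior_sphenoidotomy
```

The same join would work for map_instrument.csv against the instrument column of the annotation files.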
 
 
Description Partnership with Medtronic (Digital Surgery) 
Organisation Medtronic
Department Medtronic Ltd
Country United Kingdom 
Sector Private 
PI Contribution Both parties are actively working together on the annotation of the surgical data for developing AI algorithms. Our clinical team from the UCL Queen Square Institute of Neurology has been developing a vocabulary for surgeon action annotations, agreed through consensus with consultants from other clinical sites as well. The annotation vocabulary will then be used for annotating data on the Medtronic annotation platform.
Collaborator Contribution Medtronic has been providing a cloud-based platform for data organisation and annotation. So far they have supported annotation of surgical videos for surgical steps, phases, and surgical instruments. Our clinical data is securely stored, organised and annotated using Medtronic platforms provided in kind.
Impact This partnership has supported surgical step, phase, anatomy and instrument annotations on over 100 surgical videos. This has led to publications as reported in the publications section.
Start Year 2019
 
Description Partnership with NVIDIA - the providers of embedded computing device for clinical translation 
Organisation NVIDIA
Country Global 
Sector Private 
PI Contribution Since the start of the research project, our team has been working closely in partnership with NVIDIA to set up real-time inference on an NVIDIA embedded medical device. Within just 6 months into the project (as of April 2024), our team has successfully integrated prototype algorithms on the embedded device with NVIDIA's help.
Collaborator Contribution Since the start of the research project, NVIDIA has been involved, providing in-kind support for translating our software solutions onto NVIDIA embedded devices for real-time inference and a pre-clinical study. Both parties meet biweekly to discuss progress on integration and to request NVIDIA's support on any open integration issues.
Impact Prototype algorithms have been integrated on the NVIDIA embedded device. This is the first step towards enabling preclinical trials by the end of this project.
Start Year 2023
 
Description AI in neurosurgery: A new era for precision medicine? - Media coverage by AA (Turkish News Agency) 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Media (as a channel to the public)
Results and Impact The project was covered by AA (Turkish News Agency) in Feb 2024. With the report appearing on both TV and in print media, significant outreach was generated.

Links with the report:
https://www.aa.com.tr/en/health/ai-in-neurosurgery-a-new-era-for-precision-medicine/3157344
https://azertag.az/en/xeber/ai_in_neurosurgery_a_new_era_for_precision_medicine-2945751
Year(s) Of Engagement Activity 2024
URL https://www.aa.com.tr/en/health/ai-in-neurosurgery-a-new-era-for-precision-medicine/3157344
 
Description Engagement video on - How can AI aid Endoscopic Pituitary Surgery? 
Form Of Engagement Activity Engagement focused website, blog or social media channel
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact As part of this project, we published an awareness video on the role of AI in surgery also reporting the progress on the project and our long-term vision.
Year(s) Of Engagement Activity 2025
URL https://youtu.be/RgLwDHfJOMg?si=3gyfOh10qh3UD--g
 
Description How AI could help make brain surgery safer - Reuters news agency 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact Reuters is a worldwide news agency covering over 200 countries. Our project was covered by Reuters in Oct 2024, showcasing our developments and plans for using AI to improve surgical outcomes in brain tumour surgery. The media recording and article, and their derivatives, reached various international news channels and social media platforms. This attracted a significant spotlight to our project and helped create awareness among the general public about its impact.
Year(s) Of Engagement Activity 2023
URL https://youtu.be/RQxes3TVAms?feature=shared
 
Description Invited Talk in AI for Surgery at the KCL Summer workshop on Interventional and Surgical Engineering 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Sophia Bano (PI on the project) was invited to give a talk on AI for Surgery at the KCL Summer Workshop on Interventional and Surgical Engineering 2024
https://www.kcl.ac.uk/short-courses/summer-workshop-on-surgical-and-interventional-engineering
The attendees were mainly PhD students, postdoctoral researchers and industrial representatives.
Year(s) Of Engagement Activity 2024
 
Description Invited Talk in AI for Surgery at the Multiscale Medical Robotics Centre Symposium in Hong Kong 2024 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Sophia Bano (PI on the project) was invited to give a talk on AI for Surgery at the Multiscale Medical Robotics Centre Symposium in Hong Kong 2024.
Year(s) Of Engagement Activity 2024
 
Description Invited talk on Future of Surgery: AI-assisted Interventions 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Invited talk at the ICRA 2024 Robot-Assisted Medical Imaging (RAMI) workshop - https://sites.google.com/view/rami-icra-2024-workshop/program
The talk attracted interest in collaborations and PhD studentships.
Year(s) Of Engagement Activity 2024
URL https://sites.google.com/view/rami-icra-2024-workshop/program
 
Description Keynote: Recent Advances in Surgical AI for Next Generation Interventions 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Sophia Bano (PI on the project) was invited as a keynote speaker at the German Conference on Medical Image Computing (BVM 2024) held in Erlangen, Germany on 9-11th March 2024. The conference was attended by over 300 postgraduate and PhD students and industrial partners. The talk covered recent developments of AI in surgery and received an overwhelming response from the audience.
Year(s) Of Engagement Activity 2024
URL https://www.bvm-workshop.org
 
Description Live Demo on AI for Pituitary Surgery at the HRH The Princess Royal visit to UCL East campus 
Form Of Engagement Activity Participation in an open day or visit at my research institution
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Policymakers/politicians
Results and Impact On 20th Feb 2025, HRH The Princess Royal visited the UCL East campus, where our project team had the opportunity to showcase our project. Her Royal Highness showed great interest in the technology under development as part of this project.
Year(s) Of Engagement Activity 2025
URL https://www.ucl.ac.uk/news/2025/feb/hrh-princess-royal-visits-ucls-new-campus
 
Description Pituitary Patient Research Open Day 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Patients, carers and/or patient groups
Results and Impact On 19th Oct 2024, our team organised the Queen Square Pituitary Open Day to create awareness about the research being carried out on this topic. It attracted around 50 patients and carers, helping us gather feedback from end users about our technology.
Year(s) Of Engagement Activity 2024
 
Description Recent Advances in Surgical AI for Next Generation Interventions 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact An invited talk was delivered by Sophia Bano (PI on the AID-PitSurg project) at the Olympus Global Digital Academy, an event broadcast to the entire Olympus company and hence attended by over 20,000 employees.
Recent trends in Artificial Intelligence (AI) and surgical science have revolutionized the field of surgery, paving the way for a new era of AI-assisted robotic interventions. These cutting-edge technologies offer tremendous potential to enhance imaging, surgical navigation, and robotic interventions, ultimately reducing the cognitive load on surgeons and optimizing procedural efficiency. The talk highlighted AI applications in different surgical procedures and where we stand in terms of their clinical translation towards the next generation of surgical intervention.
Year(s) Of Engagement Activity 2023
 
Description Safer brain surgery using AI possible within two years - BBC exclusive TV and radio report 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact On 28th September 2023, the BBC produced an exclusive report on our AI for healthcare project for safer brain surgery. The report first aired on the BBC Breakfast morning show and reached a worldwide audience, creating widespread exposure for the project.
Year(s) Of Engagement Activity 2023
URL https://www.bbc.com/news/health-66921926
 
Description Science of Surgery - demo on Pituitary Surgery - 2025 
Form Of Engagement Activity Participation in an open day or visit at my research institution
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Schools
Results and Impact At the annual UCL Science of Surgery event on 11th April 2025, our project team presented a demo on the role of AI in pituitary surgery, showcasing the live surgery experience and its challenges on a phantom. The demo created awareness of the technology among the general public.
Year(s) Of Engagement Activity 2025
URL https://www.ucl.ac.uk/intervention...
 
Description Secretary of State - Department of Science, Innovation and Technology visit to learn developments of the AID-PitSurg project 
Form Of Engagement Activity A press release, press conference or response to a media enquiry/interview
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Policymakers/politicians
Results and Impact In August 2023, Michelle Donelan, Secretary of State for the Department of Science, Innovation and Technology, visited the Wellcome / EPSRC Centre for Interventional and Surgical Sciences at UCL to see our project, AID-PitSurg, which aims to use AI to improve surgical outcomes for one of the most common types of brain tumour. This research will help to avoid complications and reduce patient recovery time.

The visit marked the announcement of £13 million of UKRI funding for 22 AI for health research projects and was covered by both print and broadcast media in the UK.
Mentioned below are media links that covered this visit:
https://www.linkedin.com/posts/scitechgovuk_13m-of-funding-for-22-ai-healthcare-projects-activity-7095351209313607680-OXNC/?utm_source=share&utm_medium=member_android
https://twitter.com/michelledonelan/status/1689593924287905793?t=0rG2D-yHMTdJTPVw9ElMEw&s=19
https://youtu.be/HMdUP1v22rY?feature=shared
https://www.ucl.ac.uk/news/2023/aug/science-minister-pledges-ps13m-ai-research-healthcare-during-visit-ucl
https://www.ukri.org/news/13-million-for-22-ai-for-health-research-projects/
Year(s) Of Engagement Activity 2023
URL https://www.ukri.org/news/13-million-for-22-ai-for-health-research-projects/
 
Description Surgical Data Science talk at the CS Summer School for 17 year old students 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Schools
Results and Impact Our team participated in and gave a talk at the CS Summer School held from 12-16 Aug 2024. School students aged 16 to 17 interested in careers in STEM attended the summer school. The event created awareness of topics such as AI in surgery and how it is transforming the field.
Year(s) Of Engagement Activity 2024
 
Description UCL WEISS Science of Surgery public engagement event - Display of Endonasal Surgery demo (https://www.ucl.ac.uk/interventional-surgical-sciences/events/2024/apr/science-surgery-friday-12th-april-2024) 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Public/other audiences
Results and Impact On April 12, 2024, the project team participated in the UCL WEISS Science of Surgery public engagement event (https://www.ucl.ac.uk/interventional-surgical-sciences/events/2024/apr/science-surgery-friday-12th-april-2024). The team presented a demo and live activity on an endonasal surgery phantom, explaining the challenges involved in this procedure and how Artificial Intelligence (AI) is contributing to overcoming them. More than 500 members of the general public, including schools, attended the event and were fascinated by the role of AI in improving surgical outcomes. The aim of this event was to create general public awareness about the science behind surgery.
Year(s) Of Engagement Activity 2024
URL https://www.ucl.ac.uk/interventional-surgical-sciences/events/2024/apr/science-surgery-friday-12th-a...