Causality in Healthcare AI Hub

Lead Research Organisation: University of Edinburgh
Department Name: Sch of Engineering

Abstract

The current AI paradigm at best reveals correlations between model input and output variables. This falls short of addressing health and healthcare challenges, where knowing the causal relationship between interventions and outcomes is both necessary and desirable. In addition, biases and vulnerabilities arise in AI systems because models may pick up unwanted, spurious correlations from historical data, widening already existing health inequalities.
Causal AI is the key to unlocking robust, responsible and trustworthy AI and to transforming challenging tasks such as early prediction, diagnosis and prevention of disease.

The Causality in Healthcare AI with Real Data (CHAI) Hub will bring together academia, industry, healthcare, and policy stakeholders to co-create the next generation of world-leading artificial intelligence solutions that can predict outcomes of interventions and help choose personalised treatments, thus transforming health and healthcare. The CHAI Hub will develop novel methods to identify and account for causal relationships in complex data. The Hub will be built by the community for the community, amassing experts and stakeholders from across the UK to
1) push the boundaries of AI innovation;
2) develop cutting-edge solutions that drive desperately needed efficiency in resource-constrained healthcare systems; and
3) cement the UK's standing as a next-gen AI superpower.

The data complexity of heterogeneous and distributed environments such as healthcare exacerbates the risks of bias and vulnerability and introduces additional challenges that must be addressed. Modern clinical investigations need to mix structured and unstructured data sources (e.g. patient health records and medical imaging exams), which current AI cannot integrate effectively. These gaps in current AI technology must be addressed in order to develop algorithms that can help to better understand disease mechanisms, predict outcomes and estimate the effects of treatments, and to ensure the safe and responsible use of AI in personalised decision-making.

Causal AI has the potential to unearth novel insights from observational data, formalise treatment effects, assess outcome likelihood, and estimate 'what-if' scenarios. Incorporating causal principles is critical for delivering on the National AI Strategy to ensure that AI is technically and clinically safe, transparent, fair and explainable.
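The record does not specify CHAI's methods, but the gap between correlational and causal estimates that this paragraph describes can be sketched with a minimal synthetic example. The variables below (age as a confounder, a binary treatment, a continuous outcome) are hypothetical, chosen only to show that a naive difference in means is biased by confounding while a simple backdoor adjustment (stratifying on the confounder) recovers the true effect:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Synthetic confounding: age drives both treatment assignment and outcome,
# so the naive difference in group means is biased.
age = rng.normal(60, 10, n)
p_treat = 1 / (1 + np.exp(-(age - 60) / 10))   # older patients treated more often
treated = rng.random(n) < p_treat
# True causal effect of treatment on the outcome: -2.0
outcome = 0.1 * age - 2.0 * treated + rng.normal(0, 1, n)

# Naive correlational estimate: difference in means between groups (biased)
naive = outcome[treated].mean() - outcome[~treated].mean()

# Backdoor adjustment: stratify on the confounder and average the
# within-stratum treated-vs-control differences, weighted by stratum size.
edges = np.quantile(age, np.linspace(0, 1, 21)[1:-1])
bins = np.digitize(age, edges)
effects, weights = [], []
for b in np.unique(bins):
    m = bins == b
    if treated[m].any() and (~treated)[m].any():
        effects.append(outcome[m & treated].mean() - outcome[m & ~treated].mean())
        weights.append(m.sum())
adjusted = np.average(effects, weights=weights)

print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")  # adjusted ≈ -2.0
```

On this synthetic data the naive estimate understates the benefit of treatment (because treated patients are older and older patients have worse outcomes), while the adjusted estimate sits close to the true effect of -2.0.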

The CHAI Hub will be formed by a founding consortium of powerhouses in AI, healthcare, and data science throughout the UK, in a hub-and-spoke model with geographic reach and diversity. The hub will be based in Edinburgh's Bayes Centre (leveraging world-class expertise in AI, data-driven innovation in health applications, a robust health data ecosystem, entrepreneurship, and translation). Regional spokes will be in Manchester (expertise in both methods and translation of AI through the Institute for Data Science and AI, and the Pankhurst Institute), London (hosted at KCL, also representing UCL and Imperial, leveraging London's rapidly growing AI ecosystem) and Exeter (leveraging strengths in the philosophy of causal inference and the ethics of AI).

The hub will develop a UK-wide multidisciplinary network for causal AI. Through extended collaborations with industry, policymakers and other stakeholders, we will expand the hub to deliver next-gen causal AI where it is needed most. We will work together to co-create, moving beyond co-ideation and co-design to co-implementation and co-evaluation where appropriate, to ensure fit-for-purpose solutions.

Our programme will be flexible, will embed trusted, responsible innovation and environmental sustainability considerations, will ensure that equality, diversity and inclusion principles are reflected through all activities, and will ensure that knowledge generated through CHAI continues to have real-world impact beyond the initial 60 months.

Organisations

People

ORCID iD

Sotirios Tsaftaris (Principal Investigator) http://orcid.org/0000-0002-8795-9294
Niccolò Tempini (Co-Investigator) http://orcid.org/0000-0002-5100-5376
Sjoerd Beentjes (Co-Investigator) http://orcid.org/0000-0002-7998-4262
Niels Peek (Co-Investigator) http://orcid.org/0000-0002-6393-9969
Anita Grigoriadis (Co-Investigator) http://orcid.org/0000-0003-3434-201X
Bruce Guthrie (Co-Investigator) http://orcid.org/0000-0003-4191-4880
Sohan Seth (Co-Investigator)
Ava Khamseh (Co-Investigator) http://orcid.org/0000-0001-5203-2205
Hui Guo (Co-Investigator)
Yulan He (Co-Investigator)
Eliana Vasquez Osorio (Co-Investigator) http://orcid.org/0000-0003-0741-994X
Henry Gouk (Co-Investigator) http://orcid.org/0000-0002-0924-2933
Stephan Guttinger (Co-Investigator) http://orcid.org/0000-0001-9448-973X
William Whiteley (Co-Investigator) http://orcid.org/0000-0002-4816-8991
Karla Diaz-Ordaz (Co-Investigator) http://orcid.org/0000-0003-3155-1561
Ewen Harrison (Co-Investigator) http://orcid.org/0000-0002-5018-3066
Ginny Russell (Co-Investigator) http://orcid.org/0000-0002-6440-1167
Elliot Crowley (Co-Investigator)
John Kenneth Baillie (Co-Investigator) http://orcid.org/0000-0001-5258-793X
Javier Escudero Rodriguez (Co-Investigator) http://orcid.org/0000-0002-2105-8725
Ben Glocker (Co-Investigator)
Matthew Sperrin (Co-Investigator)
Kianoush Nazarpour (Co-Investigator) http://orcid.org/0000-0003-4217-0254
Yingzhen Li (Co-Investigator) http://orcid.org/0000-0001-6938-2375
Subramanian Ramamoorthy (Co-Investigator)
Hana Chockler (Co-Investigator)
Daniel Alexander (Co-Investigator)
Sabina Leonelli (Co-Investigator)
Ricardo Silva (Co-Investigator)
Ian Simpson (Co-Investigator) http://orcid.org/0000-0003-0495-7187
Neil Chue Hong (Researcher) http://orcid.org/0000-0002-8876-7606

Publications

 
Description FUTURE-AI: international consensus guideline for trustworthy and deployable AI in healthcare
Geographic Reach Multiple continents/international 
Policy Influence Type Participation in a guidance/advisory committee
URL https://future-ai.eu/
 
Description Presentation at the Life Sciences Scotland Industry Leadership Group at Scottish Parliament
Geographic Reach National 
Policy Influence Type Participation in a guidance/advisory committee
 
Description STANDING Together: STANdards for data Diversity, INclusivity and Generalisability
Geographic Reach Multiple continents/international 
Policy Influence Type Participation in a guidance/advisory committee
Impact STANDING Together recommendations are highlighted by regulators in multiple countries as best practices.
URL https://www.datadiversity.org/
 
Description The Regulatory Horizons Council event
Geographic Reach National 
Policy Influence Type Contribution to a national consultation/review
 
Title Benchmarking Counterfactual Image Generation 
Description This repository contains the code for the paper "Benchmarking Counterfactual Image Generation" 
Type Of Material Computer model/algorithm 
Year Produced 2024 
Provided To Others? Yes  
Impact Nothing to report yet. 
URL https://gulnazaki.github.io/counterfactual-benchmark/
 
Title CHARIOT - cardiovascular prevention causal prediction model 
Description This tool allows individuals to estimate their future risk of cardiovascular events under interventions. Currently statins and antihypertensives are supported; lifestyle interventions will be added soon.
Type Of Material Computer model/algorithm 
Year Produced 2025 
Provided To Others? Yes  
Impact In discussions with industrial partners regarding evaluating and implementing the approaches in patient facing health records, and clinical practice. 
URL https://alexpate30.shinyapps.io/chariot_prototype_shiny2/
 
Title Counterfactual contrastive learning 
Description This repository contains the code for the papers "Counterfactual contrastive learning: robust representations via causal image synthesis" and extended version "Robust representations for image classification via counterfactual contrastive learning". 
Type Of Material Computer model/algorithm 
Year Produced 2024 
Provided To Others? Yes  
Impact Nothing to report yet. 
URL https://github.com/biomedia-mira/counterfactual-contrastive
 
Title Dual Risk Minimization 
Description A method for learning and performing automated classification across data sources with varying relationships, trading off robustness to major unanticipated variability against making the most of the available data.
Type Of Material Computer model/algorithm 
Year Produced 2024 
Provided To Others? Yes  
Impact Nothing to report yet. 
URL https://github.com/vaynexie/DRM
 
Title Structured Learning of Compositional Sequential Interventions 
Description Code for a predictive model of effects of combinations of interventions over time in sequential data. 
Type Of Material Computer model/algorithm 
Year Produced 2024 
Provided To Others? Yes  
Impact Many organisations track the progress of units over time and expose them to interventions intended to guide their behaviour. We have described the ideas and outcomes of this project to collaborators at Spotify, who were acknowledged in the companion paper.
URL https://github.com/jialin-yu/CSI-VAE
 
Description CHAI-Canon Medical Research Europe 
Organisation Canon
Department Canon Medical Research Europe
Country United Kingdom 
Sector Academic/University 
PI Contribution The partnership is at an early stage. Two PhD studentships are in scope, which are currently being recruited for. The focus is causal AI for medical imaging.
Collaborator Contribution Too early to specify.
Impact Not yet relevant.
Start Year 2025
 
Description Causal AI in Healthcare workshop 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Professional Practitioners
Results and Impact In collaboration with the Exeter Health Analytics research network, we organised a workshop to introduce the local community of the University of Exeter to the CHAI Hub. The event included talks by co-Is Sperrin (Manchester), Silva (UCL) and Li (ICL), and grant application runway activity. Multiple teams have since reported that they are planning to apply to CHAI's pump-priming fund.
Year(s) Of Engagement Activity 2025
URL https://www.chai.ac.uk/events
 
Description Causality in Science and AI meeting series 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Other audiences
Results and Impact In collaboration with the Institute for Data Science and Artificial Intelligence of the University of Exeter and the Egenis Research Centre, we organised a five-meeting series on the topic of Causality in Science and AI, including a talk by Dr Sander Beckers (Cornell). The series has drawn an audience from heterogeneous networks and disciplines, including academics connecting remotely from the EU.
Year(s) Of Engagement Activity 2025
URL https://www.exeter.ac.uk/research/centres/egenis/activities/specialinterestgroups/#a8
 
Description Panel Discussion - Pioneering AI in multiple long term conditions conference, Sep 2024 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Patients, carers and/or patient groups
Results and Impact Contributed a brief presentation and served as a panel member in the discussion 'Can AI, data science & digital innovation transform MLTC Health & Social Care research?'. There was a broad mix of patients, practitioners, researchers and industry in attendance.
Year(s) Of Engagement Activity 2024
URL https://www.turing.ac.uk/events/pioneering-ai-mltc-bridging-research-and-practice-conference-2024
 
Description Panel discussion (RCR-NHS Global AI Conference) 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Ben Glocker chaired a session and panel discussion on the topic of 'Transparency and bias' under the 'Stream 3: Governance and regulation' at the RCR-NHS Global AI Conference 2025.
Year(s) Of Engagement Activity 2025
URL https://rcraiconference.com/2025/pages/stream3_session2
 
Description Panel member, Causal Representation Learning workshop 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact A panel discussion among experts on the state and future directions of a major area within causal models in AI. Audience participation and questions helped to frame possible research programmes.
Year(s) Of Engagement Activity 2024
URL https://crl-community.github.io/neurips24
 
Description Pixel Pandemonium at the European Congress of Radiology 2025 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact This interactive tech demo showcases cutting-edge research in generative AI for high-fidelity medical image synthesis, allowing users to explore AI-generated counterfactual images, including chest radiographs and mammograms. With a press of a button, users can simulate 'what-if' scenarios: What would this patient's scan look like with and without disease? How would the image characteristics change if it had been acquired with a different scanner? What would this mammogram look like if the breast tissue had a different density? These imagined images have potential uses for stress-testing AI models and improving their robustness, reliability and fairness.
Year(s) Of Engagement Activity 2025
URL https://connect.myesr.org/ecr-2025/pixel-pandemonium/?tabref=1
 
Description Seminar talk, University of Toronto 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Postgraduate students
Results and Impact A one-day visit to the University of Toronto as the first external speaker in the series "Causal Inference: Bringing together data science and causal inference for better policy recommendations". Had several individual face-to-face meetings with faculty from Statistics, Computer Science, Economics and Business at UofT. Made new connections for CHAI with researchers working on AI in healthcare.
Year(s) Of Engagement Activity 2024
URL https://datasciences.utoronto.ca/causal_inference/