CHAI - EPSRC AI Hub for Causality in Healthcare AI with Real Data

Lead Research Organisation: University of Edinburgh
Department Name: Sch of Engineering


The current AI paradigm at best reveals correlations between model input and output variables. This falls short of addressing health and healthcare challenges, where knowing the causal relationship between interventions and outcomes is both necessary and desirable. In addition, biases and vulnerabilities arise in AI systems as models pick up unwanted, spurious correlations from historical data, widening already existing health inequalities.
Causal AI is the key to unlocking robust, responsible and trustworthy AI, and to transforming challenging tasks such as the early prediction, diagnosis and prevention of disease.

The Causality in Healthcare AI with Real Data (CHAI) Hub will bring together academia, industry, healthcare, and policy stakeholders to co-create the next generation of world-leading artificial intelligence solutions that can predict outcomes of interventions and help choose personalised treatments, thus transforming health and healthcare. The CHAI Hub will develop novel methods to identify and account for causal relationships in complex data. The Hub will be built by the community, for the community, amassing experts and stakeholders from across the UK to
1) push the boundaries of AI innovation;
2) develop cutting-edge solutions that drive desperately needed efficiency in resource-constrained healthcare systems; and
3) cement the UK's standing as a next-gen AI superpower.

The data complexity in heterogeneous and distributed environments such as healthcare exacerbates the risks of bias and vulnerability and introduces additional challenges that must be addressed. Modern clinical investigations need to combine structured and unstructured data sources (e.g. patient health records and medical imaging exams), which current AI cannot integrate effectively. These gaps in current AI technology must be addressed in order to develop algorithms that can help to better understand disease mechanisms, predict outcomes and estimate the effects of treatments. This is essential for ensuring the safe and responsible use of AI in personalised decision-making.

Causal AI has the potential to unearth novel insights from observational data, formalise treatment effects, assess outcome likelihood, and estimate 'what-if' scenarios. Incorporating causal principles is critical for delivering on the National AI Strategy to ensure that AI is technically and clinically safe, transparent, fair and explainable.

The CHAI Hub will be formed by a founding consortium of powerhouses in AI, healthcare, and data science throughout the UK in a hub-spoke model with geographic reach and diversity. The hub will be based in Edinburgh's Bayes Centre (leveraging world-class expertise in AI, data-driven innovation in health applications, a robust health data ecosystem, entrepreneurship, and translation). Regional spokes will be in Manchester (expertise in both methods and translation of AI through the Institute for Data Science and AI, and Pankhurst Institute), London (hosted at KCL, representing also UCL and Imperial, leveraging London's rapidly growing AI ecosystem) and Exeter (leveraging strengths in philosophy of causal inference and ethics of AI).

The hub will develop a UK-wide multidisciplinary network for causal AI. Through extended collaborations with industry, policymakers and other stakeholders, we will expand the hub to deliver next-gen causal AI where it is needed most. We will work together to co-create, moving beyond co-ideation and co-design to co-implementation and co-evaluation where appropriate, to ensure fit-for-purpose solutions.

Our programme will be flexible, will embed trusted, responsible innovation and environmental sustainability considerations, will ensure that equality, diversity and inclusion principles are reflected through all activities, and will ensure that knowledge generated through CHAI continues to have real-world impact beyond the initial 60 months.




Sotirios Tsaftaris (Principal Investigator)
Niccolo Tempini (Co-Investigator)
Sjoerd Beentjes (Co-Investigator)
Niels Peek (Co-Investigator)
Anita Grigoriadis (Co-Investigator)
Bruce Guthrie (Co-Investigator)
Sohan Seth (Co-Investigator)
Hui Guo (Co-Investigator)
Ava Khamseh (Co-Investigator)
Yulan He (Co-Investigator)
Eliana Vasquez Osorio (Co-Investigator)
Henry Gouk (Co-Investigator)
Stephan Guttinger (Co-Investigator)
William Whiteley (Co-Investigator)
Karla Diaz-Ordaz (Co-Investigator)
Ewen Harrison (Co-Investigator)
Ginny Russell (Co-Investigator)
Ian Simpson (Co-Investigator)
John Kenneth Baillie (Co-Investigator)
Elliot Crowley (Co-Investigator)
Javier Escudero Rodriguez (Co-Investigator)
Ben Glocker (Co-Investigator)
Matthew Sperrin (Co-Investigator)
Kianoush Nazarpour (Co-Investigator)
Yingzhen Li (Co-Investigator)
Subramanian Ramamoorthy (Co-Investigator)
Hana Chockler (Co-Investigator)
Daniel Alexander (Co-Investigator)
Sabina Leonelli (Co-Investigator)
Ricardo Silva (Co-Investigator)
Neil Chue Hong (Researcher)