
HarmonicAI: Human-guided collaborative multi-objective design of explainable, fair and privacy-preserving AI for digital health

Lead Research Organisation: University of Leeds
Department Name: Electronic and Electrical Engineering

Abstract

Artificial Intelligence (AI) is one of the most significant pillars of the digital transformation of modern healthcare systems, which will leverage the growing volume of real-world data collected through wearables and sensors and consider the multitude of complex interactions between diseases and individuals or populations. While AI-enabled digital health services and products are rapidly expanding in volume and variety, most AI innovations remain at the proof-of-concept stage, and there is ongoing debate about whether AI is worthy of trust. The EU High-Level Expert Group on AI (AI HLEG) has defined trustworthy AI systems as lawful, ethical and robust. To translate these principles into actionable practice, the provision of explainability, fairness and privacy is crucial. A considerable volume of research has been conducted in the areas of explainable AI, fair AI and privacy-preserving AI. However, current research efforts to tackle these three challenges are fragmented and have culminated in a variety of solutions with heterogeneous, non-interoperable, or even conflicting capabilities. The ambitious vision of HarmonicAI is to build a human-machine collaborative multi-objective design framework to foster coherently explainable, fair and privacy-preserving AI for digital health. HarmonicAI draws together proven experts in AI, health care, IoT, data science, privacy, cyber security, software engineering, HCI and industrial design with a common aim: to develop concrete technical and operational guidelines for AI practitioners to design human-centered, domain-specific, requirement-oriented trustworthy AI solutions, accelerating the scalable deployment of AI-powered digital health services and offering assurance to the public that AI in digital health is being developed and used in an ethical and trustworthy manner.
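
The multi-objective framing above can be made concrete with a small sketch. Purely as an illustration, and not the project's actual framework, the following Python snippet scores a candidate model against three of the objectives HarmonicAI names: predictive performance, group fairness (measured as a demographic parity gap), and a notional differential-privacy budget. The synthetic data, the sensitive-attribute column, the placeholder epsilon value and the weighting scheme are all assumptions made only for illustration.

```python
# Illustrative sketch: jointly scoring accuracy, fairness and privacy for one
# candidate model, so the objectives can be traded off rather than optimised
# in isolation. All data and weights below are synthetic assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic "digital health" data: two features, a binary outcome, and a
# binary sensitive attribute (e.g. a demographic group indicator).
n = 2000
X = rng.normal(size=(n, 2))
group = rng.integers(0, 2, size=n)
y = (X[:, 0] + 0.3 * group + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=0)

model = LogisticRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)

# Objective 1: predictive performance.
acc = accuracy_score(y_te, pred)

# Objective 2: group fairness, here the demographic parity difference
# (gap in positive prediction rates between the two groups).
dp_gap = abs(pred[g_te == 1].mean() - pred[g_te == 0].mean())

# Objective 3: privacy, represented abstractly by the differential-privacy
# budget a DP training procedure would consume (placeholder value).
epsilon = 1.0

# A simple weighted aggregate; the weights stand in for stakeholder
# priorities that a human-in-the-loop design tool would elicit.
score = 1.0 * acc - 2.0 * dp_gap - 0.1 * epsilon
print(f"accuracy={acc:.3f} fairness_gap={dp_gap:.3f} "
      f"epsilon={epsilon} score={score:.3f}")
```

In practice such a scalar score is only one possible aggregation; the same three measurements could equally feed a Pareto-front search in which human experts select among non-dominated candidates.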

Publications

Amin KR (2025) Remote Monitoring for the Management of Spasticity: Challenges, Opportunities and Proposed Technological Solution. in IEEE Open Journal of Engineering in Medicine and Biology

 
Description NHS England 
Organisation NHS England
Country United Kingdom 
Sector Public 
PI Contribution We started a new collaboration with NHS England on explainable AI.
Collaborator Contribution NHS England has engaged with us to contribute towards MHRA regulations.
Impact Under preparation
Start Year 2024
 
Description Invited Talk 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact A talk was given on machine learning models for respiratory health, focusing on income disparity and its correlation with respiratory conditions such as asthma.
Year(s) Of Engagement Activity 2024
 
Description Workshop on Explainable AI 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact We organised the first workshop, which brought practitioners and researchers together to discuss explainability and interpretability.
Year(s) Of Engagement Activity 2024