HarmonicAI: Human-guided collAboRative Multi-Objective design of explaiNable, faIr and privaCy-preserving AI for digital health
Lead Research Organisation:
Northumbria University
Department Name: Faculty of Engineering and Environment
Abstract
The ambitious vision of HarmonicAI is to build a human-machine collaborative multi-objective design framework to foster coherently explainable, fair and privacy-preserving AI for digital health. The framework will provide concrete technical and operational guidelines for AI practitioners to design human-centred, domain-specific, requirement-oriented trustworthy AI solutions, accelerating the scalable deployment of AI-powered digital health services and offering assurance to the public that AI in digital health is being developed and used in an ethical and trustworthy manner.
The scope of HarmonicAI is multifaceted and multi-dimensional, and an interdisciplinary, intersectoral approach is essential to address the various challenges of trustworthy AI. HarmonicAI draws together proven experts in AI, healthcare, IoT, data science, privacy, cyber security, software engineering, HCI and industrial design with a common aim: to design and develop innovative technologies and guidelines that resolve ethical issues around fairness and data privacy, achieve transparency of AI models, and enhance safety and trust in the deployment of AI for digital health. Realising these complex goals demands a collective interdisciplinary, intersectoral, cross-border effort from a diverse range of stakeholders, including academia, industry and healthcare providers.
People
Nauman Aslam (Principal Investigator)
Engagement Activities
| Description | 1st UK workshop "Building Trust in AI Medical Imaging" |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | The HarmonicAI project features three use cases based in the UK, France, and Thailand. The UK use case focuses on AI for medical imaging, and the workshop "Building Trust in AI Medical Imaging" was held at the University of Leeds on 24th January 2025. The workshop ran in hybrid mode, allowing remote participants to join. It featured talks by ICT and medical experts and discussed human-centred co-design approaches with systems thinking, aiming to: • identify barriers to the acceptance and adoption of AI-powered digital health services; • understand healthcare stakeholders' expectations in the key areas of explainability, fairness, and privacy. |
| Year(s) Of Engagement Activity | 2025 |