Assuring Responsibility for Trustworthy Autonomous Systems

Lead Research Organisation: University of York
Department Name: Computer Science

Abstract

Autonomous systems, such as medical systems, autonomous aerial and road vehicles, and manufacturing and agricultural robots, promise to extend and expand human capacities. But their benefits will only be harnessed if people have trust in the human processes around their design, development, and deployment. Enabling designers, engineers, developers, regulators, operators, and users to trace and allocate responsibility for the decisions, actions, failures, and outcomes of autonomous systems will be essential to this ecosystem of trust. If a self-driving car takes an action that affects you, you will want to know who is responsible for it and what the channels for redress are. If you are a doctor using an autonomous system in a clinical setting, you will want to understand the distribution of accountability between you, the healthcare organisation, and the developers of the system. Designers and engineers need clarity about which responsibilities fall on them, and when these transfer to other agents in the decision-making network. Manufacturers need to understand what they would be legally liable for. Mechanisms to achieve this transparency will not only provide all stakeholders with reassurance; they will also increase clarity, confidence, and competence amongst decision-makers.

The research project is an interdisciplinary programme of work - drawing on the disciplines of engineering, law, and philosophy - that culminates in a methodology to achieve precisely that tracing and allocation of responsibility. By 'tracing responsibility' we mean the process of tracking the autonomous system's decisions or outcomes back to the decisions of designers, engineers, or operators, and understanding what led to the outcome. By 'allocating responsibility' we mean both allocating role responsibilities to different agents across the lifecycle and working out in advance who would be legally liable and morally responsible for different system decisions and outcomes once they have occurred. This methodology will facilitate responsibility-by-design and responsibility-through-lifecycle.
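To make the tracing/allocation distinction concrete, here is a minimal Python sketch of how a responsibility trace might be represented in software. All class names, fields, and example data are our own hypothetical illustrations, not artefacts of the project's methodology.

```python
from dataclasses import dataclass, field

# Hypothetical illustration only: the class names, fields, and example data
# below are ours, not the project's. The sketch shows how a responsibility
# trace might link a system outcome back to lifecycle decisions and agents.

@dataclass
class Decision:
    agent: str        # e.g. "sensor engineer", "fleet operator"
    role: str         # the role responsibility held at the time
    stage: str        # lifecycle stage, e.g. "design", "deployment"
    description: str  # what was decided

@dataclass
class Outcome:
    description: str
    contributing: list[Decision] = field(default_factory=list)

def trace_responsibility(outcome: Outcome) -> dict[str, list[str]]:
    """Group the decisions behind an outcome by the agent who made them."""
    allocation: dict[str, list[str]] = {}
    for d in outcome.contributing:
        allocation.setdefault(d.agent, []).append(
            f"{d.stage}: {d.description} (role: {d.role})"
        )
    return allocation

# Tracing a hypothetical self-driving car outcome back through the lifecycle.
outcome = Outcome(
    "late emergency braking",
    [
        Decision("sensor engineer", "perception safety", "design",
                 "set the pedestrian-detection confidence threshold"),
        Decision("fleet operator", "operational safety", "deployment",
                 "approved operation on unlit rural routes"),
    ],
)
print(trace_responsibility(outcome))
```

Tracing answers "which decisions, by whom, led here?"; allocation is the grouping of those decisions by agent and role, which can be computed in advance for anticipated outcomes as well as after the fact.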

In practice, the tracing and allocation of responsibility for the decisions and outcomes of autonomous systems is very complex. The complexity of the systems, together with the constant movement and unpredictability of their operational environments, makes individual causal contributions difficult to distinguish. When this is combined with the fact that we delegate to these systems tasks that, in human beings, would require ethical judgement and lawful behaviour, potential moral and legal responsibility gaps arise. The more complex and autonomous the system, the more significant the role that assurance will play in tracing and allocating responsibility, especially in contexts that are technically and organisationally complex.

The research project tackles these challenges head on. First, we clarify the fundamental concepts of responsibility, the different kinds of responsibility in play, the different agents involved, and where 'responsibility gaps' arise and how they can be addressed. Second, we build on techniques used in the technical assurance of high-risk systems to reason about responsibility under uncertainty and dynamism, and therefore in unpredictable socio-technical environments. Together, these strands of work provide the basis for a methodology for responsibility-by-design and responsibility-through-lifecycle that can be used in practice by a wide range of stakeholders. The resulting assurance of responsibility will not only identify which agents are responsible for which outcomes, and in what way, throughout the lifecycle, and explain how this identification is achieved; it will also establish why this tracing and allocation of responsibility is well-justified and complete.
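Because the methodology builds on technical assurance techniques, a goal-structured argument (in the spirit of assurance-case notations such as GSN) is one natural representation. Below is a minimal sketch that assumes nothing about the project's actual notation; the node kinds, claim texts, and the supported() check are invented for illustration.

```python
from dataclasses import dataclass, field

# A minimal sketch, not the project's notation: a goal-structured argument
# tree in the spirit of assurance-case notations such as GSN. The node
# kinds, claim texts, and the supported() check are invented for illustration.

@dataclass
class Node:
    kind: str                   # "goal", "strategy", or "evidence"
    text: str
    children: list["Node"] = field(default_factory=list)

def supported(node: Node) -> bool:
    """An argument is supported if every branch bottoms out in evidence."""
    if not node.children:
        return node.kind == "evidence"
    return all(supported(child) for child in node.children)

argument = Node("goal", "Responsibility for system outcomes is traceable and allocated", [
    Node("strategy", "Argue over each stage of the system lifecycle", [
        Node("goal", "Design-stage role responsibilities are explicitly assigned", [
            Node("evidence", "Signed responsibility-allocation record"),
        ]),
        Node("goal", "Operational decisions are logged against identifiable agents", [
            Node("evidence", "Decision-log audit report"),
        ]),
    ]),
])
print(supported(argument))  # True only when all branches end in evidence
```

The point of such a structure is that the justification for a responsibility claim is explicit and inspectable: an unsupported branch shows exactly where the tracing or allocation argument is incomplete.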
 
Description The project developed a methodology to justify the ethical deployment of AI and autonomous systems, supported by various case studies, including the use of AI in the NHS. It also provided a clearer understanding of responsibility and accountability for AI and autonomous systems from engineering, legal and ethical perspectives. Finally, the project examined ethics, safety, and responsibility through industrial use cases across different sectors.
Exploitation Route The results of the project have provided a sound basis and body of knowledge for our new UKRI AI Centre for Doctoral Training in Safe Artificial Intelligence Systems (SAINTS), funded under the responsible and trustworthy AI priority area. The centre involves 35 partners (industrial, regulatory, and policy) and 60 PhD studentships spanning at least five disciplines.
Sectors Aerospace, Defence and Marine; Agriculture, Food and Drink; Construction; Education; Energy; Environment; Healthcare; Government, Democracy and Justice; Manufacturing, including Industrial Biotechnology; Security and Diplomacy; Transport

 
Description We made a concrete impact on government policy, e.g. the Alan Turing Institute and University of York Trustworthy and Ethical Assurance Platform, which provides a methodology for assuring AI and digital systems (https://www.gov.uk/ai-assurance-techniques/alan-turing-institute-and-university-of-york-trustworthy-and-ethical-assurance-platform). We also informed safety assurance practices and policy for the use of AI in sectors including healthcare and transport.
First Year Of Impact 2023
Sector Communities and Social Services/Policy; Healthcare; Government, Democracy and Justice
Impact Types Policy & public services

 
Description AI Safety in the NHS
Geographic Reach National 
Policy Influence Type Influenced training of practitioners or researchers
URL https://www.youtube.com/watch?v=ARwGRxPxbwQ&list=PLAoUDBxy86gg94ngPxsH38CdvZnYb1zK_
 
Description Advice on best practice in AI/Software for the Software and AI as a Medical Device Change Programme - Roadmap
Geographic Reach National 
Policy Influence Type Participation in a guidance/advisory committee
Impact Plan to support the update of the international standard IEC 62304.
URL https://www.gov.uk/government/publications/software-and-ai-as-a-medical-device-change-programme/soft...
 
Description CPD in safety
Geographic Reach National 
Policy Influence Type Contribution to new or improved professional practice
Impact Improved approaches to handling ethically sensitive decisions in AI and autonomous systems
 
Description CPD on AI safety for Health Services Safety Investigations Body (HSSIB)
Geographic Reach National 
Policy Influence Type Influenced training of practitioners or researchers
Impact Training for national healthcare investigators
URL https://hssib-education.turtl.co/story/demystifying-ai-in-healthcare/page/1?draft=a6fa0e0e-9dab-44b5...
 
Description Input to Office for AI (work on AI safety)
Geographic Reach National 
Policy Influence Type Participation in a guidance/advisory committee
 
Description Lecture on the clinical safety of AI
Geographic Reach National 
Policy Influence Type Contribution to new or improved professional practice
 
Description AI and Digital Twins
Amount £60,065 (GBP)
Organisation Arts & Humanities Research Council (AHRC) 
Sector Public
Country United Kingdom
Start 02/2024 
End 07/2024
 
Description DAISY - Robot-Assisted A&E Triage
Amount £145,094 (GBP)
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 03/2022 
End 03/2023
 
Description Encoding Empathy: Solving the Healthcare Workforce Crisis Through the Safe Deployment of an AI-Driven Voice-Based Clinical Conversational Assistant for Long Term Monitoring
Amount £821,460 (GBP)
Funding ID 10102111 
Organisation Innovate UK 
Sector Public
Country United Kingdom
Start 06/2024 
End 07/2026
 
Description The Centre for Assuring Autonomy
Amount £4,000,000 (GBP)
Organisation Lloyd's Register Foundation 
Sector Charity/Non Profit
Country United Kingdom
Start 01/2024 
End 12/2028
 
Description UKRI AI Centre for Doctoral Training in Lifelong Safety Assurance of AI-enabled Autonomous Systems (SAINTS)
Amount £8,050,129 (GBP)
Organisation United Kingdom Research and Innovation 
Sector Public
Country United Kingdom
Start 03/2024 
End 09/2032
 
Description Developing an ethics assurance case 
Organisation Ufonia Limited
Country United Kingdom 
Sector Private 
PI Contribution Developing an ethics assurance case for an AI-based system in healthcare
Collaborator Contribution Working closely with a multi-disciplinary team on instantiating the argument patterns presented in our arXiv paper (A Principles-based Ethical Assurance Argument for AI and Autonomous Systems).
Impact The collaboration is multi-disciplinary, mainly involving engineers, ethicists and clinicians.
Start Year 2022
 
Description Trustworthy and Ethical Assurance for AI and Autonomous Systems in Digital Healthcare (TEA-DH) 
Organisation Alan Turing Institute
Country United Kingdom 
Sector Academic/University 
PI Contribution Collaboration with the Alan Turing Institute on the notion of ethics and trustworthy assurance
Collaborator Contribution The collaboration has created a methodology for developing assurance cases for digital and AI-based systems, with a focus on healthcare.
Impact Outputs under review
Start Year 2023
 
Description Autonomous AI safety 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact An invited keynote at Jaguar Land Rover Annual Conference for Technical Specialists
Year(s) Of Engagement Activity 2024
 
Description Clinical AI: Cure or disease? 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Public/other audiences
Results and Impact Clinical AI: Cure or disease? Ibrahim Habli
In the evolving landscape of healthcare, artificial intelligence (AI) promises to improve patient outcomes and release scarce resources. However, reaping these benefits also brings patient safety risks.

In this talk, I'll explore the intricacies of using AI safely in healthcare. I'll then present clinical examples and real-world cases that highlight the potential risks and harms, while also outlining proactive strategies to mitigate these concerns.
Year(s) Of Engagement Activity 2023
 
Description Defining AI safety Podcast 
Form Of Engagement Activity Engagement focused website, blog or social media channel
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact Ed and David chat with Professor Ibrahim Habli, Research Director of the Centre for Assuring Autonomy at the University of York and director of the UKRI Centre for Doctoral Training in Safe AI Systems. The conversation covers defining and contextualising AI safety and risk, given the existence of established safety practices in other industries. Ibrahim has collaborated with The Alan Turing Institute on the "Trustworthy and Ethical Assurance platform", or "TEA" for short, an open-source tool for developing and communicating structured assurance arguments to show how data science and AI technology adheres to ethical principles.
Year(s) Of Engagement Activity 2024
URL https://turing.podbean.com/e/defining-ai-safety/
 
Description Doctor Says No: When AI's Bedside Manner Falls Short 
Form Of Engagement Activity Engagement focused website, blog or social media channel
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Blog
Year(s) Of Engagement Activity 2024
URL https://www.york.ac.uk/assuring-autonomy/news/blog/doctor-says-no-when-ais-bedside-manner-falls-shor...
 
Description Enhancing Security, Resilience, and Safety of Systems Managing Edge Autonomous Devices (GENZERO). 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Keynote on safety of AI and autonomy
Year(s) Of Engagement Activity 2024
URL https://genzero.tii.ae/index.php
 
Description Keynote at the Scandinavian Conference on Systems and Software Safety
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Keynote on assuring the ethics of AI and autonomous systems
Year(s) Of Engagement Activity 2023
URL https://www.saferresearch.com/events/scandinavian-conference-systems-and-software-safety
 
Description Living With AI Podcast: Challenges of Living with Artificial Intelligence AI & Taking Responsibility 
Form Of Engagement Activity A press release, press conference or response to a media enquiry/interview
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Media (as a channel to the public)
Results and Impact A 'Projects' episode focusing on research into the responsibility aspects of AI: Responsibility Projects - UKRI Trustworthy Autonomous Systems Hub (tas.ac.uk)

Our guests this week are:
1. Lars Kunze - Responsible AI for Long-term Trustworthy Autonomous Systems
2. Shannon Vallor - Making Systems Answer
3. Ibrahim Habli - Assuring Responsibility for Trustworthy Autonomous Systems (AR-TAS)
Year(s) Of Engagement Activity 2023
URL https://www.buzzsprout.com/1447474/13158541-ai-taking-responsibility
 
Description Reflections on AI Governance in 2023 
Form Of Engagement Activity A press release, press conference or response to a media enquiry/interview
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact 2023 has been a big year for AI governance. Research Fellow, Dr Zoe Porter, explores three key reflections arising from these developments and what they can tell us about the future of safe AI.
Year(s) Of Engagement Activity 2023
URL https://www.york.ac.uk/assuring-autonomy/news/blog/ai-governance-2023/
 
Description TAS 2024 (Panel on Healthcare) 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Other audiences
Results and Impact Panel on AI/AS in Healthcare (Panelist: Zoe Porter)
Year(s) Of Engagement Activity 2024
 
Description TAS 2024 (Panel on Transport) 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Other audiences
Results and Impact TAS 2024 (Panel on Transport): Ibrahim Habli
Year(s) Of Engagement Activity 2024
 
Description Who decides when AI is safe enough? 
Form Of Engagement Activity A press release, press conference or response to a media enquiry/interview
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact Who decides when AI is safe enough?
A featured article on balancing the technical and ethical sides of assuring the safety of AI in the complex world of healthcare
Year(s) Of Engagement Activity 2022
URL https://features.york.ac.uk/who-decides-when-ai-is-safe-enough/index.html