Assuring Responsibility for Trustworthy Autonomous Systems
Lead Research Organisation: University of York
Department Name: Computer Science
Abstract
Autonomous systems, such as medical systems, autonomous aerial and road vehicles, and manufacturing and agricultural robots, promise to extend and expand human capacities. But their benefits will only be harnessed if people have trust in the human processes around their design, development, and deployment. Enabling designers, engineers, developers, regulators, operators, and users to trace and allocate responsibility for the decisions, actions, failures, and outcomes of autonomous systems will be essential to this ecosystem of trust. If a self-driving car takes an action that affects you, you will want to know who is responsible for it and what the channels for redress are. If you are a doctor using an autonomous system in a clinical setting, you will want to understand the distribution of accountability between you, the healthcare organisation, and the developers of the system. Designers and engineers need clarity about what responsibilities fall on them, and when these transfer to other agents in the decision-making network. Manufacturers need to understand what they would be legally liable for. Mechanisms to achieve this transparency will not only provide all stakeholders with reassurance but also increase clarity, confidence, and competence amongst decision-makers.
The research project is an interdisciplinary programme of work - drawing on the disciplines of engineering, law, and philosophy - that culminates in a methodology to achieve precisely that tracing and allocation of responsibility. By 'tracing responsibility' we mean the process of tracking the autonomous system's decisions or outcomes back to the decisions of designers, engineers, or operators, and understanding what led to the outcome. By 'allocating responsibility' we mean both allocating role responsibilities to different agents across the lifecycle and working out in advance who would be legally liable and morally responsible for different system decisions and outcomes once they have occurred. This methodology will facilitate responsibility-by-design and responsibility-through-lifecycle.
In practice, tracing and allocating responsibility for the decisions and outcomes of autonomous systems (AS) is very complex. The complexity of the systems, and the constant movement and unpredictability of their operational environments, make individual causal contributions difficult to distinguish. Combined with the fact that we delegate to these systems tasks that, in human beings, require ethical judgement and lawful behaviour, this gives rise to potential moral and legal responsibility gaps. The more complex and autonomous the system, the more significant the role that assurance will play in tracing and allocating responsibility, especially in contexts that are technically and organisationally complex.
The research project tackles these challenges head on. First, we clarify the fundamental concepts of responsibility, the different kinds of responsibility in play, the different agents involved, and where 'responsibility gaps' arise and how they can be addressed. Second, we build on techniques used in the technical assurance of high-risk systems to reason about responsibility in the context of uncertainty and dynamism, and therefore unpredictable socio-technical environments. Together, these strands of work provide the basis for a methodology for responsibility-by-design and responsibility-through-lifecycle that can be used in practice by a wide range of stakeholders. The resulting assurance will not only identify which agents are responsible for which outcomes, and in what way, throughout the lifecycle, and explain how this identification is achieved; it will also establish why this tracing and allocation of responsibility is well-justified and complete.
Organisations
- University of York (Lead Research Organisation, Project Partner)
- UFONIA LIMITED (Collaboration)
- Wayve Technologies Ltd. (Project Partner)
- MIRA (United Kingdom) (Project Partner)
- Ufonia (Project Partner)
- Bradford Teaching Hospitals NHS Foundation Trust (Project Partner)
- NHS Digital (Project Partner)
- Lloyd's Register Foundation (Project Partner)
- Sheffield Robotics (Project Partner)
Publications
- Kaas M (2023) Ethics in conversation
- Porter Z (2022) Distinguishing two features of accountability for AI technologies, in Nature Machine Intelligence
- Porter Z (2023) A principles-based ethics assurance argument pattern for AI and autonomous systems, in AI and Ethics
Description | Advice on best practice in AI/Software for the Software and AI as a Medical Device Change Programme - Roadmap |
Geographic Reach | National |
Policy Influence Type | Participation in a guidance/advisory committee |
Impact | Plan to support the update of the international standard IEC 62304. |
URL | https://www.gov.uk/government/publications/software-and-ai-as-a-medical-device-change-programme/soft... |
Description | Developing an ethics assurance case |
Organisation | Ufonia Limited |
Country | United Kingdom |
Sector | Private |
PI Contribution | Developing an ethics assurance case for an AI-based system in healthcare |
Collaborator Contribution | Working closely with a multi-disciplinary team on instantiating the argument patterns presented in our arXiv paper (A Principles-based Ethical Assurance Argument for AI and Autonomous Systems) |
Impact | The collaboration is multi-disciplinary, mainly involving engineers, ethicists and clinicians. |
Start Year | 2022 |
Description | Who decides when AI is safe enough? |
Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Public/other audiences |
Results and Impact | "Who decides when AI is safe enough?" - a featured article on balancing the technical and ethical sides of assuring the safety of AI in the complex world of healthcare |
Year(s) Of Engagement Activity | 2022 |
URL | https://features.york.ac.uk/who-decides-when-ai-is-safe-enough/index.html |