Making Systems Answer: Dialogical Design as a Bridge for Responsibility Gaps in Trustworthy Autonomous Systems
Lead Research Organisation:
University of Edinburgh
Department Name: Sch of Philosophy Psychology & Language
Abstract
As computing systems become increasingly autonomous, capable of independently piloting vehicles, detecting fraudulent banking transactions, or reading and diagnosing our medical scans, it is vital that humans can confidently assess and ensure their trustworthiness. Our project develops a novel, people-centred approach to overcoming a major obstacle to this, known as responsibility gaps.
Responsibility gaps occur when we cannot identify a person who is morally responsible for an action with high moral stakes, either because it is unclear who was behind the act, or because the agent does not meet the conditions for moral responsibility; for example, if the act was not voluntary, or if the agent was not aware of it. Responsibility gaps are a problem because holding others responsible for what they do is how we maintain social trust.
Autonomous systems create new responsibility gaps. They operate in high-stakes areas such as health and finance, but their actions may not be under the control of a morally responsible person, or may not be fully understandable or predictable by humans due to complex 'black-box' algorithms driving these actions. To make such systems trustworthy, we need to find a way of bridging these gaps.
Our project draws upon research in philosophy, cognitive science, law and AI to develop new ways for autonomous system developers, users and regulators to bridge responsibility gaps, by boosting the ability of systems to deliver a vital and understudied component of responsibility: answerability.
When we say someone is 'answerable' for an act, it is a way of talking about their responsibility. But answerability is not about having someone to blame; it is about supplying the people affected by our actions with the answers they need or expect. Responsible humans answer for their actions in many ways: they can explain, justify, reconsider, apologise, offer amends, make changes or take future precautions. Answerability therefore encompasses a richer set of responsibility practices than explainability in computing or accountability in law.
Often, the very act of answering for our actions improves us, helping us become more responsible and trustworthy in the future. This is why answerability is key to bridging responsibility gaps. It shifts the question from who we name as the 'responsible person' (which is especially difficult in autonomous systems) to what we owe the people holding the system responsible. If the system as a whole (machines plus people) can get better at giving the answers that are owed, it can still meet its present and future responsibilities to others. Answerability is thus a system-level capability for executing responsibilities, one that can bridge responsibility gaps.
Our ambition is to provide the theoretical and empirical evidence and computational techniques that demonstrate how to enable autonomous systems (including the wider "systems" of developers, owners, users, etc.) to supply the kinds of answers that people seek from trustworthy agents. Our first workstream establishes the theoretical and conceptual framework that allows answerability to be better understood and executed by system developers, users and regulators. The second workstream grounds this in a people-centred, evidence-driven approach by engaging various publics, users, beneficiaries and regulators of autonomous systems in the research. Focus groups, workshops and interviews will be used to discuss cases and scenarios in health, finance and government that reveal what kinds of answers people expect from trustworthy systems operating in these areas. Finally, our third workstream develops novel computational AI techniques for boosting the answerability of autonomous systems through more dialogical and responsive interfaces with users and regulators. Our research outputs and activities will produce a mix of academic, industry and public-facing resources for designing, deploying and governing more answerable autonomous systems.
Publications
Conitzer V (2022) Technical Perspective: The Impact of Auditing for Algorithmic Bias, in Communications of the ACM
Hatherall L (2023) Responsible Agency Through Answerability
Mudrik L (2022) Free will without consciousness?, in Trends in Cognitive Sciences
Vallor S (2023) The Routledge Handbook of Philosophy of Responsibility
Vierkant (2022) The Tinkering Mind: Agency, Cognition, and the Extended Mind
Yuxin Liu (2022) Artificial Moral Advisors: A New Perspective from Moral Psychology
Description | Advisory Board, Scottish Biometrics Commissioner |
Geographic Reach | National |
Policy Influence Type | Participation in a guidance/advisory committee |
URL | https://www.biometricscommissioner.scot/about-us/what-we-do/ |
Description | Data Ethics In Health and Social Care DDI Talent |
Geographic Reach | Local/Municipal/Regional |
Policy Influence Type | Influenced training of practitioners or researchers |
Impact | Improving the ethical knowledge and skill of health care practitioners and data professionals in the health domain, developing capacity for responsible and trustworthy professional development and use of data-driven tools in health |
URL | https://www.ed.ac.uk/studying/postgraduate/degrees/index.php?r=site/view&id=1041 |
Description | National Statistician's Advisory Committee on Inclusive Data |
Geographic Reach | National |
Policy Influence Type | Participation in a guidance/advisory committee |
Impact | The advice given has shaped the national statistician's plans to address the lack of representative, rich and harmonised datasets pertaining to underserved groups in the UK, such as disabled persons. |
URL | https://uksa.statisticsauthority.gov.uk/the-authority-board/committees/national-statisticians-adviso... |
Description | Enabling a Responsible AI Ecosystem |
Amount | £3,040,474 (GBP) |
Funding ID | AH/X007146/1 |
Organisation | Arts & Humanities Research Council (AHRC) |
Sector | Public |
Country | United Kingdom |
Start | 11/2022 |
End | 11/2025 |
Description | AI as Risk talk |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Industry/Business |
Results and Impact | Co-I Rovatsos gave an invited talk and audience Q&A delivered to a global audience of professional service experts in data and digital transformation with a focus on financial services. The purpose was to raise awareness of risk management around AI, and to highlight the importance of appropriate decision-making, governance and stakeholder engagement in organisations aiming to use AI responsibly. |
Year(s) Of Engagement Activity | 2022 |
Description | Citizens Panel: Scottish Government Digital Innovation Network |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | Regional |
Primary Audience | Public/other audiences |
Results and Impact | Co-I Sethi gave a presentation to and answered questions from a public panel convened to steer the Scottish Government Digital Innovation Network. The presentation introduced participants to the topic of citizens' data and the key issues around data ethics and justice. |
Year(s) Of Engagement Activity | 2022 |
Description | Data Island Workshop |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | Regional |
Primary Audience | Professional Practitioners |
Results and Impact | This workshop brought together public engagement practitioners to explore the use of cultural probes as a method for facilitating discussions around data use. |
Year(s) Of Engagement Activity | 2022 |
Description | Digital Scotland 2022 talk |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | Regional |
Primary Audience | Industry/Business |
Results and Impact | Co-I Rovatsos gave an invited keynote presentation on "Building Ethical AI Solutions" and audience Q&A with other presenters from industry and the public sector. The purpose was to raise awareness of ethical issues in AI, and to highlight the challenges that come with trying to answer key technical questions when deploying new AI solutions in the public sector. |
Year(s) Of Engagement Activity | 2022 |
Description | International Population Data Linkage Network Congress |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | Co-I Sethi was an invited member of the scientific advisory committee for the Congress, which involved determining the themes and scientific programme for academics and researchers working on data linkage. |
Year(s) Of Engagement Activity | 2022 |
URL | https://ipdln.org/2022-conference |
Description | Scottish AI Summit 2022 |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Industry/Business |
Results and Impact | Co-I Rovatsos participated in a panel on Responsible Innovation in AI at this national showcase event on AI, with a broad academic, industry and public audience. The main objective was to exchange expert opinions on what organisations need to focus on if they want to develop and deploy AI responsibly in a practical context. |
Year(s) Of Engagement Activity | 2022 |
URL | https://www.youtube.com/watch?v=Oxo5fG4RZ5Y |
Description | University of Bath Talk: An Agent-Based Perspective for Preserving Privacy Online |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Postgraduate students |
Results and Impact | Co-I Kokciyan gave a talk on her research on multi-agent dialogical systems approaches to trusted online interactions to the ART-AI research group at Bath. The goal was to foster future collaboration opportunities on these approaches. |
Year(s) Of Engagement Activity | 2022 |
URL | https://cdt-art-ai.ac.uk/news/events/an-agent-based-perspective-for-preserving-privacy-online-with-n... |
Description | Voices in the Code: A Public Event from Ada Lovelace Institute |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Public/other audiences |
Results and Impact | I was invited by Ada Lovelace Institute to host a conversation with author David G. Robinson about his new book, Voices in the Code. The book directly addresses AI policy and governance and the event was aimed at audiences interested in AI governance and democratic accountability for AI. |
Year(s) Of Engagement Activity | 2022 |
URL | https://www.adalovelaceinstitute.org/event/voices-in-the-code/ |
Description | Workshop on Constructing Responsible Agency |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Other audiences |
Results and Impact | Our project focuses on constructing forward-looking answerability practices that can bridge so-called 'responsibility gaps' in the use of autonomous and AI systems. The initial stage of our work, of which this workshop is a part, looks across philosophy, cognitive science and related fields to understand how apparent responsibility gaps are identified and constructively addressed in other contexts: for example, group agency, and contexts involving implicit biases or brain-based etiologies that complicate conventional attributions of moral and legal responsibility. On 27 May 2022 our project team held a one-day expert workshop bringing together 25 researchers who approach these problems from different directions and disciplines (philosophy, neuroscience/cognitive science, AI, robotics and computing) and who do not normally interact in the research environment, to identify common themes, conceptual frames and challenges that can be instructive across multiple domains of responsibility, including but not limited to the autonomous systems context. The morning session comprised short (20 min.) presentations from five experts, enabling participants to learn about one another's approaches to responsibility challenges; in the afternoon, our research team presented key questions and case studies for further discussion and for triangulation and coordination across these different methodologies and approaches. |
Year(s) Of Engagement Activity | 2022 |
URL | https://twitter.com/CentreTMFutures/status/1530136327496384517?s=20 |