Making Systems Answer: Dialogical Design as a Bridge for Responsibility Gaps in Trustworthy Autonomous Systems
Lead Research Organisation:
University of Edinburgh
Department Name: Sch of Philosophy Psychology & Language
Abstract
As computing systems become increasingly autonomous (able to independently pilot vehicles, detect fraudulent banking transactions, or read and diagnose our medical scans), it is vital that humans can confidently assess and ensure their trustworthiness. Our project develops a novel, people-centred approach to overcoming a major obstacle to this, known as responsibility gaps.
Responsibility gaps occur when we cannot identify a person who is morally responsible for an action with high moral stakes, either because it is unclear who was behind the act, or because the agent does not meet the conditions for moral responsibility; for example, if the act was not voluntary, or if the agent was not aware of it. Responsibility gaps are a problem because holding others responsible for what they do is how we maintain social trust.
Autonomous systems create new responsibility gaps. They operate in high-stakes areas such as health and finance, but their actions may not be under the control of a morally responsible person, or may not be fully understandable or predictable by humans due to complex 'black-box' algorithms driving these actions. To make such systems trustworthy, we need to find a way of bridging these gaps.
Our project draws upon research in philosophy, cognitive science, law and AI to develop new ways for autonomous system developers, users and regulators to bridge responsibility gaps, by boosting the ability of systems to deliver a vital and understudied component of responsibility, namely answerability.
When we say someone is 'answerable' for an act, it is a way of talking about their responsibility. But answerability is not about having someone to blame; it is about supplying people who are affected by our actions with the answers they need or expect. Responsible humans answer for actions in many different ways; they can explain, justify, reconsider, apologise, offer amends, make changes or take future precautions. Answerability encompasses a richer set of responsibility practices than explainability in computing, or accountability in law.
Often, the very act of answering for our actions improves us, helping us be more responsible and trustworthy in the future. This is why answerability is key to bridging responsibility gaps. It is not about who we name as the 'responsible person' (which is more difficult to identify in autonomous systems), but about what we owe to the people holding the system responsible. If the system as a whole (machines + people) can get better at giving the answers that are owed, the system can still meet present and future responsibilities to others. Hence, answerability is a system capability for executing responsibilities that can bridge responsibility gaps.
Our ambition is to provide the theoretical and empirical evidence and computational techniques that demonstrate how to enable autonomous systems (including wider "systems" of developers, owners, users, etc) to supply the kinds of answers that people seek from trustworthy agents. Our first workstream establishes the theoretical and conceptual framework that allows answerability to be better understood and executed by system developers, users and regulators. The second workstream grounds this in a people-centred, evidence-driven approach by engaging various publics, users, beneficiaries and regulators of autonomous systems in the research. Focus groups, workshops and interviews will be used to discuss cases and scenarios in health, finance and government that reveal what kinds of answers people expect from trustworthy systems operating in these areas. Finally, our third workstream develops novel computational AI techniques for boosting the answerability of autonomous systems through more dialogical and responsive interfaces with users and regulators. Our research outputs and activities will produce a mix of academic, industry and public-facing resources for designing, deploying and governing more answerable autonomous systems.
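To make the third workstream's idea of a "dialogical and responsive interface" concrete, here is a minimal, purely illustrative sketch of how an autonomous system might route a stakeholder's request to different kinds of answers (explanation, justification, or remedial action) rather than offering only a technical explanation. All class, field and function names here are our own illustration under assumed requirements, not the project's actual tool or design.

```python
# Hypothetical sketch of a dialogical "answerability" interface.
# A Decision records one system outcome plus the facts behind it; an
# AnswerableSystem routes a stakeholder's question to the kind of
# answer they asked for: explain, justify, or amend.

from dataclasses import dataclass, field


@dataclass
class Decision:
    """A record of one system decision and the facts behind it."""
    outcome: str
    factors: dict          # feature -> note on its contribution
    policy: str            # the rule or objective the system followed
    remedies: list = field(default_factory=list)


class AnswerableSystem:
    """Routes a stakeholder's request to the requested kind of answer."""

    def __init__(self, decision: Decision):
        self.decision = decision

    def answer(self, kind: str) -> str:
        if kind == "explain":      # how the outcome was produced
            facts = "; ".join(
                f"{k}: {v}" for k, v in self.decision.factors.items()
            )
            return f"Outcome '{self.decision.outcome}' was driven by: {facts}"
        if kind == "justify":      # why the outcome was warranted
            return f"The decision followed the stated policy: {self.decision.policy}"
        if kind == "amend":        # what can be done about it now
            if not self.decision.remedies:
                return "No remedy is currently available; the case has been escalated."
            return "Available remedies: " + ", ".join(self.decision.remedies)
        return "Unrecognised request; please ask the system to explain, justify, or amend."


# Illustrative use: a declined loan application.
loan = Decision(
    outcome="loan declined",
    factors={"credit history": "short record", "income ratio": "above threshold"},
    policy="decline applications exceeding the risk threshold set by the lender",
    remedies=["request human review", "reapply with a guarantor"],
)
system = AnswerableSystem(loan)
print(system.answer("justify"))
```

The point of the sketch is only the routing structure: a trustworthy system (machines plus people) supports several distinct responsibility practices, and the interface makes each of them addressable in dialogue rather than collapsing them into a single explanation.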
Publications
Conitzer V
(2022)
Technical Perspective: The Impact of Auditing for Algorithmic Bias
in Communications of the ACM
Gabriel I
(2024)
The Ethics of Advanced AI Assistants
Hatherall L
(2023)
Responsible Agency Through Answerability
Hatherall L
(2025)
Exploring expert and public perceptions of answerability and trustworthy autonomous systems
in Journal of Responsible Technology
Hatherall L
(2024)
Regulating for trustworthy autonomous systems: exploring stakeholder perspectives on answerability
in Journal of Law and Society
Leslie D
(2024)
'Frontier AI,' Power, and the Public Interest: Who Benefits, Who Decides?
in Harvard Data Science Review
Manzini A
(2024)
The Code That Binds Us: Navigating the Appropriateness of Human-AI Assistant Relationships
in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society
Mudrik L
(2022)
Free will without consciousness?
in Trends in cognitive sciences
| Description | We have discovered some key empirical findings on answerability from our stakeholder workshops, focus groups and scoping conversations:
- Someone taking responsibility by providing answers of various types, including but going beyond mere explanations, was generally welcomed and seen as beneficial for trustworthy autonomous systems (TAS).
- Participants struggled to articulate the range of harms which may arise from the use of autonomous systems and for which organisations must be answerable; there were striking dissonances between 'expert' and 'public' perspectives on such harms.
- Trustworthy autonomous systems should provide answers which go beyond technical explanations of how a system reaches its output.
- Those providing answers should have sufficient seniority. This is contextual, and can include: organisational seniority, knowledge about how the system works as a socio-technical network, and the ability to enact change.
There are a number of regulatory implications of our findings, including:
- Evidence that organisations are reluctant to be the first to set standards, leading to a responsibility 'hot potato'.
- Law is seen as needed to set guardrails of acceptable development, but insufficient on its own to properly regulate TAS.
- Answerability is a particularly useful tool for procured black-box systems, and alongside other impact assessments. |
| Exploitation Route | Our latest findings, which our team has published in three peer-reviewed journal articles in 2024-2025, will be integrated into the draft Practitioners Handbook for Answerability, which is designed to help organisations better anticipate and meet the answerability expectations of their stakeholders. The Handbook will be shared with our partners for a final round of feedback and, we hope, testing in their organisations, and then made publicly available to other organisations following their feedback and any final revisions needed. The Handbook offers concrete guidance to organisations on how to design new internal accountability structures and technical tools that can aid them in the responsible deployment of AI technologies. |
| Sectors | Digital/Communication/Information Technologies (including Software), Financial Services and Management Consultancy, Healthcare, Government, Democracy and Justice |
| URL | https://medium.com/@svallor_10030/edinburgh-declaration-on-responsibility-for-responsible-ai-1a98ed2e328b |
| Description | The workshop in April 2024 with our partners at SAS, NHS Digital Academy, and the Scottish government gave them a first look at our draft Handbook for Answerability and the recommendations it contains for designing for answerability. We also shared with our partners the empirical findings from our stakeholder workshops and interviews. Our partners expressed an interest in, and an intent to, integrate these findings and recommendations into their own governance thinking and processes. The Scottish government contacted us in autumn 2024 to let us know that they look forward to seeing the final version of the Handbook, which incorporates their feedback and the final stage of our empirical findings. Additionally, in July 2023 the Responsibility projects met jointly and produced the Edinburgh Declaration on Responsibility, which was made public and shared and discussed widely on social media. It advanced 'Four Key Shifts on Responsibility Needed to Achieve Responsible AI'. That document became the basis of a large public event in March 2024 hosted by us at the University of Edinburgh, with several hundred attendees including a wider public and nonacademic audience. https://www.technomoralfutures.uk/events-database/technomoral-conversation-responsible-ai The participants in the original declaration also intend to revisit it in a follow-up output that takes stock of the less salutary changes in the environment for responsible AI since 2023, and we expect this will generate substantial nonacademic discussion in policy and tech circles. |
| First Year Of Impact | 2024 |
| Sector | Digital/Communication/Information Technologies (including Software), Healthcare, Government, Democracy and Justice |
| Impact Types | Policy & public services |
| Description | Advisory Board, Scottish Biometrics Commissioner |
| Geographic Reach | National |
| Policy Influence Type | Participation in a guidance/advisory committee |
| URL | https://www.biometricscommissioner.scot/about-us/what-we-do/ |
| Description | Contribution to International AI Safety Report |
| Geographic Reach | Multiple continents/international |
| Policy Influence Type | Contribution to a national consultation/review |
| URL | https://www.gov.uk/government/publications/international-ai-safety-report-2025 |
| Description | Contribution to UK POST Note on Policy Implications of AI |
| Geographic Reach | National |
| Policy Influence Type | Contribution to a national consultation/review |
| URL | https://post.parliament.uk/research-briefings/post-pn-0708/ |
| Description | Data Ethics In Health and Social Care DDI Talent |
| Geographic Reach | Local/Municipal/Regional |
| Policy Influence Type | Influenced training of practitioners or researchers |
| Impact | Improving the ethical knowledge and skill of health care practitioners and data professionals in the health domain, developing capacity for responsible and trustworthy professional development and use of data-driven tools in health |
| URL | https://www.ed.ac.uk/studying/postgraduate/degrees/index.php?r=site/view&id=1041 |
| Description | Evidence Submission to House of Lords AI in Weapons Systems Committee |
| Geographic Reach | National |
| Policy Influence Type | Implementation circular/rapid advice/letter to e.g. Ministry of Health |
| URL | https://publications.parliament.uk/pa/ld5804/ldselect/ldaiwe/16/1602.htm |
| Description | Innovate UK BridgeAI Advisory Board Member |
| Geographic Reach | National |
| Policy Influence Type | Participation in a guidance/advisory committee |
| Description | Input on Judging for Futurescot AI Challenge |
| Geographic Reach | Local/Municipal/Regional |
| Policy Influence Type | Contribution to new or improved professional practice |
| URL | https://stormid.com/futurescot-ai-challenge/ |
| Description | National Statistician's Advisory Committee on Inclusive Data |
| Geographic Reach | National |
| Policy Influence Type | Participation in a guidance/advisory committee |
| Impact | The advice given has shaped the national statistician's plans to address the lack of representative, rich and harmonised datasets pertaining to underserved groups in the UK, such as disabled persons. |
| URL | https://uksa.statisticsauthority.gov.uk/the-authority-board/committees/national-statisticians-adviso... |
| Description | Oversight Board for Ada Lovelace Institute |
| Geographic Reach | National |
| Policy Influence Type | Participation in a guidance/advisory committee |
| Impact | The Ada Lovelace Institute is among the most trusted independent voices in AI policy that has a strong programme of work in public education and engagement around AI and data, as well as considerable policy influence in the UK and abroad. |
| URL | https://www.adalovelaceinstitute.org/about/our-people/ |
| Description | Presentation to MP Ian Murray on Responsible AI Governance and AI Health Challenges |
| Geographic Reach | National |
| Policy Influence Type | Implementation circular/rapid advice/letter to e.g. Ministry of Health |
| Impact | This response was received following the briefing: "A huge thanks for your excellent presentations and the stimulating discussion today. Ian was very impressed with the quality and relevance of the research we shared with him, and we're optimistic there will be plenty of further follow up, including with other members of Labour's Shadow Cabinet. We really appreciate you giving your time for this - it's so important for us to engage early on with what is likely to be the next UK government." |
| Description | Report of NEPC Working Group on AI Sustainability |
| Geographic Reach | National |
| Policy Influence Type | Participation in a guidance/advisory committee |
| URL | https://nepc.raeng.org.uk/media/2aggau2j/foundations-for-sustainable-ai-nepc-report.pdf |
| Description | Testimony to U.S. Senate Committee on AI and Democracy |
| Geographic Reach | North America |
| Policy Influence Type | Implementation circular/rapid advice/letter to e.g. Ministry of Health |
| URL | https://www.hsgac.senate.gov/hearings/the-philosophy-of-ai-learning-from-history-shaping-our-future/ |
| Description | Enabling a Responsible AI Ecosystem |
| Amount | £5,947,659 (GBP) |
| Funding ID | AH/X007146/1 |
| Organisation | Arts & Humanities Research Council (AHRC) |
| Sector | Public |
| Country | United Kingdom |
| Start | 11/2022 |
| End | 11/2025 |
| Description | NHS Digital Partnership (formerly NHSx) |
| Organisation | NHS Digital |
| Country | United Kingdom |
| Sector | Public |
| PI Contribution | We have hosted NHSx/NHS Digital staff in initial scoping conversations and a workshop to shape a handbook on answerability for practitioners that can offer the NHS further support in the Responsible AI and Data Science area, and a demo of a prototype dialogical tool to help organisations manage their answerability requirements. |
| Collaborator Contribution | • Collaborating with the project team to identify case studies of autonomous or semi-autonomous health tools that could be the focus of stakeholder workshops, focus groups, or interviews • Providing or facilitating expert input into, and iterated feedback upon, the proposed practitioner guide/handbook on answerability |
| Impact | We have two team publications as well as two articles under review that were shaped by the NHS collaboration, as well as the Handbook and Dialogical Tool in development. The collaboration is multidisciplinary, drawing from Philosophy, Neuroscience, Law/Sociolegal Studies, and Informatics. |
| Start Year | 2022 |
| Description | AI Assurance Workshop, Validate AI |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | Co-I Rovatsos co-organised and led a workshop and chaired a panel on challenges in AI procurement, highlighting research from our TAS responsibility project |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://validateai.org/upcoming |
| Description | AI as Risk talk |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Industry/Business |
| Results and Impact | Co-I Rovatsos gave an invited talk and audience Q&A delivered to a global audience of professional service experts in data and digital transformation with a focus on financial services. The purpose was to raise awareness of risk management around AI, and to highlight the importance of appropriate decision-making, governance and stakeholder engagement in organisations aiming to use AI responsibly. |
| Year(s) Of Engagement Activity | 2022 |
| Description | Blog Post: Edinburgh Declaration on Responsibility for Responsible AI |
| Form Of Engagement Activity | Engagement focused website, blog or social media channel |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Public/other audiences |
| Results and Impact | Following a joint summer workshop hosted at the University of Edinburgh, bringing together the 4 TAS Responsibility projects, and joined also by members of the TAS Governance and Regulation node, PI Vallor led a joint publication of a Medium blog post on 14 July intended to translate the consensus outcomes of the workshop for a wider audience. The post was signed by 24 members of the TAS programme, widely shared on social media and referenced on stage at the TAS Symposium. It also led to plans for future work on these themes by members of the responsibility projects, and a forthcoming public panel event on 27 March at the University of Edinburgh focused on the Declaration. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://medium.com/@svallor_10030/edinburgh-declaration-on-responsibility-for-responsible-ai-1a98ed2... |
| Description | Blog post 'Edinburgh Declaration on Responsibility for Responsible AI' |
| Form Of Engagement Activity | Engagement focused website, blog or social media channel |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | This public-facing blog post was the output of a workshop that brought together interdisciplinary research teams from the 4 TAS Responsibility projects as well as BRAID and the TAS Governance project - it was intended to be a provocation to the wider Responsible AI community with specific recommendations for reform |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://medium.com/@svallor_10030/edinburgh-declaration-on-responsibility-for-responsible-ai-1a98ed2... |
| Description | Citizens Panel: Scottish Government Digital Innovation Network |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Public/other audiences |
| Results and Impact | Co-I Sethi gave a presentation to and answered questions from a public panel convened to steer the Scottish Government Digital Innovation Network. The presentation introduced participants to the topic of citizens' data and the key issues around data ethics and justice. |
| Year(s) Of Engagement Activity | 2022 |
| Description | Conference Presentation at Lawtomation Days |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Policymakers/politicians |
| Results and Impact | A 20-minute presentation sharing key findings from the socio-legal, empirical work at the second annual Lawtomation Days conference, which brought together legal academics in a cutting-edge forum to discuss the shifting legal landscape of automated decision-making and AI. Networking at the event led to requests for further detail about the key findings of the Making Systems Answer project, and raised awareness of the ongoing research among third sector organisations. The presentation enabled valuable knowledge exchange with colleagues and third sector organisations about key findings of the project, particularly with international colleagues who may be working on complementary projects. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://lawtomation.ie.edu/conference/ |
| Description | Creating Connections in Scotland: Assistive digital technologies, Royal Society of Edinburgh, Edinburgh, United Kingdom, 6 June 2023 |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | This was a two-day conference focusing on Scottish research and innovation, bringing together experts from academia, industry and government to address scientific and technical opportunities and challenges in Scotland. Co-I Nadin Kokciyan gave a short talk on our project to highlight our research and opportunities for dialogical design to enhance responsibility and trustworthiness in assistive digital technologies. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://royalsociety.org/science-events-and-lectures/2023/06/creating-connections-scotland/ |
| Description | DSIT Roundtable on Human-Machine Interaction |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | On 7 Mar 2024 PI Vallor joined a DSIT roundtable hosted at DSIT, on Human-Machine Interaction, in which DSIT sought expert advice from 15 academic, industry and third sector experts across the disciplines, on UK government response to new and emerging HMI risks, especially from generative AI. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Dagstuhl Seminar: Roadmap for Responsible Robotics |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | This was a presentation on our work (by co-I Nadin Kokciyan) to an audience of researchers working in robotics from across many different fields in academia (including humanities, engineering, computer science, law) and industry, who gathered to discuss and articulate goals for responsible robotics to aim at and "develop tractable pathways to their implementation in real-world systems." |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://www.dagstuhl.de/en/seminars/seminar-calendar/seminar-details/23371 |
| Description | Data Island Workshop |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Professional Practitioners |
| Results and Impact | This workshop brought together public engagement practitioners to explore the use of cultural probes as a method to explore discussions around data use |
| Year(s) Of Engagement Activity | 2022 |
| Description | Digital Scotland 2022 talk |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Industry/Business |
| Results and Impact | Co-I Rovatsos gave an invited keynote presentation on "Building Ethical AI Solutions" and audience Q&A with other presenters from industry and the public sector. The purpose was to raise awareness of ethical issues in AI, and to highlight the challenges that come with trying to answer key technical questions when deploying new AI solutions in the public sector. |
| Year(s) Of Engagement Activity | 2022 |
| Description | Digital Scotland panel on AI and Government Services |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | In this 21 Nov 2023 Digital Scotland expert panel on 'The emerging role of AI in government services', in conversation with representatives from Scottish government and Leidos, PI Vallor discussed the challenges of answerability to citizens and publics as part of the challenge of responsible use of AI in the public sector, drawing on her and her teams' work within BRAID and the TAS Responsibility projects. This was followed by several requests (verbal and later by email) by audience members in the public sector and industry, for future conversations and joining of the BRAID network. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://futurescot.com/futurescot-conferences/digitalscotland2023/ |
| Description | Expert Panel at AI: Human By Design? |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | At an AI conference hosted by the organisation Women in Banking and Finance, Co-I Rovatsos spoke on a panel about AI Challenges and Opportunities, drawing from our TAS Responsibility research |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://www.wibf.org.uk/events/ai-human-by-design/ |
| Description | Expert Panel at NYU-KAIST AI Global Governance Summit in New York City |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Public/other audiences |
| Results and Impact | Following the announcement of a partnership between NYU and the Republic of Korea, attended by the President of Korea, the NYU President, and NYU/Meta's Yann LeCun, the event hosted a wide-ranging panel discussion about AI and responsible digital governance by prominent international scholars in the field including PI Vallor. The other panelists included: Professor Kyung-hyun Cho, Deputy Director for NYU Center for Data Science & Courant Institute, Professor Luciano Floridi, Founding Director of the Digital Ethics Center, Yale University, Professor Urs Gasser, Rector of the Hochschule fur Politik, Technical University of Munich, Professor Stefaan Verhulst, Co-Founder & Director of GovLab's Data Program, NYU Tandon School of Engineering, and Professor Jong Chul Ye, Director of Promotion Council for Digital Health, KAIST. The event led to conversations about future collaborations between BRAID and the new NYU-KAIST partnership. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://www.nyu.edu/about/news-publications/news/2023/september/nyu-and-kaist-launch-major-new-initi... |
| Description | Expert Panel at Public Sector Cyber Security Scotland: 'Keeping Our Citizens' Digital Identity Secure', 1 Feb 2024 |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Co-I Nadin Kokciyan talked about the project on the panel, in the context of a discussion of trustworthiness and responsibility concerns around digital identity and autonomous systems. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://events.holyrood.com/event/public-sector-cyber-security-scotland-2024/ |
| Description | Expert Panel at TAS Symposium 2023, Edinburgh |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | PI Vallor spoke on expert panels on Responsible Autonomous Systems and on AI Policy and Regulation, to an audience of industry, government and academic partners, drawing on both research within BRAID and TAS projects. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://tas.ac.uk/bigeventscpt/first-international-symposium-on-trustworthy-autonomous-systems/ |
| Description | Expert Panel at meeting of Security Awareness Special Interest Group, the University of Edinburgh |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | Co-I Nadin Kokciyan spoke on a panel on 'Humans vs machines - how do we get the balance right when it comes to resilience?', talking about our project and the role of dialogical answerability and responsibility in trustworthy autonomous systems. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://www.thesasig.com/calendar/event/23-06-13-edinburgh/ |
| Description | Google's Mediterranean ML Summer School in Thessaloniki |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Postgraduate students |
| Results and Impact | PI Vallor gave a keynote on Responsible AI to machine learning students and early career researchers in the Mediterranean, drawing from research on AI and responsibility in both BRAID and TAS. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://www.m2lschool.org/past-editions/m2l-2023-greece |
| Description | International Population Data Linkage Network Congress |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Co-I Sethi was an invited member of the scientific advisory committee for the Congress, which involved determining the themes and scientific programme for academics and researchers working on data linkage. |
| Year(s) Of Engagement Activity | 2022 |
| URL | https://ipdln.org/2022-conference |
| Description | Invited Panelist at BEYOND Conference |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | Co-I Nadin Kokciyan was a panellist at the BEYOND conference in Manchester, on a panel on "Empowering the creative sector through responsible adoption of Artificial Intelligence". The session was chaired by Innovate UK. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://beyondconference.org/ |
| Description | Invited Panellist at the "Data Readiness for AI" Event |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | Nadin Kokciyan (co-I) was a panellist at the "Data Readiness for AI" event, part of the AI Insight Chat Series led by the Alan Turing Institute. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://iuk-business-connect.org.uk/events/ai-insight-chat-series/ |
| Description | Invited Speaker for "Aligning AI with Human Values. Challenges and Solutions" |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Industry/Business |
| Results and Impact | Nadin Kokciyan was an invited speaker for the "Aligning AI with Human Values. Challenges and Solutions" event organised by the Department of Information Science and Media Studies at the University of Bergen. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.uib.no/en/ai/173501/uib-ai-12-aligning-ai-human-values-challenges-and-solutions |
| Description | Keynote 'The AI Mirror' at Charlotte Ideas Festival |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Public/other audiences |
| Results and Impact | On 1 April, PI Vallor gave a keynote on AI at the Charlotte Ideas Festival in the USA, drawing on themes of responsibility and the importance of the arts and humanities in shaping our future with AI. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://www.charlotteshout.com/events/detail/shannon-vallor |
| Description | Keynote at Turing Fest 2023, Edinburgh |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | PI Vallor's keynote: 'Who is Responsible for Responsible AI? The Ecologies of a Responsible AI Ecosystem' drew from BRAID and TAS Responsibility research to help a broad audience understand the challenges and opportunities we face in building a Responsible AI Ecosystem in the UK. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://turingfest.com/speaker/shannon-vallor/ |
| Description | Living with AI Podcast on AI and Taking Responsibility |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Public/other audiences |
| Results and Impact | On 19 July 2023 PI Vallor joined two other PIs of TAS Responsibility projects to talk about the work, the meaning of responsible AI, and future plans on the Living with AI podcast. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://podcasts.apple.com/au/podcast/ai-taking-responsibility/id1538425599?i=1000621628112 |
| Description | Mapping Trustworthy AI Landscapes |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | This was an all-day engagement activity at the Scottish AI Summit led by Louise Hatherall, at an exhibitors' booth in the main hall of the summit, engaging with schoolchildren and adults alike. Description: "Despite broad stakeholder agreement that the future of AI should be trustworthy, ethical, and inclusive, there remains ambiguity, and tensions in what this future should look like. Questions remain about whose perspectives will shape this landscape, and how these principles are practiced across the AI ecosystem. This event intends to embrace this ambiguous, complex, contextual, and shifting landscape by using an interactive activity to ask: what would a trustworthy AI island look like?" Participants were provided with a picture of an island and craft materials (stickers, photos, newspapers, etc.) and asked to create a collage or montage of their trustworthy AI island. Participants were asked questions to prompt reflection, such as: where are the areas of trustworthy or untrustworthy terrain? Where are the 'hidden treasures' which can facilitate trustworthy AI development and deployment? Where are the borders of a trustworthy landscape? This activity was organised and informed by research undertaken as part of the UKRI Making Systems Answer project: Dialogical Design as a Bridge for Responsibility Gaps in Trustworthy Autonomous Systems. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.scottishaisummit.com/2024-exhibitors |
| Description | 'Medical Humanities and Artificial Intelligence: Boundaries, Methodologies, Practice' - published on The Polyphony |
| Form Of Engagement Activity | Engagement focused website, blog or social media channel |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Public/other audiences |
| Results and Impact | Louise Hatherall published this blog piece on The Polyphony in June 2024, drawing from our research. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://thepolyphony.org/2024/06/28/medical-humanities-artificial-intelligence/ |
| Description | Panel on Responsible AI at SAS Innovate |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Industry/Business |
| Results and Impact | On 8 June 2023 PI Vallor spoke on a panel on AI and Responsible Innovation with other Responsible AI experts at an industry event hosted at the Royal Institution by SAS, a partner on the TAS Responsibility project. One outcome of the event was the expression of interest from another panelist (Ray Eitel-Porter of Accenture) in getting involved with BRAID; he is now on the BRAID Advisory Board. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://www.sas.com/sas/events/innovate-on-tour/23/london.html |
| Description | Participant in FacultyAI 'Intelligence Rising' wargames |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | The AI company FacultyAI sponsored a set of 'wargames' in early October 2023 on AI risk that allowed the expert participants to explore various scenarios of geopolitical and technological risk development, and strategies for managing those risks, over a 5-10 year horizon; PI Vallor was one of the participants (on the team of players representing corporate interests) and discussed several pieces of BRAID and TAS Responsibility research that informed the strategic thinking. |
| Year(s) Of Engagement Activity | 2023 |
| Description | Plenary address: International AI Cooperation and Governance Forum in Hong Kong |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Policymakers/politicians |
| Results and Impact | On 8 Dec 2023 PI Vallor gave a plenary invited talk on AI Governance at the forum, hosted by Tsinghua University and Hong Kong University of Science and Technology, and attended by approximately 750 Hong Kong/Chinese government representatives, international policymakers, AI researchers, and students. The talk highlighted themes of care, responsibility and trust and the importance of integrating these in AI governance, and mentioned UK support for our research in this area through BRAID and TAS. Several plans for future conversation and collaboration resulted, including plans to coauthor work on AI safety and risk governance with a Cambridge AI researcher. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://aicg2023.hkust.edu.hk/ |
| Description | Podcast for ABC Radio National (Australia): 'What is AI Doing to Our Humanity?' |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Public/other audiences |
| Results and Impact | I spoke about my book The AI Mirror and our work in BRAID for this Australian national news podcast. (28 minutes) I later received emails from listeners sharing how the episode informed and shaped their understanding of AI. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.abc.net.au/listen/programs/philosopherszone/shannon-vallor-what-is-ai-doing-to-our-human... |
| Description | Poster at Usher Showcase Event |
| Form Of Engagement Activity | Participation in an open day or visit at my research institution |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Postgraduate students |
| Results and Impact | A poster communicating key insights from the TAS project was displayed at the Usher Showcase, an event sharing work from across the Usher Institute with students, colleagues, and external visitors. Discussions following the showcase fed into an Institute-wide network exploring AI in Health, which intends to foster interdisciplinary collaboration and a space for developing robust grant applications for ongoing research. The poster prompted valuable knowledge exchange with colleagues, leading to discussions about how to further operationalise interdisciplinary work for robust research. It was also valuable to share our findings with clinicians, who occupy some of the front-line roles central to current questions concerning trustworthy autonomous systems in healthcare. |
| Year(s) Of Engagement Activity | 2023 |
| Description | Presentation on 'What is AI?' to Welsh Government |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | I was invited to give an informational presentation on AI, BRAID and Responsible AI to the Welsh Government, alongside a presentation from the ICO. The Chief Social Research Officer at the Welsh Government noted that the employees attending would have limited knowledge of AI and would thus need a non-technical session covering the following themes: the differences between AI, machine learning and large language models, what they do and what they can be used for; and the risks to be aware of when making use of such tools, with particular reference to GDPR/data protection issues and how AI can amplify existing discrimination. Responses to the talk from the Welsh Government included: "We feel the session went very well and it was very positively received by staff, who found it both informative and insightful." The acting Head of Welsh Affairs responded: "I thought Shannon's presentation was excellent and really informative (and a few of my team who attended said so too!). Am looking forward to working with organisations over the next few years to see what initiatives Wales can come up with." |
| Year(s) Of Engagement Activity | 2024 |
| Description | Presentation on Responsible AI to Regional Enterprise Council |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Industry/Business |
| Results and Impact | A presentation to Edinburgh/Scottish regional industry leaders on AI, which received this response from Neil McLean of the Social Enterprise Council: "I just wanted to write to you to say thank you so much for a fascinating and inspiring talk at the EFI tour on Friday morning. I am part of the Regional Enterprise Council (City Region Deal group) and found the whole day fascinating. As someone who spent 15 years working in the Technology industry (some of it in the US), I found your talk and some of the issues you touched upon, absolutely fascinating." |
| Year(s) Of Engagement Activity | 2024 |
| Description | Presentation to UK-US Forum on Science in the Age of AI, Royal Society London |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | A presentation to leading science funding bodies and policymakers in the UK and US on how AI will be changing, advancing, and challenging the integrity of the practice of science. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://royalsociety.org/news-resources/projects/us-uk-forum-2024/ |
| Description | Public Event 'Who is Responsible for Responsible AI?' |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | This was a large public event held at, and broadcast online from, Edinburgh's Playfair Library: a fireside chat among experts involved in the TAS Responsibility projects' July 2023 'Edinburgh Declaration on Responsibility for Responsible AI'. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://efi.ed.ac.uk/event/technomoral-conversations-who-is-responsible-for-responsible-ai/ |
| Description | Scottish AI Summit 2022 |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | Co-I Rovatsos participated in a panel on Responsible Innovation in AI at this national showcase event on AI, with a broad academic, industry and public audience. The main objective was to exchange expert opinions on what organisations need to focus on if they want to develop and deploy AI responsibly in a practical context. |
| Year(s) Of Engagement Activity | 2022 |
| URL | https://www.youtube.com/watch?v=Oxo5fG4RZ5Y |
| Description | Shannon Vallor - 9th March '23: Talk for Monash Prato Dialogues |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | The Monash Data Futures Institute Prato Dialogue Distinguished Lecture Series aims to explore the evolving impact of data science and AI in society by fostering a global dialogue. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://www.youtube.com/watch?v=o66i5w1avcU |
| Description | Shannon Vallor - 14 Nov 23: Keynote at WASP-HS Conference on AI for Humanity |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | The Wallenberg AI, Autonomous Systems and Software Program - Humanities and Society invited me as keynote speaker at the WASP-HS annual conference AI for Humanity and Society 2023. WASP-HS welcomed over 200 researchers, representatives from industry, and policymakers to discuss these issues on 14-15 November at Malmö Live in Malmö, Sweden. Multiple further conversations and contacts have followed from this talk, including a potential collaboration with a Norwegian team on a proposed ERC grant. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://wasp-hs.org/shannon-vallor-is-keynote-speaker-at-ai-for-humanity-and-society-2023/ |
| Description | Shannon Vallor - 23rd May '23: Trustworthy Autonomous Systems podcast on Responsibility |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | A 'Projects' episode focusing on projects dedicated to researching the responsibility aspects of AI (Responsibility Projects, UKRI Trustworthy Autonomous Systems Hub). The podcast was a conversation between Vallor, Lars Kunze and Ibrahim Habli, all PIs on the TAS programme. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://podcasts.apple.com/au/podcast/ai-taking-responsibility/id1538425599?i=1000621628112 |
| Description | Shannon Vallor - 30th May, 23: Podcast for National Technology News/Perspective Publishing, Ross Law |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | To discuss the future of AI and what steps can be taken to ensure it develops responsibly and supports human flourishing, National Technology News reporter Ross Law was joined by PI Shannon Vallor, co-director of the UKRI Arts and Humanities Research Council's BRAID (Bridging Responsible AI Divides) programme and Baillie Gifford Chair in the Ethics of Data and Artificial Intelligence at the Edinburgh Futures Institute (EFI), University of Edinburgh. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://nationaltechnology.co.uk/podcast-archives.php |
| Description | Speaker at 'Navigating Responsibility' Workshop |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Other audiences |
| Results and Impact | Dilara Kekulluoglu (RA) and Nadin Kokciyan (co-I) gave a talk on "Supporting Answerability through Dialogical Interfaces" at the 'Navigating Responsibility' Workshop at the University of Leeds, Leeds, United Kingdom. |
| Year(s) Of Engagement Activity | 2024 |
| Description | TAS Workshop on Responsibility and Answerability with Regulators |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | On 26 Oct the TAS Making Systems Answer team led a workshop for regulators, partners and other stakeholders on Answerability and Responsibility in Trustworthy Autonomous Systems. Representatives from Ofcom, ICO, NHS, PSA, QUB, SAS, CDEI, Audit Scotland, and the Scottish AI Alliance attended. The workshop provided vital input into the first draft of the Practitioners Handbook for Answerability which is a forthcoming output of our project, as well as shaping several of our research publications currently under review. |
| Year(s) Of Engagement Activity | 2023 |
| Description | Talk at Women in Academia event, the University of Edinburgh |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Postgraduate students |
| Results and Impact | At a Women in Academia event on 4 March 2023 sponsored by the Association of Turkish Alumni and Students in Scotland (ATAS), open to the public and attended by people from multiple disciplines and fields, Co-I Nadin Kokciyan presented a talk, "An Agent-based Perspective for Preserving Privacy Online", that included slides from our project showing how it builds on her prior research. |
| Year(s) Of Engagement Activity | 2023 |
| Description | Talk for the University of Copenhagen's 'Science of the Predicted Human' Series |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Postgraduate students |
| Results and Impact | On 17 April 2023 PI Vallor gave a lecture, 'The AI Mirror', to social data science, computer science and machine learning researchers and students at the University of Copenhagen's Copenhagen Center for Social Data Science, drawing from award research on Responsible AI as well as her forthcoming book. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://sodas.ku.dk/events/the-science-of-the-predicted-human-talk-series-professor-shannon-vallor/ |
| Description | University of Bath Talk: An Agent-Based Perspective for Preserving Privacy Online |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Postgraduate students |
| Results and Impact | Co-I Kokciyan gave a talk on her research on multi-agent dialogical systems approaches to trusted online interactions to the ART-AI research group at Bath. The goal was to foster future collaboration opportunities on these approaches. |
| Year(s) Of Engagement Activity | 2022 |
| URL | https://cdt-art-ai.ac.uk/news/events/an-agent-based-perspective-for-preserving-privacy-online-with-n... |
| Description | Voices in the Code: A Public Event from Ada Lovelace Institute |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Public/other audiences |
| Results and Impact | I was invited by Ada Lovelace Institute to host a conversation with author David G. Robinson about his new book, Voices in the Code. The book directly addresses AI policy and governance and the event was aimed at audiences interested in AI governance and democratic accountability for AI. |
| Year(s) Of Engagement Activity | 2022 |
| URL | https://www.adalovelaceinstitute.org/event/voices-in-the-code/ |
| Description | Workshop on Artificial Intelligence for P6/P7 Students |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Local |
| Primary Audience | Schools |
| Results and Impact | On 28 September 2023, co-I Kokciyan spoke about our project to P6/P7 students at St. Ninian's Primary School in Livingston, West Lothian, Scotland, introducing them to the kinds of harms that AI/autonomous systems can cause and that require designing for trustworthiness and responsibility. |
| Year(s) Of Engagement Activity | 2023 |
| Description | Workshop on Constructing Responsible Agency |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | Our project focuses on the construction of forward-looking answerability practices that can bridge so-called 'responsibility gaps' in the use of autonomous and AI systems. However, the initial stage of our work, of which this workshop is a part, looks across philosophy, cognitive science and related fields to understand how apparent responsibility gaps are identified and constructively addressed in other contexts: for example, group agency, and contexts involving implicit biases or brain-based etiologies that complicate conventional attributions of moral and legal responsibility. On 27 May 2022 our project team held a one-day expert workshop bringing together 25 researchers working on these problems from different directions and disciplines (philosophy, neuroscience/cognitive science, AI, robotics and computing) who do not normally interact in the research environment, to identify common themes, conceptual frames and challenges that can be instructive in multiple domains of responsibility, including but not limited to the autonomous systems context. The morning session featured short (20-minute) presentations from five experts, enabling participants to learn more about one another's approaches to responsibility challenges; in the afternoon, our research team presented some key questions and case studies for further discussion and triangulation/coordination across these different methodologies and approaches. |
| Year(s) Of Engagement Activity | 2022 |
| URL | https://twitter.com/CentreTMFutures/status/1530136327496384517?s=20 |
