AI UK: Creating an International Ecosystem for Responsible AI Research and Innovation
Lead Research Organisation:
University of Southampton
Department Name: Sch of Electronics and Computer Sci
Abstract
Artificial Intelligence (AI) can have dramatic effects on industrial sectors and societies (e.g., generative AI, facial recognition, autonomous vehicles). AI UK will pioneer a reflective, inclusive approach to responsible AI development that does not ignore AI's potential harms but acknowledges, understands and mitigates them for diverse societies. AI UK adopts a strongly human-centred approach to ensure societies deploy and use AI responsibly, providing the AI community with a toolkit of technological innovations, case studies, guidelines, policies and frameworks for all key sectors of the economy. To achieve this, AI UK will deliver and drive a collaborative ecosystem of researchers, industry, policymakers and stakeholders that is responsive to the needs of society, led by a team of experienced, well-connected leaders from all four nations of the UK who are committed to an inclusive approach to the management of the programme.
AI UK grows an interdisciplinary ecosystem that adopts Equality, Diversity and Inclusivity (EDI), Trusted Research, and Responsible Research and Innovation (RRI) as fundamental principles. AI UK will champion a research culture where everyone is respected, valued and able to contribute and benefit, and will coordinate the UK's AI research networks and programmes, working with key Research Council (and other funding) programmes, The Alan Turing Institute, The Ada Lovelace Institute, the AI Standards Hub, Centres for Doctoral Training, UKRI AI Research Hubs, and Public Sector Research Establishments (PSREs), as well as the wider landscape of university-based Responsible/Ethical AI research institutes. AI UK will connect UK research to internationally leading research centres and institutions around the world. Ultimately, through this ecosystem, AI UK will deliver world-leading best practices for the design, evaluation, regulation and operation of AI systems that benefit the nation and society. AI UK will invest in the following strands:
Ecosystem Creation and Management: to define the portfolio of thematic areas, translational activities, and strategic partnerships with academia, business and government and associated impact metrics. This will broaden and consolidate the network nationally and internationally and identify course corrections to national policy (e.g., industrial strategy).
Research & Innovation Programmes: to deliver consortia-led research that addresses fundamental challenges from multi-disciplinary and industrial perspectives, integrative research projects that link connected and established research teams across the community, and early-stage and industry-led research and innovation projects to expand the UK's ecosystem and develop the next generation of leaders.
Skills Programme: to translate research into skills frameworks and training for users, customers, and developers of AI, and to contribute to the UK AI Strategy's call for an Online Academy.
Public and Policy Engagement: working with the network of policymakers, regulators, and key stakeholders to respond to emerging concerns and the need for new standards, build capacity for public accountability, and provide evidence-based advice to the public and policymakers.
Organisations
- University of Southampton (Collaboration, Lead Research Organisation)
- York Teaching Hospital NHS Foundation Trust (Collaboration)
- Northwestern University (Collaboration)
- Facebook (Collaboration)
- Government of the UK (Collaboration)
- Bar-Ilan University (Collaboration)
- Bloomberg (Collaboration)
- British Standards Institution (BSI Group) (Collaboration)
- Bangor University (Collaboration)
- NHS ENGLAND (Collaboration)
- HEALTH AND SAFETY EXECUTIVE (HSE) (Collaboration)
- Accenture (Collaboration)
- University of Almería (Collaboration)
- London South Bank University (Collaboration)
- Institute of Electrical and Electronics Engineers (IEEE) (Collaboration)
- Cedars-Sinai Medical Center (Collaboration)
- University of Toronto (Collaboration)
- SHEFFIELD HALLAM UNIVERSITY (Collaboration)
- Australian National University (ANU) (Collaboration)
- BAE Systems (Collaboration)
- PricewaterhouseCoopers (PwC) (Collaboration)
- Northumbria University (Collaboration)
- Coventry and Warwickshire Partnership NHS Trust (Collaboration)
- University of Birmingham (Collaboration)
- Robert Gordon University (Collaboration)
- Defence Science & Technology Laboratory (DSTL) (Collaboration)
- Queen Mary University of London (Collaboration)
- Department for Transport (Collaboration)
- Alan Turing Institute (Collaboration)
- Google (Collaboration)
- Informatica (Collaboration)
- UNIVERSITY COLLEGE LONDON (Collaboration)
- UNIVERSITY OF EDINBURGH (Collaboration)
- Ada Lovelace Institute (Collaboration)
- George Washington University (Collaboration)
- National Institute of Informatics (NII) (Collaboration)
- Swansea University (Collaboration)
- University of the Arts London (Collaboration)
- National Physical Laboratory (Collaboration)
- Medtronic (Collaboration)
- Canon (Collaboration)
- University of Glasgow (Collaboration)
- University of Sussex (Collaboration)
- University of Nottingham (Collaboration)
- Open University (Collaboration)
- Microsoft Research (Collaboration)
- Balderton Capital (Collaboration)
- University of Oxford (Collaboration)
- National Institute for Health and Care Excellence (NICE) (Collaboration)
- Loughborough University (Collaboration)
- University of Cambridge (Collaboration)
- NIHR MindTech MedTech Co-operative (Collaboration)
- Royal Academy of Engineering (Collaboration)
- National Institute of Standards & Technology (NIST) (Collaboration)
- University of Sheffield (Collaboration)
- Newton Europe (Collaboration)
- National Aeronautics and Space Administration (NASA) (Collaboration)
- University of York (Collaboration)
- Trilateral Research and Consulting LLP (Collaboration)
- BMT Group (Collaboration)
- Mishcon de Reya (Collaboration)
- East London NHS Foundation Trust (Collaboration)
- Queen's University Belfast (Collaboration)
- KING'S COLLEGE LONDON (Collaboration)
- University of L'Aquila (Collaboration)
- The Law Society of England and Wales (Collaboration)
- University of Bristol (Collaboration)
- MEDICINES AND HEALTHCARE PRODUCTS REGULATORY AGENCY (MHRA) (Collaboration)
- Prime Minister's Office (Collaboration, Project Partner)
- Medtronic (Project Partner)
- Government Office for Science (Project Partner)
- Johns Hopkins University (Project Partner)
- Sensata Technologies (Project Partner)
- Partnership on AI (Project Partner)
- Centre for Data Science and AI (Project Partner)
- Google UK (Project Partner)
- Maritime and Coastguard Agency (Project Partner)
- Schmidt Futures (Project Partner)
- Department for Transport (Project Partner)
- Ministry of Sound (Project Partner)
- Royal College of Surgeons of England (Project Partner)
- HORIZON Digital Economy Research (Project Partner)
- University of Southern California (Project Partner)
- Metropolitan Police Service (Project Partner)
- Lester Aldridge LLP (Project Partner)
- Black Swan Data (Project Partner)
- British Telecommunications Plc (Project Partner)
- ARC Centre of Excellence (Project Partner)
- Dragonfly Labs (Project Partner)
- Privitar (Project Partner)
- University of Texas at Austin (Project Partner)
Publications
Abioye A
(2024)
Mapping Safe Zones for Co-located Human-UAV Interaction
Ali Y
(2024)
Perception and appropriation of a web-based recovery narratives intervention: qualitative interview study.
in Frontiers in digital health
Alidoost Nia M
(2024)
Efficient Model Verification at Runtime through Adaptive Dynamic Approximation
in ACM Transactions on Autonomous and Adaptive Systems
Arcanjo B
(2024)
Aggregating Multiple Bio-Inspired Image Region Classifiers for Effective and Lightweight Visual Place Recognition
in IEEE Robotics and Automation Letters
Astill Wright L
(2025)
Improving the Utility, Safety, and Ethical Use of a Passive Mood-Tracking App for People With Bipolar Disorder Using Coproduction: Qualitative Focus Group Study
in JMIR Formative Research
| Title | AIKONIC - Visions of AI through Clay Robots & Data Shadow Theatre |
| Description | The film was co-created with young participants from Streets of Growth in Tower Hamlets. "AIKONIC" is a collaboration between The People Speak and Ambient Information Systems (Manu Luksch and Mukul Patel), part of the Public Voices in AI programme |
| Type Of Art | Film/Video/Animation |
| Year Produced | 2025 |
| Impact | We are working with Avaz to incorporate the film as their advocacy tool |
| URL | https://vimeo.com/showcase/11542531/video/1049694189 |
| Title | Knowing AI, Knowing U - Data Shadow Theatre |
| Description | The film was co-created with adult participants from MIND Tower Hamlets. "Knowing AI, Knowing U" is a collaboration between The People Speak and Ambient Information Systems (Manu Luksch and Mukul Patel), part of the Public Voices in AI programme |
| Type Of Art | Film/Video/Animation |
| Year Produced | 2024 |
| Impact | We are working with Avaz to incorporate the film as their advocacy tool |
| URL | https://vimeo.com/showcase/11542531 |
| Title | Poetry - KAIKU / AIKONIC |
| Description | Participants created poetry & creative writing during the sessions |
| Type Of Art | Creative Writing |
| Year Produced | 2024 |
| Impact | Impact on the participants who were able to write about their thoughts and experiences in a creative way |
| Title | RI Prompts and Practice Cards video |
| Description | A video exploring the value of responsible innovation and a tool (prompts and practice cards) developed in collaborative research between Horizon Digital Economy Research and the Trustworthy Autonomous Systems Hub with Responsible AI UK taking RRI activities forward |
| Type Of Art | Film/Video/Animation |
| Year Produced | 2025 |
| URL | https://youtu.be/L7PSXt8Abas?si=4NmN9jwT8fQi4qp5 |
| Title | Responsible AI Music Artistic Mini-Projects |
| Description | The aim of these mini-projects is to create impact and interest in Responsible AI (RAI) concerns of bias in AI models. These mini-projects use AI tools such as low-resource AI models with small datasets to showcase the challenges of bias in AI and how RAI techniques can be used to address them. These mini-projects are part of a 12-month project, "Responsible AI international community to reduce bias in AI music generation and analysis", which built an international community to address Responsible AI (RAI) challenges of bias in AI music generation and analysis. |
| Type Of Art | Composition/Score |
| Year Produced | 2025 |
| Impact | The aim of these artistic mini-projects is to create impact and interest in Responsible AI (RAI) concerns of bias in AI models. These mini-projects use AI tools such as low-resource AI models with small datasets and were supported by the project team and industry partners. The mini-projects showcase the challenges of bias in AI and how RAI techniques can be used to address them. Three artistic mini-projects were selected from an open call and funded to explore and address the bias inherent in mainstream AI models by creating music in genres often overlooked or marginalised by such systems. Congratulations to our selected projects, and thanks to all who applied to our open call; we received an overwhelming number of high-quality responses. We held a hybrid launch event at Rich Mix, London on 13 Feb 2025, featuring music and Q&A with the artists. Find out more about the artists, their pieces, their Responsible AI Music composition processes, and a video recording of the launch event at the online showcase: projects.musicrai.org |
| URL | https://music-rai.github.io/projects/ |
| Title | Sculpture |
| Description | Dry clay sculptures made by the young participants representing robots of their design |
| Type Of Art | Artefact (including digital) |
| Year Produced | 2024 |
| Impact | Impact on the participants who were able to creatively visualise their thoughts, perceptions and aspirations about AI and technology to enable an entry point for a deeper conversation |
| URL | https://vimeo.com/showcase/11542531/video/1049694189 |
| Description | 7014-2024 - IEEE Standard for Ethical Considerations in Emulated Empathy in Autonomous and Intelligent Systems |
| Geographic Reach | Multiple continents/international |
| Policy Influence Type | Contribution to new or improved professional practice |
| Impact | This standard defines a model for ethical considerations and practices in the design, creation, and use of empathic technology, incorporating systems that have the capacity to identify, quantify, respond to, or simulate affective states, such as emotions and cognitive states. This includes coverage of "affective computing," "emotion artificial intelligence," and related fields. |
| URL | https://ieeexplore.ieee.org/document/10576666 |
| Description | AI and Mental Healthcare - ethical and regulatory considerations |
| Geographic Reach | National |
| Policy Influence Type | Participation in a guidance/advisory committee |
| Impact | It provides guidelines for organisations like MHRA to help create effective regulatory practices. |
| URL | https://post.parliament.uk/research-briefings/post-pn-0738/ |
| Description | AI and Mental Healthcare - opportunities and delivery considerations |
| Geographic Reach | National |
| Policy Influence Type | Participation in a guidance/advisory committee |
| Impact | It is changing awareness of DeathTech and the ethical challenges inherent in AI applications for death and dying. |
| URL | https://post.parliament.uk/research-briefings/post-pn-0737/ |
| Description | Advised Canadian Government on PIPEDA (privacy law) for children |
| Geographic Reach | North America |
| Policy Influence Type | Contribution to a national consultation/review |
| Impact | Fed into Canadian Privacy legislation |
| URL | https://www.priv.gc.ca/en/privacy-topics/privacy-laws-in-canada/the-personal-information-protection-... |
| Description | Artificial Intelligence (Regulation and Employment Rights) Bill |
| Geographic Reach | National |
| Policy Influence Type | Contribution to a national consultation/review |
| URL | https://www.tuc.org.uk/research-analysis/reports/artificial-intelligence-regulation-and-employment-r... |
| Description | College of Policing Data Ethics and Data-Driven Technologies APP Consultation |
| Geographic Reach | National |
| Policy Influence Type | Contribution to a national consultation/review |
| URL | https://researchportal.northumbria.ac.uk/ws/portalfiles/portal/171215036/College_of_Policing_Data_et... |
| Description | Contribution to two POSTnotes on AI for mental health |
| Geographic Reach | National |
| Policy Influence Type | Contribution to a national consultation/review |
| URL | https://post.parliament.uk/research-briefings/post-pn-0738/ |
| Description | Engagement with Andrew Pakes MP |
| Geographic Reach | National |
| Policy Influence Type | Implementation circular/rapid advice/letter to e.g. Ministry of Health |
| Description | Engagement with Dr Allison Gardner MP |
| Geographic Reach | National |
| Policy Influence Type | Implementation circular/rapid advice/letter to e.g. Ministry of Health |
| Description | Ethical review to support Responsible Artificial Intelligence (AI) in policing |
| Geographic Reach | Local/Municipal/Regional |
| Policy Influence Type | Contribution to a national consultation/review |
| Description | Evidence and Specialist Adviser to House of Lords Committee on AI and Weapons System |
| Geographic Reach | National |
| Policy Influence Type | Participation in a guidance/advisory committee |
| Description | Evidence given at the All-Party Parliamentary Group on Artificial Intelligence |
| Geographic Reach | National |
| Policy Influence Type | Participation in a guidance/advisory committee |
| URL | https://publications.parliament.uk/pa/cm/cmallparty/241120/artificial-intelligence.htm |
| Description | Evidence to House of Lords Committee on Digital and Communications -- on Large Language Models |
| Geographic Reach | National |
| Policy Influence Type | Participation in a guidance/advisory committee |
| Description | Governance group on Scottish AI Alliance: Participatory Harm Auditing Workbenches and Methodologies [PHAWM] |
| Geographic Reach | National |
| Policy Influence Type | Participation in a guidance/advisory committee |
| Description | House of Lords Communications Committee (WH/MC) |
| Geographic Reach | National |
| Policy Influence Type | Participation in a guidance/advisory committee |
| Description | Integrating RRI and EDI into research |
| Geographic Reach | National |
| Policy Influence Type | Influenced training of practitioners or researchers |
| URL | https://www.rai.ac.uk/events/integrating-rri-and-edi-into-research |
| Description | International AI Safety Report |
| Geographic Reach | Multiple continents/international |
| Policy Influence Type | Citation in other policy documents |
| URL | https://assets.publishing.service.gov.uk/media/679a0c48a77d250007d313ee/International_AI_Safety_Repo... |
| Description | Invited speaker at Scottish Parliament event on gig work |
| Geographic Reach | National |
| Policy Influence Type | Participation in a guidance/advisory committee |
| Description | MHRA Roundtables on Digital Mental Health Regulation |
| Geographic Reach | National |
| Policy Influence Type | Contribution to a national consultation/review |
| Impact | Has led to ongoing collaboration with MHRA including joint participation in the University of Nottingham Health Policy and Practice Impact Leaders Programme, an event held in April 2024 on adverse events in digital mental health https://institutemh.org.uk/research/mindtech/1856-adverse-events-in-digital-mental-health-may-2024, the publication of guidance https://assets.publishing.service.gov.uk/media/67addcd62c594609b38acd42/2025.02.10_MHRA_guidance_on_DMHT_-_Device_characterisation_regulatory_qualification_and_classification__1_.pdf, a workshop on regulating AI in Mental Health, and several joint talks and presentations at conferences |
| URL | https://assets.publishing.service.gov.uk/media/67addcd62c594609b38acd42/2025.02.10_MHRA_guidance_on_... |
| Description | PROBabLE Futures Response to HMICFRS Consultation on Proposed policing inspection programme and framework 2025-29 |
| Geographic Reach | National |
| Policy Influence Type | Contribution to a national consultation/review |
| URL | https://hmicfrs.justiceinspectorates.gov.uk/publications/proposed-policing-inspection-programme-and-... |
| Description | Policy engagement on responsible AI adoption |
| Geographic Reach | National |
| Policy Influence Type | Contribution to a national consultation/review |
| Description | Prof. M Liakata contributed to RAi UK's response to the DSIT consultation on the proposed AI Management Essentials Toolkit (January 2025) |
| Geographic Reach | National |
| Policy Influence Type | Contribution to a national consultation/review |
| URL | https://rai.ac.uk/media/reports/ |
| Description | RAi UK response to the DSIT Public Consultation on the AI Management Essentials Tool |
| Geographic Reach | National |
| Policy Influence Type | Contribution to a national consultation/review |
| URL | https://rai.ac.uk/media/reports/ |
| Description | RRI consultation for RaiCo conference with UKAEA |
| Geographic Reach | National |
| Policy Influence Type | Influenced training of practitioners or researchers |
| Impact | Participants were interested in responsible innovation practices and sought hardcopies of the RI cards to use within their own organisations, including Health and Safety Executive (HSE) and Robots Artificial Intelligence Collaboration (RAiCo). |
| URL | https://www.linkedin.com/posts/horiamaior_aisafetysummit-ethicsinai-responsibleinnovation-activity-7... |
| Description | Responsible AI Governance: A Response to UN Interim Report on Governing AI for Humanity |
| Geographic Reach | Multiple continents/international |
| Policy Influence Type | Contribution to a national consultation/review |
| URL | https://www.rai.ac.uk/responsible-ai-govern |
| Description | Responsible Ai UK project KP0016-DSIT advisory study on Digital Trust |
| Geographic Reach | National |
| Policy Influence Type | Participation in a guidance/advisory committee |
| Description | Workshop participation and written submission for UNICEF consultation on neurotechnology and children |
| Geographic Reach | Multiple continents/international |
| Policy Influence Type | Participation in a guidance/advisory committee |
| Description | Automated Empathy - Globalising International Standards (AEGIS): Japan and Ethically Aligned Regions |
| Amount | £363,427 (GBP) |
| Funding ID | Responsible AI IA008 Grant Ref: EP/Y009800/1 |
| Organisation | University of Southampton |
| Sector | Academic/University |
| Country | United Kingdom |
| Start | 11/2023 |
| End | 05/2025 |
| Description | C581 Commission |
| Amount | £70,000 (GBP) |
| Organisation | Home Office |
| Sector | Public |
| Country | United Kingdom |
| Start | 01/2025 |
| End | 04/2025 |
| Description | Creative Practice Catalyst Seed Fund |
| Amount | £3,980 (GBP) |
| Organisation | King's College London |
| Sector | Academic/University |
| Country | United Kingdom |
| Start | 03/2025 |
| End | 07/2025 |
| Description | EPSRC Centre for Doctoral Training in Horizon: Creating Our Lives in Data |
| Amount | £5,861,735 (GBP) |
| Funding ID | EP/S023305/1 |
| Organisation | Engineering and Physical Sciences Research Council (EPSRC) |
| Sector | Public |
| Country | United Kingdom |
| Start | 08/2019 |
| End | 03/2028 |
| Description | Horizon: Trusted Data-Driven Products |
| Amount | £4,075,505 (GBP) |
| Funding ID | EP/T022493/1 |
| Organisation | Engineering and Physical Sciences Research Council (EPSRC) |
| Sector | Public |
| Country | United Kingdom |
| Start | 12/2020 |
| End | 12/2025 |
| Description | RAKE: Responsible Innovation Advantage in Knowledge Exchange (RAI UK Impact Accelerator) - Researcher Co-I |
| Amount | £185,000 (GBP) |
| Organisation | United Kingdom Research and Innovation |
| Sector | Public |
| Country | United Kingdom |
| Start | 11/2023 |
| End | 04/2025 |
| Description | The Nurture Network: Promoting Young People's Mental Health in a Digital World |
| Amount | £1,020,389 (GBP) |
| Funding ID | ES/S004467/1 |
| Organisation | Economic and Social Research Council |
| Sector | Public |
| Country | United Kingdom |
| Start | 07/2018 |
| End | 06/2020 |
| Description | UnBias: Emancipating Users Against Algorithmic Biases for a Trusted Digital Economy |
| Amount | £1,140,593 (GBP) |
| Funding ID | EP/N02785X/1 |
| Organisation | Engineering and Physical Sciences Research Council (EPSRC) |
| Sector | Public |
| Country | United Kingdom |
| Start | 07/2016 |
| End | 05/2019 |
| Description | University of St Andrews Impact and Innovation Fund |
| Amount | £15,000 (GBP) |
| Organisation | University of St Andrews |
| Sector | Academic/University |
| Country | United Kingdom |
| Start | 01/2025 |
| End | 06/2025 |
| Description | What's Up With Alex (WUWA)? Animated Storytelling for Mental Health Literacy Among Young People |
| Amount | £808,690 (GBP) |
| Funding ID | AH/T003804/1 |
| Organisation | Arts & Humanities Research Council (AHRC) |
| Sector | Public |
| Country | United Kingdom |
| Start | 11/2019 |
| End | 07/2022 |
| Title | Responsible Ai UK project KP0016-Dashboard for identifying and summarising moments of change in mental health |
| Description | In a number of publications we have focussed on identifying moments of change in individuals' social media data as well as on summarising the context around such changes. This tool brings together the findings of these works to enable clinicians and individuals to identify changes in a patient's affect, behaviour and needs, as manifested in content they share online, and summarise the context around them. |
| Type Of Material | Improvements to research infrastructure |
| Year Produced | 2025 |
| Provided To Others? | No |
| Impact | This has not yet been released. |
| URL | https://social-media-timeline-dashboard.vercel.app/ |
| Title | Responsible Ai UK project KP0016-Evaluation methods and tool for systems based on generative AI |
| Description | Within a number of publications we have been exploring metrics and tasks for evaluating systems based on generative AI, particularly for summarisation and longitudinal tasks. We are in the process of creating an evaluation platform that brings together a range of tasks and metrics and allows individuals to: (1) browse available evaluation strategies for particular use cases and run evaluations with their own AI model or dataset, comparing results to existing evaluations; (2) configure their own evaluation from a range of different tasks, aspects and metrics, with recommendations based on similarities to other sets of goals and requirements. |
| Type Of Material | Improvements to research infrastructure |
| Year Produced | 2025 |
| Provided To Others? | No |
| Impact | Not yet released but development is ongoing and several companies have expressed interest in using it. |
| URL | https://adsolve-evaluation-platform.vercel.app/ |
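The platform described in this entry has not yet been released, so as a purely hypothetical illustration of the configure-your-own-evaluation idea (the class, function, and metric names below are assumptions for the sketch, not the platform's actual API), an evaluation could be composed from a task and a set of metrics like this:

```python
from dataclasses import dataclass, field
from typing import Callable

# A metric maps (prediction, reference) to a score in [0, 1].
Metric = Callable[[str, str], float]

def exact_match(pred: str, ref: str) -> float:
    """1.0 if the normalised prediction equals the reference, else 0.0."""
    return 1.0 if pred.strip().lower() == ref.strip().lower() else 0.0

def token_overlap(pred: str, ref: str) -> float:
    """Fraction of reference tokens that also appear in the prediction."""
    p, r = set(pred.lower().split()), set(ref.lower().split())
    return len(p & r) / len(r) if r else 0.0

@dataclass
class EvalConfig:
    """A user-configured evaluation: one task, several named metrics."""
    task: str
    metrics: dict[str, Metric] = field(default_factory=dict)

    def run(self, preds: list[str], refs: list[str]) -> dict[str, float]:
        # Average each metric over the prediction/reference pairs.
        n = len(refs)
        return {name: sum(m(p, r) for p, r in zip(preds, refs)) / n
                for name, m in self.metrics.items()}

config = EvalConfig(task="summarisation",
                    metrics={"exact": exact_match, "overlap": token_overlap})
scores = config.run(["the cat sat"], ["the cat sat"])
```

Separating the metric registry from the task configuration is what makes the "browse strategies, then compose your own" workflow possible: new metrics plug in without touching the runner.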
| Title | Responsible Ai UK project KP0016-RECV reasoning logical framework |
| Description | The term "reasoning" is often used interchangeably to denote critical thinking, decision-making, and logical reasoning. We define reasoning as a series of inference steps linking claims and evidence to reach a conclusion, and we consider it to comprise the interplay of three interrelated components: types, processes, and tasks. This is the basis of our RECV framework, which is being used both to analyse academic publications in the space of reasoning and to create a benchmark for evaluating reasoning for claim verification. |
| Type Of Material | Improvements to research infrastructure |
| Year Produced | 2025 |
| Provided To Others? | No |
| Impact | Not yet released. |
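As a minimal sketch of the definition in this entry (the class and field names are illustrative assumptions, not the RECV implementation), reasoning as a series of inference steps linking claims and evidence to a conclusion could be modelled as:

```python
from dataclasses import dataclass

@dataclass
class InferenceStep:
    """One step in a reasoning chain: a claim, the evidence invoked,
    and the kind of inference applied (e.g. deductive, abductive)."""
    claim: str
    evidence: str
    inference_type: str

@dataclass
class ReasoningChain:
    """A series of inference steps leading to a verification verdict."""
    steps: list[InferenceStep]
    conclusion: str  # e.g. "SUPPORTED" or "REFUTED"

chain = ReasoningChain(
    steps=[InferenceStep(
        claim="X was announced in 2020",
        evidence="Press release dated 2020-03-01",
        inference_type="deductive")],
    conclusion="SUPPORTED",
)
```

Making the inference type an explicit field on each step mirrors the entry's point that claim verification may mix deductive and abductive reasoning within a single chain.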
| Title | Toolkit for specification, validation and verification of social, legal, ethical, empathetic and cultural requirements for autonomous agents |
| Description | A growing range of applications use AI and other autonomous agents to perform tasks that raise social, legal, ethical, empathetic, and cultural (SLEEC) concerns. To support a framework for the consideration of these concerns, we developed SLEEC-TK, a toolkit for specification, validation, and verification of SLEEC requirements. SLEEC-TK is an Eclipse-based environment for defining SLEEC rules in a domain-specific language with a timed process algebraic semantics. SLEEC-TK uses model checking to identify redundant and conflicting rules, and to verify conformance of design models with SLEEC rules. |
| Type Of Material | Improvements to research infrastructure |
| Year Produced | 2023 |
| Provided To Others? | Yes |
| Impact | Accepted tutorials at CORE A*-ranked conferences (https://portal.core.edu.au/conf-ranks/): the International Conference on Automated Software Engineering 2023 and the International Conference on Software Engineering 2025. |
| URL | https://www.sciencedirect.com/science/article/pii/S0167642324000418 |
| Title | Unsupervised methods for clinically meaningful summarisation on social media |
| Description | A hybrid abstractive summarisation approach combining hierarchical VAEs with LLMs (LLaMA-2) to produce clinically meaningful summaries from social media user timelines, appropriate for mental health monitoring. |
| Type Of Material | Improvements to research infrastructure |
| Year Produced | 2024 |
| Provided To Others? | Yes |
| Impact | Not known yet. |
| Title | Real-world UsE CaSEs NoRmatiVe REquirements Repository (RESERVE) |
| Description | A repository of use cases covering social, legal, ethical, empathetic and cultural (SLEEC) requirements for AI and autonomous agents. |
| Type Of Material | Database/Collection of data |
| Year Produced | 2024 |
| Provided To Others? | Yes |
| Impact | Several research papers based on use cases from the RESERVE repository. |
| URL | https://www.cs.toronto.edu/~sleec/ |
| Title | Responsible Ai UK project KP0016-CLPsych 2025 shared task dataset |
| Description | The CLPsych 2025 shared task combines longitudinal modelling in social media timelines with evidence generation (Chim et al., 2024), promoting the generation of humanly understandable rationales that support recognizing mental states as they dynamically change over time. The dataset contains annotations on dynamic adaptive and mal-adaptive self-states along with annotated evidence that supports the assignment of self-state labels within 30 social media timelines from Reddit, annotated by clinical experts. |
| Type Of Material | Database/Collection of data |
| Year Produced | 2025 |
| Provided To Others? | No |
| Impact | Enabling research on automatically annotating, providing and summarising evidence and explanations of model and annotation outputs, as well as performing the CLPsych 2025 shared task. |
| URL | https://clpsych.org/shared-task/ |
| Title | Responsible AI UK project KP0016-Reasoning in Evidence-based Claim Verification (RECV) |
| Description | RECV is the first reasoning benchmark for claim verification. The benchmark comprises three datasets, curated from existing resources targeting different domains: VitaminC (Schuster et al., 2021) from Wikipedia, CLIMATE-FEVER (Diggelmann et al., 2020) from online claims and Wikipedia, and PHEMEPlus (Dougrez-Lewis et al., 2022) from rumours circulating on social media and associated evidence from news articles. The claims involve increasing levels of complexity as we move from VitaminC to PHEMEPlus, often requiring deductive and/or abductive reasoning. |
| Type Of Material | Database/Collection of data |
| Year Produced | 2025 |
| Provided To Others? | No |
| Impact | None yet as the dataset has not yet been released. |
| URL | https://arxiv.org/html/2402.10735v3 |
| Title | Zooniverse questionnaire on motivation and basic psychological needs |
| Description | This is the questionnaire data associated with the publication "Exploring the Relationship between Basic Psychological Needs and Motivation in Online Citizen Science" in ACM Transactions on Social Computing. Understanding engagement with and motivations to contribute to online citizen science projects can improve user experience and aid in attracting and retaining users. This paper proposes that the fundamental grounding of self-determination theory, being the satisfaction of Basic Psychological Needs, is a fruitful lens through which to understand experiences and motivation to take part in online citizen science. Using an online survey, this paper explores how volunteers in online citizen science experience satisfaction and dissatisfaction of needs for autonomy, competence, and relatedness, and how this relates to both their behaviour and motivations to take part. Results suggest that participation in online citizen science on the Zooniverse relates primarily to the satisfaction rather than the dissatisfaction of needs, suggesting that taking part is psychologically beneficial to volunteers. Autonomy is the most supported need and relatedness the least. Whilst results are positive, we observe that measures of need satisfaction must be contextually relevant, especially given the complex nature of autonomy in online citizen science, to aid in further understanding of these relationships. Potential factors related to participation in the Zooniverse that could be enhanced to increase volunteer satisfaction and retention are discussed. |
| Type Of Material | Database/Collection of data |
| Year Produced | 2024 |
| Provided To Others? | Yes |
| Impact | Publication: https://doi.org/10.1145/3702210 |
| URL | https://rdmc.nottingham.ac.uk/handle/internal/11587 |
| Description | Ada Lovelace Institute (RAi) |
| Organisation | Ada Lovelace Institute |
| Country | United Kingdom |
| Sector | Charity/Non Profit |
| PI Contribution | Identification of research synergies. |
| Collaborator Contribution | Identification of research synergies and involvement with RAi funded research. |
| Impact | This collaboration is multi-disciplinary and involves a number of funded research projects (such as Public Voices on AI). |
| Start Year | 2023 |
| Description | BAE Systems (IAB) |
| Organisation | BAE Systems |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Opportunity to engage with RAi. |
| Collaborator Contribution | Membership of the RAi IAB |
| Impact | Review of Annual Report. |
| Start Year | 2024 |
| Description | BMT (IAB) |
| Organisation | BMT Group |
| Country | United Kingdom |
| Sector | Charity/Non Profit |
| PI Contribution | Opportunity to engage with RAi |
| Collaborator Contribution | Membership of Industry Advisory Board (IAB) |
| Impact | Review of Annual Report |
| Start Year | 2024 |
| Description | Balderton (IAB) |
| Organisation | Balderton Capital |
| Country | United Kingdom |
| Sector | Private |
| PI Contribution | Opportunity to engage with RAi |
| Collaborator Contribution | Membership of the Industry Advisory Board (IAB) |
| Impact | Review of Annual Report |
| Start Year | 2024 |
| Description | British Standards Institute (BSI) |
| Organisation | British Standards Institute (BSI Group) |
| Country | United Kingdom |
| Sector | Charity/Non Profit |
| PI Contribution | Engagement with BSI in determining real world challenges to inform research. |
| Collaborator Contribution | Attendance at RAi events, including a sandpit, and engagement with RAi researchers. |
| Impact | To be determined. |
| Start Year | 2024 |
| Description | CSHS (IAB) |
| Organisation | Cedars-Sinai Medical Center |
| Country | United States |
| Sector | Hospitals |
| PI Contribution | Opportunity to engage with RAi |
| Collaborator Contribution | Membership of the Industry Advisory Board (IAB) |
| Impact | Review of Annual Report |
| Start Year | 2024 |
| Description | DSTL (IAB) |
| Organisation | Defence Science & Technology Laboratory (DSTL) |
| Country | United Kingdom |
| Sector | Public |
| PI Contribution | Opportunity to engage with RAi. |
| Collaborator Contribution | Membership of the Industry Advisory Board (IAB). |
| Impact | Review of Annual Report. |
| Start Year | 2024 |
| Description | Dft (IAB) |
| Organisation | Department for Transport |
| Department | DfT, DVLA and VOSA |
| Country | United Kingdom |
| Sector | Public |
| PI Contribution | Opportunity to engage with RAi |
| Collaborator Contribution | Membership of Industry Advisory Board (IAB) |
| Impact | Review of Annual Report |
| Start Year | 2024 |
| Description | El Arte del Buen Vivir |
| Organisation | University of Almería |
| Country | Spain |
| Sector | Academic/University |
| PI Contribution | El Arte del Buen Vivir is a cooperative business located in Almería, Spain, that is supporting the evaluation of EmpathAI, a griefbot designed at the University of Nottingham to support those who are experiencing grief. El Arte del Buen Vivir provides training in accompanying those with terminal illness at the end of life or receiving palliative care. |
| Collaborator Contribution | El Arte del Buen Vivir is supporting the recruitment of grief therapists and people who are experiencing grief. |
| Impact | Talk to the general public on the 20th of Feb 2025 (Almería). Networking with colleagues at the Almería, Granada and Sevilla universities to collaborate on an international project comparing different cultures and rituals related to death and dying, including perceptions and adoption of DeathTech. |
| Start Year | 2024 |
| Description | GO-Science |
| Organisation | Government of the UK |
| Department | Government Office for Science |
| Country | United Kingdom |
| Sector | Public |
| PI Contribution | Policy engagement around the adoption of responsible AI. |
| Collaborator Contribution | Attendance at RAi events and discussion with researchers. |
| Impact | To be determined. |
| Start Year | 2024 |
| Description | Google (IAB) |
| Organisation | Google |
| Department | Google UK |
| Country | United Kingdom |
| Sector | Private |
| PI Contribution | Opportunity to engage with RAi |
| Collaborator Contribution | Membership of the Industry Advisory Board (IAB) |
| Impact | Review of Annual Report |
| Start Year | 2024 |
| Description | HSE (IAB) |
| Organisation | Health and Safety Executive (HSE) |
| Country | United Kingdom |
| Sector | Public |
| PI Contribution | Opportunity to engage with RAi |
| Collaborator Contribution | Membership of the Industry Advisory Board (IAB) |
| Impact | Review of Annual Report. |
| Start Year | 2024 |
| Description | IEEE, partnering Standards organisation for AEGIS |
| Organisation | Institute of Electrical and Electronics Engineers (IEEE) |
| Department | IEEE Standards Association |
| Country | United States |
| Sector | Charity/Non Profit |
| PI Contribution | AEGIS are working with IEEE, through the creation of working groups, to develop P7014.1 (a technical standard to address the overlap between simulated empathy and human-AI partnering). Team members have previously delivered P7014 (Standard for Ethical considerations in Emulated Empathy in Autonomous and Intelligent Systems) in conjunction with IEEE. |
| Collaborator Contribution | IEEE provides the organisation and infrastructure to support the development of the standards. This has included networking and facilitating input from a wider scope of expertise as well as further opportunities to enhance and refine the standard. Ongoing discussions with specific IEEE teams to support additional engagement with businesses and industry. |
| Impact | IEEE White Paper - Ethics and Empathy-Based Human-AI Partnering: Exploring the Extent to which Cultural Differences Matter When Developing an Ethical Technical Standard. https://ieeexplore.ieee.org/document/10648944 IEEE P7014 Standard for Ethical considerations in Emulated Empathy in Autonomous and Intelligent Systems https://ieeexplore.ieee.org/document/10576666 |
| Start Year | 2019 |
| Description | Informatica (IAB) |
| Organisation | Informatica |
| Country | Korea, Republic of |
| Sector | Private |
| PI Contribution | Opportunity to engage with RAi |
| Collaborator Contribution | Membership of the Industry Advisory Board (IAB) |
| Impact | Review of Annual Report |
| Start Year | 2024 |
| Description | International Partnership (UK, US, Australia) with Australian National University and George Washington University |
| Organisation | Australian National University (ANU) |
| Country | Australia |
| Sector | Academic/University |
| PI Contribution | University of Southampton is leading this partnership and organising the team of core members, advisor network, and extended network. We also contribute deep expertise in AI from the communications sector. |
| Collaborator Contribution | The partners contribute their expertise for AI in maritime and aerospace sectors. |
| Impact | The collaboration is multidisciplinary. Outputs and outcomes are in progress, as the project started only a few weeks ago. |
| Start Year | 2024 |
| Description | International Partnership (UK, US, Australia) with Australian National University and George Washington University |
| Organisation | George Washington University |
| Country | United States |
| Sector | Academic/University |
| PI Contribution | University of Southampton is leading this partnership and organising the team of core members, advisor network, and extended network. We also contribute deep expertise in AI from the communications sector. |
| Collaborator Contribution | The partners contribute their expertise for AI in maritime and aerospace sectors. |
| Impact | The collaboration is multidisciplinary. Outputs and outcomes are in progress, as the project started only a few weeks ago. |
| Start Year | 2024 |
| Description | Medtronic (IAB) |
| Organisation | Medtronic |
| Department | Medtronic Ltd |
| Country | United Kingdom |
| Sector | Private |
| PI Contribution | Opportunity to engage with RAi |
| Collaborator Contribution | Membership of the Industry Advisory Board (IAB) |
| Impact | Review of Annual Report |
| Start Year | 2024 |
| Description | Meta (IAB) |
| Organisation | Meta |
| Department | Facebook, UK |
| Country | United Kingdom |
| Sector | Private |
| PI Contribution | Opportunity to engage with RAi |
| Collaborator Contribution | Membership of the Industry Advisory Board (IAB) |
| Impact | Review of Annual Report |
| Start Year | 2024 |
| Description | NASA Ames |
| Organisation | National Aeronautics and Space Administration (NASA) |
| Department | NASA Ames Exploration Center |
| Country | United States |
| Sector | Public |
| PI Contribution | Basic research on quantifying the uncertainty of deep-learning perception components for autonomous systems. |
| Collaborator Contribution | Use cases, data sets, machine learning models, deep-learning expertise. |
| Impact | Basic research described in two papers currently under review. |
| Start Year | 2022 |
| Description | NIST-OSAC |
| Organisation | National Institute of Standards & Technology (NIST) |
| Country | United States |
| Sector | Public |
| PI Contribution | co-developing standards for forensic science speaker recognition, synthetic/deceptive media, and deepfakes |
| Collaborator Contribution | providing opportunities to explore new open research problems |
| Impact | This is interdisciplinary: police, forensic scientists, speech experts, vision experts, judges, legal scholars Publishing guidance for general public on how to approach the "deepfake defense" in courts of law, such as serving on a jury |
| Start Year | 2023 |
| Description | NPL (IAB) |
| Organisation | National Physical Laboratory |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Opportunity to engage with RAi research. |
| Collaborator Contribution | Membership of the RAi Industry Advisory Board (IAB) |
| Impact | Review of RAi Annual Report. |
| Start Year | 2024 |
| Description | National Institute of Informatics |
| Organisation | National Institute of Informatics (NII) |
| Country | Japan |
| Sector | Public |
| PI Contribution | We attended research events (seminars, open days) while being hosted by the University. We presented our research and worked in collaboration with Prof Frederic Andres to host workshops exploring regulation, standards and Empathetic-AI Human partnerships. We continue to work with NII staff on technical and research outputs. |
| Collaborator Contribution | The NII hosted our event, contributing contacts and personnel to support the delivery of our workshops as well as feeding into the discussion. Members of NII are contributing to the working group for the P7014.1 Ethical Standard. |
| Impact | IEEE White Paper - Ethics and Empathy-Based Human-AI Partnering: Exploring the Extent to which Cultural Differences Matter When Developing an Ethical Technical Standard. https://ieeexplore.ieee.org/document/10648944 Andres, F., Bakir, V., Bland, B., Laffer, A., Li, P., Miranda, D., McStay, A., & Urquhart, L. (2024). How to live well with emotional AI and empathetic technologies. NII Open House. National Institute of Informatics, Tokyo, Japan. https://doi.org/10.13140/RG.2.2.29708.73604 These are multidisciplinary across Computer Science, Social Science, Law and Ethics. |
| Start Year | 2023 |
| Description | Newton Europe responsible AI framework |
| Organisation | Newton Europe |
| Country | United Kingdom |
| Sector | Private |
| PI Contribution | We are working with this industrial partner to test the value of RRI in the innovation chain - in particular at the commercial end when companies are expanding their user-base and are under pressure to perform in a competitive marketplace. |
| Collaborator Contribution | The partner is testing our RRI Prompts & Practice cards as well as working with us to build out their Responsible AI framework |
| Impact | Still underway |
| Start Year | 2023 |
| Description | PMO (IAB) |
| Organisation | Prime Minister's Office |
| Country | Brunei Darussalam |
| Sector | Public |
| PI Contribution | Opportunity to engage with RAi |
| Collaborator Contribution | Membership of the Industry Advisory Board (IAB) |
| Impact | To be advised |
| Start Year | 2024 |
| Description | Partnership with Accenture |
| Organisation | Accenture |
| Country | Ireland |
| Sector | Private |
| PI Contribution | N/A yet |
| Collaborator Contribution | Attending meetings of the project advisory board to feed into the project's strategic direction; - Taking on interns; - Providing data access where possible; - Offering SME input for assistance; - Providing access to social impact sector clients. |
| Impact | N/A yet. |
| Start Year | 2024 |
| Description | Partnership with BSI. |
| Organisation | British Standards Institute (BSI Group) |
| Country | United Kingdom |
| Sector | Charity/Non Profit |
| PI Contribution | We have engaged in meetings. |
| Collaborator Contribution | - Participation in scoping and evaluation workshops on responsible use of LLMs in medical and legal applications. - Exchange of know-how in the form of e.g. research project collaboration. About £15K in-kind support pledged. |
| Impact | N/A yet |
| Start Year | 2023 |
| Description | Partnership with Bar- Ilan University |
| Organisation | Bar-Ilan University |
| Country | Israel |
| Sector | Academic/University |
| PI Contribution | Research conversation and collaboration |
| Collaborator Contribution | - Membership of advisory board meetings. - Participation in workshops and events on requirements gathering, co-creation and evaluation. - Active collaboration on multi-modal use of LLMs for monitoring in mental health with regular meetings resulting in joint publications. - Active collaboration on LLM use for clinically useful summarisation and dialogue systems for self-management resulting in joint publications. - Active collaboration in evaluation of LLMs for mental health resulting in joint publications - Sharing transcribed therapy sessions and multi-modal data - Co-supervising a post-doctoral clinical researcher. - Offering clinical psychology students time for annotation, evaluation purposes and co-development. |
| Impact | Research collaboration. |
| Start Year | 2024 |
| Description | Partnership with Bloomberg |
| Organisation | Bloomberg |
| Country | United States |
| Sector | Private |
| PI Contribution | Ongoing conversation |
| Collaborator Contribution | - host research visits at Bloomberg - take part in pilot demonstrations and evaluation mitigation strategies, - meetings and workshops - advisory board |
| Impact | no output yet |
| Start Year | 2024 |
| Description | Partnership with Canon Medical Systems |
| Organisation | Canon |
| Department | Canon Medical Research Europe |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | The organisation will: - Be a member of the project advisory board. - Provide access to healthcare data through our existing collaborations. - Participate in focus groups or workshops to advise, from an industry/commercial perspective, on the application and impact of the project and to share learnings from the challenges we are experiencing in the AI healthcare sector. - Exchange know-how in the form of internship projects for researchers; we run a successful summer internship programme each year to help explore new ideas and opportunities and to provide key skills development opportunities for early career researchers. - Additionally, they are interested in becoming an industry sponsor of a PhD in a relevant project within the QMUL Digital Environment Research Institute; iCASE awards are a great-value mechanism for engaging more significantly with important academic partners, and this could begin in early 2025. Project Responsible AI UK (KP0016). |
| Collaborator Contribution | Ongoing conversations |
| Impact | No output yet |
| Start Year | 2024 |
| Description | Partnership with Coventry & Warwickshire Partnership NHS Trust |
| Organisation | Coventry and Warwickshire Partnership NHS Trust |
| Country | United Kingdom |
| Sector | Public |
| PI Contribution | Research collaboration initiated. |
| Collaborator Contribution | - Facilitation of regulatory approvals for clinical mental health research - Facilitation of recruitment of individuals for the co-creation workshops - Supporting the evaluation of our self-support and monitoring systems. - Exchange of clinical and service know-how for further implementation of research outputs. - The Trust will also deliver the study as a research site, contributing £5,919.90 of study support costs as an in-kind contribution. |
| Impact | n/a yet |
| Start Year | 2024 |
| Description | Partnership with IEEE |
| Organisation | Institute of Electrical and Electronics Engineers (IEEE) |
| Country | United States |
| Sector | Learned Society |
| PI Contribution | Forming a Working Group (WG) to develop an ethical technical standard for GPAI, emulated empathy and "partners". |
| Collaborator Contribution | IEEE SA has made introductions, advised and facilitated speedy progress of our Working Group application to the New Standards Committee (NesCom). Seeing this as a priority area, they will assist in internal and external promotion of the work. |
| Impact | Application for the multidisciplinary Working Group has been made to the New Standards Committee (NesCom), with a positive response anticipated by the end of March 2024. |
| Start Year | 2023 |
| Description | Partnership with Japan's National Institute of Informatics |
| Organisation | National Institute of Informatics (NII) |
| Country | Japan |
| Sector | Public |
| PI Contribution | To visit and host workshops with regional expert delegates on AI, robotics and emulated empathy. |
| Collaborator Contribution | Provision of space, accommodation, regional connections and logistics. |
| Impact | None yet (workshops planned for June). |
| Start Year | 2023 |
| Description | Partnership with MHRA |
| Organisation | Medicines and Healthcare products Regulatory Agency (MHRA) |
| Country | United Kingdom |
| Sector | Public |
| PI Contribution | Advice through participation in project workshops and conversations. |
| Collaborator Contribution | Advice through participation in project workshops and conversations. |
| Impact | N/A |
| Start Year | 2023 |
| Description | Partnership with MINDTECH |
| Organisation | NIHR MindTech MedTech Co-operative |
| Country | United Kingdom |
| Sector | Public |
| PI Contribution | Collaboration initiated, Mindtech participated in our workshops. |
| Collaborator Contribution | - Advisory board - To contribute directly to projects and case studies. A key remit of MindTech is to support innovators in the HealthTech space and we run regular events for industry and researchers and would be keen to work together to deliver specific events focused on the use and limitations of LLMs. |
| Impact | N/A |
| Start Year | 2023 |
| Description | Partnership with Microsoft Health Futures |
| Organisation | Microsoft Research |
| Department | Microsoft Research Cambridge |
| Country | United Kingdom |
| Sector | Private |
| PI Contribution | N/A yet |
| Collaborator Contribution | - Participation on the project Advisory Board. - Contributing to talks and events organised by the proposed Keystone project - Inviting PhD students to apply to our internship program. |
| Impact | N/A yet |
| Start Year | 2023 |
| Description | Partnership with Mishcon De Reya |
| Organisation | Mishcon De Reya |
| Country | United Kingdom |
| Sector | Private |
| PI Contribution | N/A yet |
| Collaborator Contribution | - One of our legal experts will be designated to support the evaluation of applying obfuscation/anonymisation procedures stemming from the project on our data, - Legal expert to take part in the regular Advisory Board meetings of the project (at least once in six months); - Providing access to an internal dataset, the Judicial Review Analysis dataset (JURA), which contains sentence-level segments derived from a large number of English Administrative Court judgments. |
| Impact | N/A yet |
| Start Year | 2023 |
| Description | Partnership with NHS East London |
| Organisation | East London NHS Foundation Trust |
| Country | United Kingdom |
| Sector | Public |
| PI Contribution | Collaboration on research and use cases. |
| Collaborator Contribution | - Membership of the advisory board - Attending meetings and workshops exploring risks, limitations, and mitigations of large language models and multimodal models - Access to unstructured clinical data relevant to research project (subject to information governance arrangements being in place) - Industry placements for researchers in the project. - Sharing learnings and best practices from our research - Creating links with focus groups who can advise on what matters most to users, how to present data to users and how to evaluate the outputs of the project |
| Impact | N/A yet |
| Start Year | 2024 |
| Description | Partnership with NHS England |
| Organisation | NHS England |
| Country | United Kingdom |
| Sector | Public |
| PI Contribution | Research conversation |
| Collaborator Contribution | - Advisory group - Create links with focus groups who can advise on what matters most to users, how to present behavioural data to users and how to evaluate the outputs of the project - Sharing learnings from what problems are being faced by platforms such as ours - Exchange of know-how in the form of tailored open internships projects for PhD researchers |
| Impact | N/A yet |
| Start Year | 2024 |
| Description | Partnership with NICE |
| Organisation | National Institute for Health and Care Excellence (NICE) |
| Department | NICE International |
| Country | United Kingdom |
| Sector | Public |
| PI Contribution | N/A |
| Collaborator Contribution | N/A. NICE requested the final report with research conclusions be shared with them. |
| Impact | N/A yet |
| Start Year | 2023 |
| Description | Partnership with The Alan Turing Institute |
| Organisation | Alan Turing Institute |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | ongoing research conversation |
| Collaborator Contribution | - Facilitate interdisciplinary networking opportunities across the Turing's university partner network (which includes potential industry partners, collaborators and government bodies, national and international research collaborations). - Facilitate communication with stakeholders in applications which could use the new methods to address algorithmic fairness and AI robustness, such as healthcare - Subject to availability, a meeting room for up to 8 advisory board meetings and 4 workshops for 30-45 people in the Turing British Library offices - Professor Michael Wooldridge or another Senior Turing Researcher to attend advisory board meetings |
| Impact | N/A yet |
| Start Year | 2024 |
| Description | Partnership with The Law Society |
| Organisation | The Law Society of England and Wales |
| Country | United Kingdom |
| Sector | Charity/Non Profit |
| PI Contribution | The Law Society has become a partner/stakeholder of the AdSoLve project and will collaborate closely with the project. |
| Collaborator Contribution | A representative attends our regular meetings and provides advice. |
| Impact | No output yet. Letter of Support provided by the organisation. |
| Start Year | 2024 |
| Description | Partnership with Trilateral Research |
| Organisation | Trilateral Research and Consulting LLP |
| Country | United Kingdom |
| Sector | Private |
| PI Contribution | Research conversations initiated through meetings |
| Collaborator Contribution | - Membership of the advisory board - Attending meetings and workshops exploring risks, limitations, and mitigations of large language models and multimodal models - Potential industry placements for interns in our Trilateral Research Summer Academy - Sharing learnings and best practices from our research |
| Impact | N/A yet |
| Start Year | 2024 |
| Description | Partnership with UCL |
| Organisation | University College London |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | n/a |
| Collaborator Contribution | - Advisory board meetings - Contribute where appropriate in co-creation workshops and provide advice every few months on LLM-based technology for dementia monitoring. - Feed into this project learnings from our current Wellcome-funded MEDEA study. In total, this amounts to a contribution of around £10,000 in direct support to be offered (in-kind). |
| Impact | N/A yet |
| Start Year | 2023 |
| Description | Partnership with University of Edinburgh CHAI hub |
| Organisation | University of Edinburgh |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Initiated conversation about the AdSoLve research and collaboration to be developed. |
| Collaborator Contribution | - Allocate some of our researchers' time to work directly with your team to explore not only the use of LLMs for extracting information for use within causal settings but also the opportunity to identify causal reasoning within LLMs. (Estimated at 25% of time for one PDRA per year, £125k over 47 months) - Hosting your researchers for short periods of time in one of the CHAI sites (Estimated at £2k per visit if in Edinburgh, 4 visits, £8k over 47 months.) - Joint supervision of students either affiliated with CHAI or with your project (£22k over 47 months, estimated as 1 academic spending 1hr per week). - Organise joint events and workshops for training and upskilling CHAI researchers in LLMs and vice versa. (£20k over 47 months) - Invite collaboration on datasets made available via CHAI and affiliated data safe havens on health exemplars following suitable data sharing and collaboration agreements. In total, we anticipate this will amount to an in-kind contribution of at least £177k in direct in-kind support. We will also welcome an invitation for one of the CHAI co-leads to sit on the advisory board if deemed helpful. |
| Impact | N/A yet |
| Start Year | 2023 |
| Description | Public Voices in AI |
| Organisation | University of Sheffield |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | This satellite project was funded by UKRI via RAi UK. |
| Collaborator Contribution | Public Voices in AI is a collaboration between the Digital Good Network at the University of Sheffield, the Ada Lovelace Institute, the Alan Turing Institute, Elgon Social Research and University College London. Public Voices in AI aims to ensure that public voices are front and centre in artificial intelligence research, development and policy. A central aspect of responsible AI research, development and policy is ensuring that it takes account of public hopes, concerns and experiences. As concern about the societal impacts of AI grows and pressure for its effective regulation mounts, understanding and anticipating societal needs and values can inform responsible AI developments and deployments. Yet public voice is frequently missing from conversations about AI, an absence which inhibits progress in RAI. Addressing this gap is essential to enable AI RD&P to maximise benefits and prevent harms and to ensure that responsible AI works for everyone. Different groups benefit from and are affected by AI differently, and their hopes, concerns and experiences also vary. Because of structural inequities, some groups are more negatively impacted by AI deployments than others - for example, in welfare systems, at borders, in policing. Some groups have more resources and access to power to shape AI technologies than others. There is also a participation gap between those with the social capital to participate in shaping AI and those without. Public Voices in AI will carry out research and produce and disseminate resources, centring those most impacted and underrepresented. One way it will do this is by distributing up to £195,000 of its funding to support participatory projects with people from groups which are negatively affected by or underrepresented in AI research, development and policy. The Public Voices in AI Fund was launched on 31st May 2024, with a closing date for applications of 20th June 2024. |
| Impact | tba |
| Start Year | 2024 |
| Description | PwC |
| Organisation | PricewaterhouseCoopers |
| Country | United Kingdom |
| Sector | Private |
| PI Contribution | Discussions around potential case studies and contributions to responsible AI research. |
| Collaborator Contribution | Discussions around potential case studies for responsible AI research and the identification of relevant synergies. |
| Impact | To be determined. |
| Start Year | 2023 |
| Description | RAI UK - IA - Bangor University |
| Organisation | Bangor University |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Funding provided for Impact Accelerator project. |
| Collaborator Contribution | Leading "Automated Empathy - Globalising International Standards (AEGIS): Japan and Ethically Aligned Regions" Partnering with Japan's National Institute of Informatics and standards developer Institute of Electrical and Electronics Engineers (IEEE), and engaging the Information Commissioner's Office (UK), this project augments our UK-Japan social science to create soft governance of autonomous systems that interact with human emotions and/or emulate empathy. This includes a regional (Japan/ethically aligned regions) IEEE Recommended Practice for these technologies, and advancement of a global parental standard. We achieve this through in-person workshops of regional experts, deriving learning for the UK & EU from a region immersed in social robotics and steeped in ethical questions about systems that process intimate data. |
| Impact | tba. |
| Start Year | 2023 |
| Description | RAI UK - IA - Sheffield Hallam University |
| Organisation | Sheffield Hallam University |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Funding Impact Accelerator project. |
| Collaborator Contribution | Leading "AIPAS - AI Accountability in Policing and Security" The challenge for Law Enforcement Agencies (LEAs) is balancing the significant opportunities AI presents for the safeguarding of society with societal concerns and expectations about its responsible use. Accountability is at the core of UK Government efforts to ensure responsible AI use. Accountability, however, is highly abstract. Building on the research team's global work on AI Accountability, AIPAS will design the practical mechanisms and software tool that LEAs in the UK need to assess and implement AI Accountability for their AI applications. These solutions will support AI Accountability not only during deployment but also proactively during the design and procurement stages. |
| Impact | tba. |
| Start Year | 2023 |
| Description | RAI UK - IA - Swansea University |
| Organisation | Swansea University |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Funding Impact Accelerator |
| Collaborator Contribution | Leading "Amplify: Empowering Underserved Voices in Spoken Language Interaction" Speech is the richest, most precious form of human communication. But speech-based interactive systems are currently available for only a small fraction of the world's languages. Consequently, hundreds of millions of people are being excluded globally. For the past three years we have worked with under-served "low-resource" language communities to explore highly innovative responsible AI techniques for developing speech recognition with low data requirements. In this project we will facilitate the uptake of the speech toolkit that we have created, working with a network of community partners and NGOs to prove and refine the tools, and expand spoken language support. |
| Impact | tba |
| Start Year | 2023 |
| Description | RAI UK - IA - The Open University |
| Organisation | Open University |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Funding Impact Accelerator project. |
| Collaborator Contribution | Leading "SAGE-RAI: Smart Assessment and Guided Education with Responsible AI" Can responsible Generative AI (GenAI) lead to improved student outcomes? In SAGE-RAI, we utilise partner-applied, education-oriented GenAI tools to explore this. Inspired by Bloom's 1984 study on the efficacy of 1-to-1 teaching and the potential for cost-effective, scalable personalised education, we aim to unlock this potential. Addressing tutor limitations in accommodating large cohorts, we investigate how responsible GenAI can enhance tutoring, offer more tailored, personalised learning experiences and generate student feedback. Our goal is to create a platform supporting assessment and student guidance while responsibly applying GenAI, addressing challenges of misinformation, copyright and bias. The journey embodies educational innovation for better outcomes. |
| Impact | tba. |
| Start Year | 2023 |
| Description | RAI UK - IA - University of Birmingham |
| Organisation | University of Birmingham |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Funding of Impact Accelerator project. |
| Collaborator Contribution | Leading "AI Equality by Design, Deliberation and Oversight" This project will develop the theoretical foundations of 'equality-by-design, deliberation and oversight' (EbD) approaches to AI governance, striving to embed that knowledge into AI standards by (1) representing European equality defenders in European AI standard-setting by CEN/CENELEC, (2) empowering and equipping public equality defenders and other social stakeholders with the knowledge and skills to advocate for, adopt and embed EbD principles into the development and implementation of technical systems and organisational frameworks to protect fundamental rights to equal treatment from AI-generated discrimination, and (3) providing UK tech firms with training in equality law to address gaps and misunderstandings in their current knowledge. |
| Impact | tba. |
| Start Year | 2023 |
| Description | RAI UK - IA - University of Nottingham |
| Organisation | University of Nottingham |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Funding Impact Accelerator |
| Collaborator Contribution | Leading "RAISE - Responsible generative AI for SMEs in UK and Africa" Building on insights developed across several ethical AI projects, notably the EU-funded projects SHERPA and SIENNA, this work will provide actionable guidance to small and medium-sized enterprises (SMEs) on how generative AI systems can be developed and used responsibly. By working with our SME partner Trilateral Research (using the 'with SMEs, for SMEs' approach), the project has access to a company leading on socially responsible AI to co-design and test practical, actionable guidance on how generative AI can be integrated into innovative products. Impact will be generated for SMEs in the UK and in Africa. |
| Impact | tba |
| Start Year | 2023 |
| Description | RAI UK - IA - University of Oxford |
| Organisation | University of Oxford |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Funding Impact Accelerator project. |
| Collaborator Contribution | Leading "RAKE (Responsible Innovation Advantage in Knowledge Exchange)" Conducting research and innovation using Responsible Innovation (RI) involves collaboration, foresight, consideration of impacts on people and the environment, reflection and response. RI is therefore central to 'responsible' AI, but RI training and interdisciplinary practices must be developed to provide robust mechanisms for creating and assessing responsible AI. RAKE consolidates existing experience and resources to work with funders, businesses, projects, CDTs and university spinouts. It will investigate how RI can be better embedded within these pipelines to improve AI development and deployment. It will collectively build on past work to support a new generation of RI-in-practice, strengthening RAI UK's responsible-AI research agenda. |
| Impact | tba. |
| Start Year | 2023 |
| Description | RAI UK - IP0004 |
| Organisation | University of Sussex |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Funding International Partnership project. |
| Collaborator Contribution | Leading "Harnessing AI to enhance electoral oversight" For democratic elections to be trusted they must be free and fair. To ensure this, democracies across the world establish regulators and management bodies to oversee electoral activity. However, we know that these bodies often operate on a limited budget with limited personnel. They have also been slow to adapt to the digital age and to harness the power of automation. This project will bring together an international team of researchers, practitioners, and activists to develop automated systems to promote compliance and test these tools to protect against backfire effects and engender trust. |
| Impact | tba |
| Start Year | 2024 |
| Description | RAI UK - IP0014 |
| Organisation | King's College London |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Funding International Partnership. |
| Collaborator Contribution | Leading "The human role to guarantee an ethical AI for healthcare" The integration of AI-based tools into routine clinical care is opening the door to a completely new paradigm in which doctors and AI systems collaborate to decide the right diagnosis or treatment for a patient, based on the individual's biomedical information. Several important ethical challenges arise from the development of clinical AI through to its implementation. What happens if the AI tool and the clinician disagree? Who holds responsibility? Is there a risk of human replacement? Can we combat models' bias and potential unfairness? Is there a risk of disempowerment of clinicians and patients? In the year of the UK's global AI summit and the European AI Act, this project aims to build international partnerships with academia, governments from all over the world, and industry agents to reflect together with patients and clinicians on the human role that can ensure an ethical development and deployment of clinical AI models that benefit all and respect human dignity. |
| Impact | tba |
| Start Year | 2024 |
| Description | RAI UK - IP0015 |
| Organisation | University of Bristol |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Funding International Partnership |
| Collaborator Contribution | Leading "Transparency Regulation Toolkits for Responsible Artificial Intelligence" The United Kingdom and European Union are creating new regulations for AI transparency; drafting and implementing laws that will require bodies to communicate when they are using AI, and how they are using it. However, the exact meaning of 'AI transparency' is contestable, so implementing these rules requires interpretation. Our aim is to examine how data scientists are interpreting and implementing transparency rules in practice, and how they plan to in the future. We will create two legal and responsible innovation toolkits to help Small and Medium Sized Enterprises (SMEs) comply with AI transparency requirements. |
| Impact | tba |
| Start Year | 2024 |
| Description | RAI UK - IP0018 |
| Organisation | University of York |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Funding International Partnership project |
| Collaborator Contribution | Leading "Disruption Mitigation for Responsible AI" In today's dynamic landscape, AI applications across critical sectors face continual disruptions, spanning from environmental shifts to human errors and adversities. To navigate these challenges effectively, AI solutions must demonstrate adaptability and responsibility, aligning with the diverse social, legal, ethical, empathetic, and cultural (SLEEC) norms of stakeholders. Yet current AI development frameworks fall short in addressing this multifaceted demand. DOMINOS is poised to fill this void by delivering a comprehensive methodology and toolkit for seamless development, deployment, and utilisation of responsible AI solutions capable of mitigating a wide spectrum of disruptions in ways that are compliant with SLEEC norms. |
| Impact | tba |
| Start Year | 2024 |
| Description | RAI UK - IP0024 |
| Organisation | University of Southampton |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Funding International Partnership project |
| Collaborator Contribution | Leading "AI Regulation Assurance in Safety-Critical Systems" Safety-critical systems that use artificial intelligence (AI) can pose a variety of challenges and opportunities. This class of AI systems comes, in particular, with the risk of real, consequential harms. With a cross-border approach spanning the UK, US, and Australia, our team aims to thoroughly investigate AI safety risks for technologies within the aerospace, maritime, and communication sectors. Through in-depth case studies, the project will identify technical and regulatory gaps and propose solutions to mitigate potential safety risks. Bridging the wider scientific community with international government stakeholders will allow us to positively impact the development and regulation of AI in safety-critical systems for the betterment of society. |
| Impact | tba |
| Start Year | 2024 |
| Description | RAI UK - IP0028 |
| Organisation | University of Southampton |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Funding International Partnership |
| Collaborator Contribution | Leading "For the FATES of Africa: A co-developed pipeline for responsible AI in online learning across Africa" The project focuses attention on a population that is vast on a global scale, yet is often left behind: namely, online learners in Sub-Saharan Africa (SSA). Our partnership will develop a 'how-to' pipeline for protecting and championing responsible AI among SSA online learners: from the conception of an online learning idea, through iterative design and development, until the moment of scale. We will deliver a comprehensive programme of consultation across the edtech ecosystem in SSA. Finally, an openly available Toolkit will be published, alongside policy and academic papers, as outputs from this project. |
| Impact | tba |
| Start Year | 2024 |
| Description | RAI UK - IP0032 |
| Organisation | University of Nottingham |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Funding International Partnership |
| Collaborator Contribution | Leading "TAS Hub And Good Systems Strategic Partnership" This international partnership aims to strengthen the existing ties between TAS Hub (UK) and Good Systems (USA). These are two renowned leaders in Responsible and Trustworthy AI and Autonomous Systems, both aligned with the mission and objectives of RAI UK. The partnership will involve a series of innovative and ambitious activities for research development, knowledge exchange, and sharing of best practices. These activities will empower existing members and bring in new members from diverse backgrounds, including non-academic affiliates and partners from the Global South, to nurture an international community dedicated to advancing Responsible AI. |
| Impact | HEAD Residency Program: https://d3-lab.nicepage.io/HEAD-Residency-Program.html |
| Start Year | 2024 |
| Description | RAI UK - IP0033 |
| Organisation | University of Southampton |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Funding International Partnership |
| Collaborator Contribution | Leading "Exploring Fairness and Bias of Multimodal Natural Language Processing for Mental Health" The world is grappling with an escalating mental health crisis. Emerging technologies like Artificial Intelligence (AI), and in particular Natural Language Processing (NLP), are presented as promising tools to solve these challenges by tackling online abuse and bullying, suicide ideation detection and other behavioural online harms. This international partnership project between the University of Southampton and Northeastern University is dedicated to leveraging the responsible use of AI in addressing mental health issues. With a core focus on ethical implementation, our partnership prioritises fairness and bias mitigation within AI models. Key initiatives encompass reciprocal resource sharing, model evaluations to identify and mitigate biases, workshops around policy, AI and mental/public health, and proposing policy recommendations for the ethical integration of AI in mental health. In addition, we will foster extensive collaborations with experts in AI, public health and mental health, engaging stakeholders from the outset, ensuring that our approach to AI integration in mental health remains innovative, ethically sound, and genuinely responsive to user needs. |
| Impact | tba |
| Start Year | 2024 |
| Description | RAI UK - IP0053 |
| Organisation | University of Nottingham |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Funding International Partnership project |
| Collaborator Contribution | Leading "Understanding Robot Autonomy in Public" In recent years, robots deployed in public settings, such as autonomous delivery bots and shuttles, have been operating services in towns and cities worldwide. Yet there is little systematic understanding of how these technologies influence the daily lives of residents who coexist with them. This project aims to bridge disciplines including transport, human-computer interaction (HCI), human-robot interaction (HRI), robotics, sociology, and linguistics. It will form partnerships among established and emerging collaborators to share empirical data on human-robot interactions in public spaces and, through this, jointly develop interdisciplinary insights that will guide the responsible design of public robotics in the future. |
| Impact | tba |
| Start Year | 2024 |
| Description | RAI UK - IP0054 |
| Organisation | University of the Arts London |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Funding International Partnership project |
| Collaborator Contribution | Leading "Responsible AI international community to reduce bias in AI music generation and analysis" This project aims to establish an international community dedicated to addressing Responsible AI challenges, specifically bias in AI music generation and analysis. The prevalent dependence on large training datasets in deep learning often results in AI models biased towards Western classical and pop music, marginalising other genres. The project will bring together an international and interdisciplinary team of researchers, musicians, and industry experts to develop AI tools, expertise, and datasets aimed at enhancing access to marginalised music genres. This will directly benefit both musicians and audiences, engaging them to explore a broader spectrum of musical styles. Additionally, this initiative will contribute to the evolution of the creative industries by introducing novel forms of music consumption. |
| Impact | tba |
| Start Year | 2024 |
| Description | RAI UK - IP0063 |
| Organisation | University College London |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Funding International Partnership project |
| Collaborator Contribution | Leading "Responsible AI networks for industries and governments in Latin America" This project aims to establish a global network to advocate for the responsible use of Artificial Intelligence in Latin America. This network will explore and share best practices and experiences in designing safeguard measures and regulations for the responsible use of AI in governments and industries in developing countries. A significant part of our discussions will focus on the ethical and economic consequences of using AI in countries that heavily depend on imported technology and regulatory frameworks. By focusing on Latin America, we aim to enhance our understanding of these issues and their implications not only for the region but also for the UK and Europe. |
| Impact | tba |
| Start Year | 2024 |
| Description | RAI UK - King's College London |
| Organisation | King's College London |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Discussions around the challenges and opportunities of responsible AI |
| Collaborator Contribution | Discussions around the challenges and opportunities of responsible AI |
| Impact | RAI UK is multi-disciplinary. |
| Start Year | 2023 |
| Description | RAI UK - Queen's University Belfast |
| Organisation | Queen's University Belfast |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Discussions around responsible AI challenges and opportunities |
| Collaborator Contribution | Discussions around responsible AI challenges and opportunities |
| Impact | RAI is multi-disciplinary |
| Start Year | 2023 |
| Description | RAI UK - Swansea University |
| Organisation | Swansea University |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Discussions around responsible AI challenges and opportunities |
| Collaborator Contribution | Discussions around responsible AI challenges and opportunities |
| Impact | RAI is multi-disciplinary |
| Start Year | 2023 |
| Description | RAI UK - University College London |
| Organisation | University College London |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Discussions around responsible AI challenges and opportunities |
| Collaborator Contribution | Discussions around responsible AI challenges and opportunities |
| Impact | RAI is multi-disciplinary |
| Start Year | 2023 |
| Description | RAI UK - University of Cambridge |
| Organisation | University of Cambridge |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Discussions around responsible AI challenges and opportunities |
| Collaborator Contribution | Discussions around responsible AI challenges and opportunities |
| Impact | RAI is multi-disciplinary. |
| Start Year | 2023 |
| Description | RAI UK - University of Glasgow |
| Organisation | University of Glasgow |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Discussions around the challenges and opportunities of responsible AI |
| Collaborator Contribution | Discussions around the challenges and opportunities of responsible AI |
| Impact | RAI is multi-disciplinary |
| Start Year | 2023 |
| Description | RAI UK - University of Nottingham |
| Organisation | University of Nottingham |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Discussions around the challenges and opportunities of responsible AI |
| Collaborator Contribution | Discussions around the challenges and opportunities of responsible AI |
| Impact | RAI UK is a multi-disciplinary consortium. |
| Start Year | 2023 |
| Description | RAi UK - KP003 |
| Organisation | Northumbria University |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | This relates to the RAi funded Keystone Project - PROBabLE Futures - Probabilistic AI Systems in Law Enforcement Futures |
| Collaborator Contribution | Probabilistic systems supported by AI ('Probabilistic AI'), such as facial recognition, predictive tools and large language models, are being introduced at pace into policing, intelligence, probation and wider criminal justice contexts, potentially bringing measurable benefits. But a key problem for responsible AI is that the uncertain - or probable - nature of outputs is often obscured and/or misinterpreted. Challenges to achieving 'responsible' AI include data quality, sensitivity and uncertainty. Attention must be paid to the chaining of systems and the cumulative effects of AI systems feeding each other. Our ambition is for the deployment of Probabilistic AI within law enforcement to achieve one coherent system, with justice and responsibility at its heart. We will articulate what 'responsible' means for Probabilistic AI across the law enforcement landscape (intelligence, investigation, prosecution, sentencing, probation, oversight), determining the technical, operational, legal, ethical and societal implications of the development, deployment and governance of 'responsible' AI. To realise our vision, we will draw on multiple perspectives, including from law enforcement and commercial partners, applying lessons from prior developments. We will use innovative research techniques such as a mock trial involving AI outputs as evidence. Our main output will be a future-oriented framework for responsible AI in law enforcement. |
| Impact | tba |
| Start Year | 2024 |
| Description | RAi UK - KP011 |
| Organisation | University of Glasgow |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | This relates to a RAi Keystone Project - Participatory Harm Auditing Workbenches and Methodologies |
| Collaborator Contribution | The Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project brings together 25 researchers from seven leading UK universities with 23 partner organisations. The University of Glasgow will lead the consortium, with support from colleagues at the Universities of Edinburgh, Sheffield, Stirling, Strathclyde, York and King's College London. Together, they will develop new methods for maximising the potential benefits of predictive and generative AI while minimising their potential for harm arising from bias and 'hallucinations', where AI tools present false or invented information as fact. The project will pioneer participatory AI auditing, where non-experts including regulators, end-users and people likely to be affected by decisions made by AI systems will play a role in ensuring that those systems provide fair and reliable outputs. The project will develop new tools to support the auditing process in partnership with relevant stakeholders, focusing on four key use cases for predictive and generative AI, and create new training resources to help encourage widespread adoption of the tools. The predictive AI use cases in the research will focus on health and media content, analysing data sets for predicting hospital readmissions and assessing child attachment for potential bias, and examining fairness in search engines and hate speech detection on social media. In the generative AI use cases, the project will look at cultural heritage and collaborative content generation. It will explore the potential of AI to deepen understanding of historical materials without misrepresentation or bias, and how AI could be used to write accurate Wikipedia articles in under-represented languages without contributing to the spread of misinformation. |
| Impact | tba |
| Start Year | 2024 |
| Description | RAi UK - SK0009 |
| Organisation | University of Edinburgh |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | This refers to RAi UK funded Skills project "AI in Secondary Schools: Resource Development and Teacher Professional Learning " |
| Collaborator Contribution | Recent advances in AI may catalyse a transformation in education. Current GenAI tools are powerful, yet brittle. They can act as useful learning companions, but are known to be unreliable, often biased and potentially misleading. There is an urgent need to develop critical AI literacy in learners, and to support teachers in developing the skills and knowledge to incorporate GenAI in their classroom teaching and learning responsibly. This project will develop resources to support the acquisition of key critical AI literacy skills that will equip young people for living and working in the age of AI, and offer professional learning for teachers. |
| Impact | Tba. |
| Start Year | 2024 |
| Description | RAi UK - SK0024 |
| Organisation | Robert Gordon University |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | This refers to funded RAi Skills project "Generative Artificial Intelligence Skills in Schools (GenAISiS project)" |
| Collaborator Contribution | Partnering with secondary school students, the team will explore the responsible use of Generative AI (GenAI) in their co-creation of open educational resources, articulating student voice and enacting student experience via fictitious characters and cartoons. The project outcomes will also recognise and empower school librarians as key agents in fostering the responsible use of GenAI. A co-produced open educational toolkit with resources on GenAI academic integrity, information literacy and critical thinking skills in schools, will be widely disseminated via open workshops. GenAISiS offers a transformative pedagogical approach, raising public awareness, promoting equity and amplifying the student voice. |
| Impact | Tba. |
| Start Year | 2024 |
| Description | RAi UK - SK0055 |
| Organisation | Northumbria University |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | This refers to RAi funded Skills Project "Misinformation/Disinformation and Generative AI: Building Young Refugees' Skills and Capacities " |
| Collaborator Contribution | Refugees and asylum-seekers are disproportionately impacted by AI-propagated misinformation/disinformation. Young refugees and asylum-seekers (YRAS) are ideal actors to upskill in this area as they already take on the role of technology educators within their communities. The project will leverage YRAS as educators and equip them with the knowledge, skills and resources to be champions of Responsible AI. More specifically, working with YRAS, educators and third-sector organisations, the team will develop Self-Organised Learning Environment activities, Action Learning Sets and resources focused on increasing YRAS' skills and capacities in misinformation/disinformation, Generative AI and Responsible AI, and in engaging with their wider communities on these issues. |
| Impact | Tba. |
| Start Year | 2024 |
| Description | RAi UK - SK0071 |
| Organisation | London South Bank University |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | This refers to RAi UK funded Skills project "Ways of (Machine) Seeing: towards a new visual literacy in AI" |
| Collaborator Contribution | How does image-based AI challenge existing approaches to the teaching of visual creativity? New digital tools such as GenAI are easily adopted by young people through everyday media, but there is little or no formal guidance on how to use these technologies safely or responsibly. Our research is designed to generate educational materials about AI, working directly with teachers to offer an enhanced visual learning experience for students. The team aim to produce responsible AI skills resources that offer best practice for teacher training and curriculum development, introducing the necessary skills to address AI literacy in classroom settings. |
| Impact | Tba. |
| Start Year | 2024 |
| Description | RAi UK - SK0083 |
| Organisation | Open University |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | This refers to RAi UK funded Skills Project "AI, Law and Legal Training (AILLT) " |
| Collaborator Contribution | This project brings together a multidisciplinary team to co-produce research informed resources that will enhance knowledge and awareness of, and confidence in the use of GenAI for understanding legal processes and accessing legal information. By explaining GenAI, its application, implications, and ethical use in legal contexts, the project activities will educate and empower the public, organisations that provide legal advice, small and medium-sized law firms, students, and academics. The team will create open access and engaging online courses (including interactive and audio-visual materials, checklists, and case studies) that provide ethical and responsible knowledge of, and skills to use GenAI. |
| Impact | Tba. |
| Start Year | 2024 |
| Description | RAi UK - SK0096 |
| Organisation | Loughborough University |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | This refers to RAi UK funded Skills Pillar project "Exploring AI: From Picture Books to Epic Games" |
| Collaborator Contribution | The team aim to enhance AI literacy and public awareness of responsible AI by developing two educational resources. The first is an A-to-Z picture book aimed at KS1 and KS2 children, introducing core AI concepts through engaging illustrations and age-appropriate text to foster early engagement. The second resource is a scenario-based megagame for older children and adults, blending tabletop, role-playing, and applied games to immerse players in responsible AI scenarios. These tools will foster critical thinking, problem-solving, and a deeper understanding of AI. |
| Impact | TBA. |
| Start Year | 2024 |
| Description | RAi UK KP016 |
| Organisation | Queen Mary University of London |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | This relates to the RAi funded Keystone Project - Addressing socio-technical limitations of LLMs for medical and social computing. |
| Collaborator Contribution | Despite known technical limitations (e.g. biases, privacy leaks, poor reasoning, lack of explainability), Large Language Models (LLMs) are being rapidly adopted without forethought for the repercussions of doing so. In many sectors the negative consequences may be merely inconvenient, but in sensitive and safety-critical domains such as healthcare and the law, the ramifications of deploying poorly understood, untested, black-box systems are likely to have life-changing implications for patients, litigants, and other stakeholders. To emphasise the very real risks the public faces: UK judges are already permitted to use LLMs to summarise court cases; what if the LLM gets the temporal ordering of events wrong, or reinforces the racial biases already found in parole decisions? On the medical side, we know that public medical question-answering services are being rolled out, which could be giving inappropriate information due to a lack of factuality and inherent biases. With few expectations placed on industry developers, information regarding models and training datasets is scant, and responsible research and innovation has taken low priority, but this is expected to change due to legislation and public pressure. Our vision addresses the socio-technical limitations of LLMs that challenge their responsible and trustworthy use, particularly in the context of medical and legal use cases. Our goal is two-fold. First, to create an extensive evaluation benchmark (including suitable novel criteria, metrics and tasks) for assessing the limitations of LLMs in real-world settings, enabling our standards and policy partners to implement responsible regulations, and our industry and third-sector partners to robustly assess their systems.
The second part of our vision is to devise novel mitigating solutions based on new machine learning methodology, informed by expertise in law, ethics and healthcare, via co-creation with domain experts, that can be incorporated into products and services. Together, these two core research strands will help enhance the capabilities of LLMs for responsible research and innovation, allowing society to reap the benefits that LLM-based systems promise for many sectors of the economy while guarding against serious harms. |
| Impact | TBA. |
| Start Year | 2024 |
| Description | Responsible AI UK |
| Organisation | University of Southampton |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Jack Stilgoe was invited to be a co-investigator for a proposal to run the UKRI Responsible AI Programme. This bid, led by Gopal Ramchurn, was successful. This is a 5-year, £31 million programme. |
| Collaborator Contribution | Support |
| Impact | Responsible AI programme |
| Start Year | 2023 |
| Description | Responsible and Trustworthy AI: Economic Landscape Analysis |
| Organisation | University of Cambridge |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | This is a satellite project funded by UKRI via RAi. |
| Collaborator Contribution | The project's core research questions are: how can Responsible AI drive productivity and economic prosperity, and conversely, how can productivity and economic considerations be taken into account in developing Responsible AI? We will ask: where do the benefits and costs of AI technologies accrue, especially productivity gains? What incentive structures, business models and regulatory frameworks will shape the character of the technologies, with their associated costs and benefits, so that they are deployed responsibly across society and enhance the UK's prosperity? In this one-year project, the Bennett Institute for Public Policy, The Productivity Institute and the Economic Statistics Centre of Excellence (ESCoE) will draw on existing RAi UK work to address these questions through a combination of broad landscape mapping and focused examples. They will analyse economic incentives, organisational and business models, appropriate regulatory frameworks, and the trade-offs, opportunities and challenges both for the technology itself and for potential mechanisms to monitor and enforce governance and incentive structures, and will seek to understand where the benefits and costs of the technologies are accruing. Their analysis can contribute to prioritising research themes in Responsible AI, a preliminary step before further resources and work are deployed to continue addressing this key question. |
| Impact | TBA. |
| Start Year | 2024 |
| Description | Royal Academy of Engineering |
| Organisation | Royal Academy of Engineering |
| Country | United Kingdom |
| Sector | Charity/Non Profit |
| PI Contribution | Discussions around research challenges and identification of synergies. |
| Collaborator Contribution | Discussions around challenges and identification of synergies. |
| Impact | Presentation at the TAS ECR event held at Prince Philip House, London on 23 Feb 2022. Involvement in RAi events and support for the RAi Enterprise Fellowship Programme. |
| Start Year | 2021 |
| Description | Trilateral Research |
| Organisation | Trilateral Research and Consulting LLP |
| Country | United Kingdom |
| Sector | Private |
| PI Contribution | The opportunity to engage with the co-creation of research projects. |
| Collaborator Contribution | Engagement with research project co-creation. |
| Impact | Engagement with research projects is ongoing. |
| Start Year | 2023 |
| Description | University of L'Aquila, Italy |
| Organisation | University of L'Aquila |
| Country | Italy |
| Sector | Academic/University |
| PI Contribution | Joint research on social, legal, ethical, empathetic and cultural requirements for autonomous systems. |
| Collaborator Contribution | Joint research on social, legal, ethical, empathetic and cultural requirements for autonomous systems. |
| Impact | N/A |
| Start Year | 2024 |
| Description | University of Toronto, Canada |
| Organisation | University of Toronto |
| Country | Canada |
| Sector | Academic/University |
| PI Contribution | Contributions to joint research papers, and to joint research grant application. |
| Collaborator Contribution | Contributions to joint research papers, and to joint research grant application. |
| Impact | Research papers at top conferences Automated Software Engineering 2023, and International Conference on Software Engineering 2024. Funded Responsible AI International Partnership. |
| Start Year | 2023 |
| Description | Visiting Researcher at Northeastern University |
| Organisation | Northeastern University |
| Country | United States |
| Sector | Academic/University |
| PI Contribution | Collaboration has led to further funding via a RAIUK collaboration grant (I am a CoI on this grant). I have also helped organise multiple international workshops around responsible AI for mental health. |
| Collaborator Contribution | Collaboration has led to further funding via a RAIUK collaboration grant (NEU is a CoI on this grant). NEU has helped organise multiple international workshops around responsible AI for mental health. |
| Impact | Outputs are tagged under the RAi UK grant that funded the collaboration grant we won. |
| Start Year | 2024 |
| Description | York and Scarborough Teaching Hospitals NHS Foundation |
| Organisation | York Teaching Hospital NHS Foundation Trust |
| Country | United Kingdom |
| Sector | Public |
| PI Contribution | Autonomous systems, robotics and AI expertise. |
| Collaborator Contribution | Use case, data, emergency medicine expertise, consultant time. |
| Impact | Research papers under review, partnership on grant proposals under review. |
| Start Year | 2022 |
| Title | Diagnostic AI System for Robotic and Automated Triage and Assessment (DAISY) |
| Description | This study aims to introduce an automated triaging system called DAISY into the Emergency Department (ED) to give patients the opportunity to self-direct their initial consultation. DAISY is a new system in development: a robot with a touchscreen that asks patients a series of questions about their current health (as a triage nurse or doctor in the ED would do), together with attached devices (such as a blood pressure monitor and thermometer) that patients can use to help DAISY assess their current health. The study aims to demonstrate how patients can interact with the automated system to produce a report that is useful for the doctors and nurses in the ED. It will examine the duration and timeliness of the automated assessment to see whether it frees up staff time, and determine how patients find the experience of using the DAISY system. |
| Type | Diagnostic Tool - Non-Imaging |
| Current Stage Of Development | Early clinical assessment |
| Year Development Stage Completed | 2024 |
| Development Status | Under active development/distribution |
| Clinical Trial? | Yes |
| Impact | N/A |
| URL | https://www.york.ac.uk/computer-science/research/projects/daisy-project/ |
| Title | AI Accountability Self Assessment Tool |
| Description | Software tool for AI Accountability for the UK Policing and Security ecosystem |
| Type Of Technology | Webtool/Application |
| Year Produced | 2025 |
| Impact | Still in development |
| URL | https://aipas.dev.centric.shu.ac.uk/login |
| Title | Responsible Ai UK project KP0016 - Dashboard for identifying and summarising moments of change in mental health |
| Description | In a number of publications we have focussed on identifying moments of change in individuals' social media data as well as on summarising the context around such changes. This tool brings together the findings of these works to enable clinicians and individuals to identify changes in a patient's affect, behaviour and needs, as manifested in content they share online, and summarise the context around them. |
| Type Of Technology | Webtool/Application |
| Year Produced | 2025 |
| Impact | None yet |
| URL | https://social-media-timeline-dashboard.vercel.app/ |
| Title | Responsible Ai UK project KP0016 - Evaluation methods and tool for systems based on generative AI |
| Description | In a number of publications we have been exploring metrics and tasks for evaluating systems based on generative AI, particularly for summarisation and longitudinal tasks. We are in the process of creating an evaluation platform that brings together a range of tasks and metrics and allows individuals to: (1) browse available evaluation strategies for particular use cases and run evaluations with their own AI model or dataset, comparing results to existing evaluations; and (2) configure their own evaluation from a range of different tasks, aspects and metrics, with recommendations based on similarities to other sets of goals and requirements. |
| Type Of Technology | Webtool/Application |
| Year Produced | 2025 |
| Impact | None yet as not publicly released. |
| URL | https://adsolve-evaluation-platform.vercel.app/ |
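The kind of task-specific metric such a platform composes can be illustrated with a minimal unigram-overlap (ROUGE-1-style) recall score for summarisation. This is a generic sketch, not the project's actual metric suite, and the example strings are invented:

```python
from collections import Counter

def rouge1_recall(reference: str, candidate: str) -> float:
    """Fraction of reference unigrams that also appear in the candidate summary."""
    ref = Counter(reference.lower().split())
    cand = Counter(candidate.lower().split())
    # Clipped overlap: each reference token counts at most as often as it
    # occurs in the candidate.
    overlap = sum(min(ref[t], cand[t]) for t in ref)
    total = sum(ref.values())
    return overlap / total if total else 0.0

# Example: 4 of the 5 reference tokens appear in the candidate -> recall 0.8
score = rouge1_recall("the patient reported low mood", "patient reported low mood today")
print(score)
```

A real evaluation platform would pair many such metrics with task definitions and datasets; this sketch only shows the shape of one metric a user might select.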
| Title | SAGE-RAI Digital Assistant |
| Description | SAGE-RAI Digital Assistant is a modular, task-focused AI assistant service developed as part of the SAGE-RAI project, a collaboration between the Open University and the Open Data Institute (ODI) for educational purposes; it is currently being evaluated within the project. It allows users to create custom AI assistants that can be tailored for educational support and for other tasks such as employee guidance, knowledge management, data discovery, customer service, technical assistance, and bid writing. The system employs Retrieval-Augmented Generation (RAG), which combines general-purpose Large Language Models (LLMs) with task-specific knowledge bases to provide accurate and relevant responses. The architecture is flexible, enabling deployment on local systems or cloud-based platforms using LLMs from providers such as OpenAI or Anthropic. The service emphasises transparency, user privacy, and ease of use, aiming to democratise the creation of advanced AI assistants. Notable impacts resulting from the development: Lowered barrier to entry: the modular architecture and focus on task-specific assistants make it easier for non-technical users to create and deploy AI tools, reducing reliance on advanced technical expertise. Improved accuracy and relevance: by using RAG, the system minimises hallucinations (incorrect or fabricated responses) and ensures responses are grounded in specific, retrieved knowledge, enhancing reliability. Enhanced knowledge management: organisations can use the assistants to streamline knowledge recall, transfer, and discovery, improving efficiency and decision-making. Privacy and transparency: the system prioritises user privacy, stores data securely, and provides transparency by listing the sources used for responses, fostering trust in AI interactions.
Research and innovation: as part of the SAGE-RAI project, the development contributes to advancing AI research, particularly in retrieval-augmented systems and task-focused AI applications. Scalability and flexibility: the ability to deploy assistants locally or on cloud infrastructure, combined with support for multiple LLM providers, offers scalability and adaptability for diverse use cases. |
| Type Of Technology | Webtool/Application |
| Year Produced | 2024 |
| Open Source License? | Yes |
| Impact | The SAGE-RAI Digital Assistant has been deployed and used by the Open Data Institute (ODI) as the "ODI AI Assistant" to support internal activities. As part of the SAGE-RAI project, the development contributes to advancing AI research, particularly in retrieval-augmented systems and task-focused AI applications, as we contribute code to open-source resources such as "embed-js", a popular RAG back-end JavaScript library. |
| URL | https://github.com/SAGE-RAI |
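The RAG pattern described above (retrieve task-specific passages, then ground the LLM's prompt in them) can be sketched in a few lines. This is a minimal, self-contained illustration using a toy bag-of-words retriever and a prompt template, not the SAGE-RAI codebase; the knowledge-base strings and function names are invented for the example:

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a neural encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, knowledge_base: list[str], k: int = 1) -> list[str]:
    # Rank knowledge-base passages by similarity to the query, keep the top k.
    q = embed(query)
    ranked = sorted(knowledge_base, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, passages: list[str]) -> str:
    # Ground the LLM call in the retrieved passages and number them so the
    # model can cite its sources (supporting the transparency goal above).
    context = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    return f"Answer using only these sources:\n{context}\nQuestion: {query}"

kb = [
    "The ODI AI Assistant supports internal knowledge discovery.",
    "Blood pressure is measured in millimetres of mercury.",
]
passages = retrieve("How does the assistant help with knowledge discovery?", kb)
print(build_prompt("How does the assistant help with knowledge discovery?", passages))
```

In a production system the final prompt would be sent to an LLM provider's API; the retrieval step is what keeps responses grounded in the knowledge base rather than in the model's parametric memory.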
| Title | Social, Legal, Ethical, Empathetic and Cultural toolkit (SLEEC TK) for requirements specification, validation and verification |
| Description | SLEEC-TK is an Eclipse-based environment for defining SLEEC rules in a domain-specific language with a timed process algebraic semantics. SLEEC-TK uses model checking to identify redundant and conflicting rules, and to verify conformance of autonomous agent design models with SLEEC rules. |
| Type Of Technology | Software |
| Year Produced | 2023 |
| Open Source License? | Yes |
| Impact | Publication of a tool paper nominated for an award at ETAPS 2023 (https://etaps.org/2023/). Delivery of a tutorial at ASE 2023 (https://conf.researchr.org/track/ase-2023/ase-2023-tutorials?). |
| Title | StoryFutures ConnectXR Toolkit |
| Description | The ConnectXR Innovation Pipeline is a structured process for developing new XR technologies and bringing them into health and wellbeing settings. It is a collaborative effort that combines the imaginative strength of the arts, thoughtful planning from academic fields, local authorities' knowledge of community needs, and the hands-on experience of local health professionals. |
| Type Of Technology | Webtool/Application |
| Year Produced | 2024 |
| Impact | It was launched on 5th June 2024 |
| URL | https://www.storyfutures.com/resources/connectxr-audience-insight-toolkit |
| Title | The UnMute Toolkit |
| Description | The UnMute Toolkit is a collection of tools, methodologies and pipelines tailored to minority language technology design and development. It contains components to engage community members, collect spoken language recordings, train information retrieval models and deploy those models in community contexts. |
| Type Of Technology | Software |
| Year Produced | 2024 |
| Open Source License? | Yes |
| Impact | The toolkit has helped gather data, train models and quickly deploy speech-driven services in community contexts. |
| URL | https://unmute.tech/toolkit/ |
| Description | "Like rearranging deck chairs on the Titanic"? Feasibility, Fairness, and Ethical Concerns of a Citizen Carbon Budget for Reducing CO2 Emissions |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | Presentation at the ACM Conference on Fairness, Accountability, and Transparency, as part of the TAS Citizen Carbon Budget project. Radical and disruptive interventions are needed to reach "Net Zero" by 2050 to avert the climate catastrophe. Although governments, companies, cities, and institutions have pledged to take action and reduce their carbon emissions, the idea of personal carbon allowances or budgets for individuals has also been proposed as a potential national policy in the UK. In this paper, we employ a Research through Design approach to explore the notion of a carbon budget. We present combined results from two studies: firstly a workshop with members of environmental organisations (industry, charity, and policymaking) discussing the concept of a Citizen Carbon Budget (CCB) and app, from the wide perspective of societal desirability drawn from Responsible Research and Innovation (RRI); and secondly, a one-month deployment of a CCB mobile app with twelve members of the public based in the UK. Key findings from the combination of these approaches showed that the CCB app was fruitful in supporting awareness of personal carbon emissions and reflections about people's lifestyles. However, several concerns were raised, including the unfairness of treating all people equally in environmental policy, regardless of their background and context. We provide considerations for policymaking and design, including intertwined perspectives drawn from the differing approaches of individual and collective action. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://programs.sigchi.org/facct/2024/program |
| Description | 'AI Transparency Laws: Designing Regulatory Toolkits for SMEs' |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Joshua Krook presented 'AI Transparency Laws: Designing Regulatory Toolkits for SMEs' at The Centre for Law and Technology's Afternoon Tea Talk, Faculty of Law, University of Southampton (14 Jan 2025). |
| Year(s) Of Engagement Activity | 2025 |
| Description | 'They don't just fall out of trees': Nobel awards highlight Britain's AI pedigree |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | Commentaries from RAi co-investigators Professor Dame Calder and Professor Dame Hall are included in the Guardian article. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.theguardian.com/science/2024/oct/11/nobel-awards-highlight-britains-ai-pedigree-demis-ha... |
| Description | (Webinar) In Conversation with: PROBabLE Futures Project |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | In this webinar, Professor Marion Oswald from Northumbria University introduced PROBabLE Futures (Probabilistic AI Systems in Law Enforcement Futures), a four-year, £3.4M RAi-funded project. Led by Northumbria University and involving multiple UK institutions - the Universities of Glasgow, Northampton, Leicester, Aberdeen and Cambridge - the project critically examines the integration of probabilistic AI systems, such as facial recognition and predictive tools, within law enforcement. These systems offer potential benefits but also raise significant concerns, especially where the probabilistic nature of outputs is misinterpreted, leading to serious consequences. The project aims to develop a responsible, rights-respecting framework for the use of AI across various stages of the criminal justice system. Professor Oswald covered: Inspiration: the challenges of decision-making in law enforcement based on uncertain AI outputs. Ambition: building a justice-focused framework for AI deployment in law enforcement. Impact: creating a system that informs policymakers and law enforcement on responsible AI use. Objectives: - Mapping the probabilistic AI ecosystem in law enforcement - Learning from the past - Scoping for the future, including evaluation of contested technologies such as remote weapons scanning - Focusing on the practical use of AI and the interaction of multiple systems (chaining) - Using an XAI taxonomy and novel methods, including storytelling and mock trials using AI evidence - Establishing an experimental oversight body including members representing under-represented groups. This webinar was chaired by Professor Joel Fischer, Chair of the Research Pillar, RAi UK. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.northumbria.ac.uk/about-us/academic-departments/northumbria-law-school/probable-futures/ |
| Description | 100 Brilliant Women in AI Ethics™ - 2025 (EV) |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | Women in AI Ethics™. Women make up nearly 50 percent of the global workforce but only account for a third of the Science, Technology, Engineering, and Mathematics (STEM) workforce. This lack of diversity perpetuates gender stereotypes and shows up as bias in AI systems. In addition, the erasure of women and nonbinary people's contributions to this critical field further undermines our progress towards a more inclusive and ethical tech future. Since 2018, Women in AI Ethics™ (WAIE) has elevated the voices of experts from multidisciplinary backgrounds and showcased the role of diverse perspectives in responsible design and development of AI models and systems. WAIE publishes the "100 Brilliant Women in AI Ethics™" list annually, and hosts expert talks on the most urgent issues in responsible AI as part of its mission to make this space more diverse, ethical, and inclusive. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://womeninaiethics.org/the-list/of-2025/ |
| Description | 16. Responsible AI UK - an ecosystem for Responsible AI in the UK and beyond |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Invited talk at the Next International Workshop 2024 Singapore on Responsible AI and Large Language Models. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://next-events.github.io/workshop-dec-2024/ |
| Description | 25 Million Future Shapers Forum (SK) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | The 25 MILLION FEMALE FUTURE SHAPERS sideline event at UNGA's Summit of the Future, hosted by TAIFA, will elevate the issue of gender equality in AI. This forum will showcase the progress made in achieving gender parity in leadership and discuss strategies to address structural barriers hindering women's participation in the tech industry. Attendees will gain valuable insights and practical tools to promote inclusive processes, support women's career development, and ultimately double the representation of women in AI and tech. This event is an opportunity to accelerate progress towards a more equitable and innovative future, where girls and young women are empowered to shape the AI revolution. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.technovation.org/taifa/events/? |
| Description | A Roadmap for Ethical AI in Healthcare |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | This event brought together experts from a range of fields to discuss the ethics of implementing AI in healthcare. As Artificial Intelligence (AI) technology rapidly advances across healthcare, critical ethical challenges emerge. Is it safe for AI to prescribe medications? Can clinicians ethically rely on AI for diagnoses? How do we build trust in AI systems? What role does patient subjectivity play in this evolving landscape? Are we at risk of dehumanization in AI-driven medicine? And how can both clinicians and patients stay empowered during these transformative times? The day featured insightful keynote speeches and dynamic panel discussions that tackled the ethical dilemmas of implementing AI in routine clinical care. The speakers shared effective strategies for stakeholders to translate ethical principles into actionable practices, ensuring human integrity in medicine. Topics ranged from accountability and legal liability to the risks of dehumanization and disempowerment. The event was led by Dr Raquel Iniesta, Reader in Statistical Learning for Precision Medicine at the Institute of Psychiatry, Psychology & Neuroscience, King's College London. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai.ac.uk/research/international-partnership-projects/ |
| Description | A Robot Tour Guide at the Djanogly Gallery |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Presentation at the Museum Next Digital Summit about a robot deployment conducted in the Djanogly Gallery (Lakeside Arts), as part of the TAS TERPLAY project. The Museum Digital Summit 2025 brings together museums from around the world to share the insight and stories behind inspiring and transformational digital projects. The presentation describes how a unique and engaging visitor experience was created using a telepresence robot tour guide at the Boots Counter Culture exhibition in the Djanogly Gallery, Nottingham, UK. As robots become increasingly prevalent in public spaces, it is crucial to understand the perspectives and experiences of the diverse range of individuals in these spaces and to examine interactions with and around robots. The presentation outlines the design and preparation of the tour, shares the lessons learned in organizing and delivering the event, and highlights important considerations for developing a robot tour experience. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.museumnext.com/events/digital-exhibitions-summit/schedule/ |
| Description | AAAI - ATRACC (SDR) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Artificial intelligence (AI) has already become a transformative technology that is having revolutionary impact in nearly every domain from business operations to more challenging contexts such as civil infrastructure, healthcare and military defense. AI systems built on large language and foundational/multi-modal models (LLFMs) have proven their value in all aspects of human society, rapidly transforming traditional robotics and computational systems into intelligent systems with emergent, beneficial and even unanticipated behaviors. However, the rapid embrace of AI-based critical systems introduces new dimensions of errors that induce increased levels of risk, limiting trustworthiness. Furthermore, the design of AI-based critical systems requires proving their trustworthiness. Thus, AI-based critical systems must be assessed across many dimensions by different parties (researchers, developers, regulators, customers, insurance companies, end-users, etc.) for different reasons. We can call it AI testing, validation, monitoring, assurance, or auditing, but the fundamental concept in all cases is to make sure the AI is performing well within its operational design and avoids unanticipated behaviors and unintended consequences. Such assessment begins from the early stages of research, development, analysis, design, and deployment. Thus, trustworthy AI systems and methods for their assessment should address full system-level functions as well as individual AI-models and require a systematic design both during training and development phases, ultimately providing assurance guarantees. At the theoretical and foundational level, such methods must go beyond explainability to deliver uncertainty estimations and formalisms that can bound the limits of the AI; find blind spots and edge-cases; and incorporate testing for unintended use-cases, such as adversarial testing and red teaming in order to provide traceability, and quantify risk. 
This level of performance is critically important to contexts that have highly risk-averse mandates such as, healthcare, essential civil systems including power and communications, military defense, and robotics that interface directly with the physical world. The symposium track aims to create a platform for discussions and explorations that are expected to ultimately contribute to the development of innovative solutions for quantitatively trustworthy AI. The symposium track will last 2.5 days and will feature keynote and invited talks from accomplished experts in the field of Trustworthy AI, panel sessions, and the presentation of selected peer-reviewed papers. Potential topics of interest include, but are not limited to: Assessment of non-functional requirements such as explainability, including transparency, accountability, and privacy. Methods that use data and knowledge to support system reliability requirements, quantify uncertainty, or balance over-generalizability. Approaches for verification and validation (V&V) of AI systems and quantitative AI and system performance indicators. Methods and approaches for enhancing reasoning in LLFMs, e.g. causal reasoning techniques and outcome verification approaches. Links between performance, and trustworthiness and trust leveraged by AI sciences, system and software engineering, metrology, and Social Sciences and Humanities methods. Research on and architectures/frameworks for Mixture-Of-Experts (MoE) and multi-agent systems with an emphasis on robustness, reliability, and emergent behaviors in risk-averse contexts. Evaluation of AI systems vulnerabilities, risks and impact; including adversarial (prompt injection, data poisoning, etc.) and red-teaming approaches targeting LLFMs or multi-agent behaviors. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://sites.google.com/view/aaai-atracc/home |
| Description | AE Global Summit on Open Problems for AI |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Industry/Business |
| Results and Impact | AE Global Summit on Open Problems for AI, London, 23rd October 2024. Presentation from Prof Maria Liakata, AdSoLve lead/QMUL. Recorded presentation available on Youtube. Funding from Responsible Ai UK (KP0016). |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.youtube.com/watch?v=ePOIz6rWnYU&t=106s |
| Description | AI & A2J Webinar |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Third sector organisations |
| Results and Impact | Presented at the AI & A2J Webinar at Stanford University on AI and access to justice, which sparked questions, discussions, and possible future collaborations. |
| Year(s) Of Engagement Activity | 2024 |
| Description | AI & Countervailing Power (GN) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Gina facilitated this half-day consultation workshop for the Joseph Rowntree Foundation, which is beginning to explore a programme of funding at the intersection of social justice, poverty and AI. Gina co-convened the event with the Foundation, combining MCTD and JRF networks in an exercise to power-map the UK AI ecosystem. A key takeaway was the importance of bringing together new ways of talking about AI policy that include people's perspectives and change the debate. |
| Year(s) Of Engagement Activity | 2024 |
| Description | AI + National Security Symposium (SDR) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | An expert-led discussion about the opportunities and challenges that AI creates for national security. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://aifringe.org/events/ai-national-security-symposium |
| Description | AI Children's Summit |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Schools |
| Results and Impact | Event to capture children's views on AI and take the findings to the Paris AI Action Summit. Prof Greg Slabaugh, QMUL/AdSoLve, participated. Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.qmul.ac.uk/deri/news/items/queen-mary-university-host-worlds-first-childrens-ai-summit.h... |
| Description | AI Fringe |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Industry/Business |
| Results and Impact | Responsible AI UK co-organised the AI Fringe, a series of events hosted across London and the UK to complement the UK Government's AI Safety Summit, bringing a broad and diverse range of voices, from industry, civil society and academia, into the conversation. It provided a platform for all communities, including those historically underrepresented, to engage in the discussion, and it enhanced understanding of AI and its impacts so organisations may harness its benefits. One of the many outputs from this event will be a white paper. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://aifringe.org/ |
| Description | AI Fringe Panel with Which?: 'AI Slop? Consumers Say No!' |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Panel event at the AI Fringe, alongside three other panellists, discussing the impact of AI on consumers; hosted by Which? |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://aifringe.org/ |
| Description | AI Fringe Seoul Summit |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Industry/Business |
| Results and Impact | This event, where RAi UK was one of the partners, brought together a diverse range of voices from the UK, the Republic of Korea and France to reflect on the priorities and outcomes from the Seoul Summit: AI safety, innovation and inclusivity. It was an opportunity to reflect on how the global AI safety and responsibility conversation has evolved over the course of the Bletchley and Seoul Summits, and to pass the baton to France, which will host the next AI Safety Summit in early 2025. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://aifringe.org/ |
| Description | AI Just Can't Get Enough: Disinformation, AI, and Democracy - Munich Security Conference (GN) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Prof Gina Neff moderated a compelling session on Disinformation, AI, and Democracy at the Munich Security Conference. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://securityconference.org/en/msc-2025/agenda/event/ai-just-cant-get-enough-disinformation-ai-an... |
| Description | AI Regulation Workshop (Australia) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Held an AI regulation workshop at ANU in Australia with Australian defence, government, and academic participants. |
| Year(s) Of Engagement Activity | 2025 |
| Description | AI Regulation Workshop (USA) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Held an AI regulation workshop in Washington, DC, with participants from the FAA, NIST, JPL, TRAILS, ANU, Southampton, and MITLL. |
| Year(s) Of Engagement Activity | 2024 |
| Description | AI Standards Hub Global Summit 2025 |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | The Turing AI fellowship and the Keystone AdSoLve project will feature at the AI UK conference as well as the AI Standards Hub Global Summit, both in March 2025. Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://aistandardshub.org/global-summit/ |
| Description | AI Transparency Workshop (Antwerp, EU) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Industry/Business |
| Results and Impact | Joshua Krook and Jan Blockx organised the Antwerp AI Transparency Workshop hosted by the University of Antwerp. Jan Blockx conducted the workshop. Attendees included academics, policymakers, computer scientists, data scientists, AI ethics consultants, and regulatory and legal experts. Discussion informed the creation of the toolkit ('AI Transparency Regulation Toolkit for Responsible AI') and the forthcoming article: "Stakeholder perspectives on AI transparency for Small to Medium Sized Enterprises (SMEs) in the UK and EU." |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.eventbrite.co.uk/e/ai-transparency-workshop-antwerp-tickets-815251016737 |
| Description | AI Transparency Workshop (Bristol, UK) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | Peter Winter organised the Bristol AI Transparency Workshop hosted by the University of Bristol. The workshop was conducted by John Downer, Peter Winter, and Joshua Krook. Attendees included academics, policymakers, computer scientists, data scientists, AI ethics consultants, and regulatory and legal experts. Discussion informed the creation of the Antwerp workshop and the toolkit ('AI Transparency Regulation Toolkit for Responsible AI'), as well as the forthcoming article: "Stakeholder perspectives on AI transparency for Small to Medium Sized Enterprises (SMEs) in the UK and EU." |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.eventbrite.co.uk/e/ai-transparency-workshop-bristol-tickets-815299070467?aff=oddtdtcreat... |
| Description | AI Transparency Workshop - IP0015 |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | The RAI UK-funded 'Transparency Regulation Toolkits for Responsible Artificial Intelligence (AI)' project is one of the first empirical studies to tell us something about how Small to Medium Sized Enterprises (SMEs) are interpreting and implementing transparency rules in practice, and how they plan to in the future. This project, in particular, will explore how SMEs interpret and apply AI transparency rules against the backdrop of changing regulatory landscapes in the European Union (EU) and United Kingdom (UK) through two Responsible Innovation (RI) workshops. The EU and UK workshops are an opportunity to connect SMEs with the work of academics, policymakers, regulators, and their users, and to shape AI transparency guides that help SMEs navigate problems of AI transparency, existing transparency rules, and future transparency rules at different scales. The RI workshop in Antwerp will enable SMEs to actively engage in AI transparency governance and participate in the creation of an EU-specific guide to help SMEs comply with AI transparency requirements in the EU. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.eventbrite.co.uk/e/ai-transparency-workshop-antwerp-tickets-815251016737 |
| Description | AI Trustworthiness and Risk Assessment Scientific Seminar Series |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | The first online ATRASS (AI Trustworthiness and Risk Assessment Scientific Seminar) was held on 23 Jan 2025 at 4:00pm CET, following the successful ATRACC (AI Trustworthiness and Risk Assessment for Challenged Contexts) symposium held in Arlington, VA, last November. This monthly online seminar, open to all, will contain scientific presentations of interest to the community of researchers and practitioners in the domain. The sessions will be recorded and published for later viewing. The first edition will start with a presentation of the supporting programmes. Future sessions will mainly contain scientific talks and discussions. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://rai.ac.uk/events/ai-trustworthiness-and-risk-assessment-scientific-seminar-series/ |
| Description | AI and Blockchain: Challenges and Opportunities for Open Education and Lifelong Learning |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | This annual event serves as a platform for global collaboration and learning in the field of Open Education. Our webinar aims to explore the evolving landscape of education in the digital age, focusing on the impact of generative Artificial Intelligence (AI) and blockchain technology. Through meaningful discussions, we will delve into the potential synergies and implications of these technologies, aligning with UNESCO's 2019 Recommendation on Open Educational Resources (OER). |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://oeweek.oeglobal.org/activity/unlocking-knowledge-oer-ai-blockchain/ |
| Description | AI and rare disease: with great power comes great responsibility! |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | This was an informative meeting to present Responsible AI UK and raise awareness about incorporating AI into the treatment and diagnosis of rare medical conditions. |
| Year(s) Of Engagement Activity | 2024 |
| Description | AI between prejudices and misinformation: Gina Neff speaks, June 14, 2024 |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | In an interview with Sky TG24, Professor Gina Neff talks about AI and G7 |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://tg24.sky.it/mondo/2024/06/13/ai-pregiudizi-disinformazione-discriminazione-ginaneff |
| Description | AI comes to the Nobels: double win sparks debate about scientific fields (WH) |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | AI comes to the Nobels: double win sparks debate about scientific fields. Catch Professor Dame Wendy Hall's conversation on 'World at One', available on BBC Sounds. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.bbc.co.uk/sounds/play/m0023q3g |
| Description | AI for Digital Democracy public event hosted by Global Majority network People Powered |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Public event aimed at practitioners and policymakers |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.linkedin.com/feed/update/urn:li:activity:7297659656569606144/ |
| Description | AI for intelligence, investigations and public protection (KP0003 - PROBabLE Futures) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | This event is focused on AI for intelligence, investigations and public safety and is scheduled to take place online between 12:00 and 14:45 (GMT) on Monday 20th January 2025. The vision of PROBabLE Futures, a Responsible AI UK Keystone project, is for probabilistic AI in law enforcement to be one coherent system, with justice and responsibility at its heart. We want to see AI contributing positively and proportionately to preventing and detecting crime, upholding fairness, justice and the police's impartial service to the law, and so improving opportunities for citizens. To achieve this, we must pay attention to the 'chaining' of systems: the cumulative effect of AI outputs feeding into other parts of the criminal justice system. This workshop will focus on the start of the AI chain, the use of AI by policing for intelligence, investigation and public protection purposes. It will explore real case studies and consider the implications for other parts of the law enforcement system. Even AI for automating 'paperwork', which might seem innocuous, can have significant implications for the administration of justice. We will consider what factors should inform the decision on whether an AI tool should be used. This online event will have an invited audience comprising professionals primarily in law enforcement and government, alongside selected academic and commercial sector attendees, and the discussion will be conducted in accordance with the Chatham House Rule. Whether or not you are able to attend this particular event, if you wish to find out more about PROBabLE Futures and RAi UK, we encourage you to visit our website https://probablefutures.rai.ac.uk/, where you will find more information on the project, our partners and our research, as well as links to our most recent press releases and publications. 
Speakers: Professor Marion Oswald, PROBabLE Futures Principal Investigator, Northumbria Law School & The Alan Turing Institute; Matt Welsted, Assistant Chief Constable, Force Executive Team, West Midlands Police; Garry Pilkington, Digital Forensic Investigation Unit, Greater Manchester Police; Professor Dame Muffy Calder, Head of the College of Science & Engineering at the University of Glasgow. Industry panel representation from ROKE (Mark West, Assistant Director for Innovation), Microsoft (Ian O'Gara, Data & AI Lead, Public Safety), PA Consulting (Phininder Balaghan, AI Consultant), Palantir (Robert Shearme, Technical Lead for UK Policing), and BAE Systems Digital Intelligence (Professor Henry Tse, Head of Products and Services). |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://probablefutures.rai.ac.uk/ |
| Description | AI for intelligence, investigations and public protection - PROBabLE Futures kick off |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | An online event on AI for intelligence, investigations, and public protection took place on January 20, 2025, bringing together professionals from law enforcement, government, academia, and industry. Hosted by PROBabLE Futures, the event explored the role of AI in policing, focusing on its cumulative impact within the criminal justice system. Discussions included real case studies and ethical considerations for AI deployment in law enforcement. The event fostered informed dialogue on responsible AI use, aiming to ensure fairness, justice, and proportionality in crime prevention and investigation. |
| Year(s) Of Engagement Activity | 2025 |
| Description | AI in Healthcare (Leeds) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Public/other audiences |
| Results and Impact | This was a general talk about Responsible AI UK given in a community space in Leeds. The audience was diverse and interested to understand the pros and cons of applying AI into the healthcare sector. |
| Year(s) Of Engagement Activity | 2023 |
| Description | AI through an LGBTQ+ lens |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | This was a NEXUS breakfast with Professor Kate Devlin, Deputy Chair, Public Engagement, Outreach and Policy Pillar, RAi UK. Agenda: Quarterly LGBTQ+ News Digest, presented by Cynthia Fortlage; fireside chat with Kate Nash OBE, hosted by Dan Ricard; 'AI through an LGBTQ+ lens', a panel discussion facilitated by Birgit Neu with Dr Kate Devlin, Professor of Artificial Intelligence & Society at King's College London; Adam Charles, Product Manager working in Data Solutions at Mastercard Open Banking Services; and Ashly Cheung, Data & AI Strategy Consultant at Accenture. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai.ac.uk/events/ai-through-an-lgbtq-lens/ |
| Description | AI, music and the human spirit |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | The UKRI Trustworthy Autonomous Systems (TAS) Hub and Responsible AI UK are thrilled to announce an exciting event on Wednesday 12th June 2024, as part of London Tech Week: "AI, Music, and the Human Spirit". The event will be held at The Royal Society, London. We will be hosting Fernando Garibay, a visionary Director who has worked with artists such as Lady Gaga, Whitney Houston and U2, as our keynote speaker. Travelling all the way from Los Angeles, Fernando will share his insights on the intersection of AI, music, and the human spirit. In addition, we have an impressive lineup of speakers, including Ali Hossaini and Steve Benford, who will discuss their pioneering work in using AI responsibly to enhance creativity and improve people's lives. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.eventbrite.co.uk/e/ai-music-and-the-human-spirit-tickets-916479724527 |
| Description | AIPAS Presentation in Responsible AI UK Webinar Series |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | Introduction of the AIPAS project to academic and general audiences interested in Responsible AI |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://events.teams.microsoft.com/event/84314e6f-229b-4e0a-8292-157d3cb4ddba@8370cf14-16f3-4c16-b83... |
| Description | AIPAS presentation to BAE systems |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | AIPAS presentation was delivered to members of BAE Systems. The presentation sparked conversation on AI accountability and the challenges that come with it. |
| Year(s) Of Engagement Activity | 2025 |
| Description | AIPAS presentation was delivered at RISE-SD 2024, Greece |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | AIPAS presentation was delivered at RISE-SD 2024, a key event focused on security research and innovation, bringing together experts and EU officials to discuss challenges and solutions in areas such as fighting crime, cybersecurity, and border management. |
| Year(s) Of Engagement Activity | 2024,2025 |
| URL | https://rise-sd.net/ |
| Description | AIPAS presented at the 2024 Cumberland Lodge Police Conference |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | AIPAS delivered a presentation at the Cumberland Lodge Police Conference. The Police Foundation is the only independent think tank focused exclusively on improving policing and developing knowledge and understanding of policing and crime reduction. The presentation was also included in a report produced by Cumberland Lodge: https://www.police-foundation.org.uk/wp-content/uploads/2010/10/cumberland-lodge-2024.pdf |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.police-foundation.org.uk/ |
| Description | AIPAS presented at the Responsible AI UK All-Hands Meeting in Cardiff 2024 |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | AIPAS was showcased at the Responsible AI UK All-Hands Meeting in Cardiff in 2024. https://www.linkedin.com/posts/aipas-uk_aipas-ai-activity-7239264001593946112-YslB?utm_source=share&utm_medium=member_desktop&rcm=ACoAAEIU2D0BtVLg5DBkW4gscuf2tp1bh1CHSGg |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai.ac.uk/events/rai-all-hands-meeting/ |
| Description | AIPAS at the Responsible Research and Innovation (RRI) and Equality, Diversity and Inclusion (EDI) session |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | AIPAS submitted a report and presented at the RRI and EDI session at Nottingham University in September 2024 |
| Year(s) Of Engagement Activity | 2024 |
| Description | AIPAS tool demonstration |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | The AIPAS accountability tool was demonstrated to professionals in South Yorkshire Police. The demonstration and its outcome helped researchers and developers understand the requirements of the users and supported the validation process. South Yorkshire Police expressed interest in the tool and, during previous events, have requested access to it. |
| Year(s) Of Engagement Activity | 2025 |
| Description | AIPAS tool demonstration with the Metropolitan Police Service |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | The AIPAS accountability tool was demonstrated to professionals in the Metropolitan Police Service (MPS). The demonstration and its outcome helped researchers and developers understand the requirements of the users and supported the validation process. The MPS is a consortium member and has access to the tool; however, this demonstration was given to a wider audience within the MPS. |
| Year(s) Of Engagement Activity | 2025 |
| Description | AIPAS was presented to the research lead for the Office of the Police Chief Scientific Adviser. |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | AI accountability, the requirement for it, and the AIPAS project as a whole were presented to and discussed with the research lead for the Office of the Police Chief Scientific Adviser. The event provided fruitful conversations and heightened awareness of and interest in the AIPAS project. |
| Year(s) Of Engagement Activity | 2025 |
| Description | ART-I "Overview of RAi UK Cutting-Edge Projects," Women in RAi: Generative AI and the Representation of Women, Women in Tech Week, King's College London |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | Imke van Heerden was a co-organiser of and presenter at this event, which discussed the roles of women in technology and specifically in RAi UK, introducing our projects. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.tickettailor.com/events/responsibleai/1391860 |
| Description | ART-I "Creating Policy Futures: A Responsible AI and Careful Industries Unconference The Lowry Hotel, Manchester" |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Imke van Heerden, Liz Dowthwaite, Aisling Third, Joshua Kelsall, and Deborah Olukan from the ART-I project were roundtable facilitators and notetakers at this event, a half-day unconference sharing and shaping ideas for artificial intelligence (AI) policy in the cultural and creative industries. We identified and discussed where there is consensus and conflict on overall direction, priorities and detail. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://lu.ma/q03v8qrb |
| Description | ART-I "Generative AI in Media Creation, Palace of Westminster, London" |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Imke van Heerden was a roundtable participant in this event |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.policyconnect.org.uk/news/generative-ai-media-creation |
| Description | ART-I "Responsible Tests for the Public Procurement of AI" |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Imke van Heerden facilitated a roundtable and took notes on developing a set of responsibility measures that will help ensure government innovation can develop efficiently and in line with the public interest. Drawing on the views of panellists and workshop attendees from central government, local government, academia, law and civil society, this will inform an expert White Paper, to be published later in 2024. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://lu.ma/g3d7d1um |
| Description | ART-I "The Impact of Generative AI on Writing and Publishing: Industry Perspectives." Chartered Institute of Editing and Proofreading (CIEP) Annual Conference, Aston University |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Industry/Business |
| Results and Impact | Imke van Heerden was an invited speaker at the event. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.linkedin.com/posts/the-ciep_ciep2024-ciep2024-conferenceaston-activity-72269276551813365... |
| Description | ART-I "Workshop on AI and Creativity, European Conference on Artificial Intelligence, Santiago de Compostela, Spain" |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | Imke van Heerden was on the programme committee for this event |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://creai.github.io/creai2024/index.html |
| Description | ASA's Standing Council on Advice Research and Evaluation Workshop (#SCARE2024). |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Third sector organisations |
| Results and Impact | A team member spoke at a workshop for the advice sector, on a panel with Citizens Advice, discussing the opportunities and challenges of GenAI and the importance of education and training. |
| Year(s) Of Engagement Activity | 2024 |
| Description | AUA 2024: Panel Discussion: Artificial Intelligence |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Dr. Prokar Dasgupta (King's College London, Urology) concluded this fascinating plenary session by highlighting the evolving litmus test for AI's societal impact. While in the 1960s Alan Turing's query "Can Machines Think?" defined an AI algorithm's worth, contemporary scrutiny pivots towards the Weizenbaum test: "How does AI impact societies?". Democratizing artificial intelligence is essential. Dr. Dasgupta's proposition of the 3Cs (Countries, Companies, and Civil Society) underscores the collaborative nature of this endeavor. Only through global collaboration between academic, industrial, and societal stakeholders can AI truly catalyze transformative change while navigating the ethical nuances encapsulated in the Weizenbaum test: how does AI influence our quotidian existence? |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.urotoday.com/conference-highlights/aua-2024/aua-2024-prostate-cancer/151780-aua-2024-pan... |
| Description | AdSoLve meeting with NHS partners |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | AdSoLve partners engaged with stakeholders from the NHS and health organisations. Meeting held on October 25th 2024. Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2024 |
| Description | AdSoLve Project Launch and Networking Event |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | We held a launch event for AdSoLve on Thursday 5th December 2024 at The Alan Turing Institute. This was a great opportunity for the research team to meet with stakeholders and gather feedback. Presentation slides are available on the project website. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://adsolve.github.io/ |
| Description | Adverse events and safety in digital mental health trials. |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Industry/Business |
| Results and Impact | Workshop organised by Aislinn Bergin (AdSoLve partner, University of Nottingham) with MHRA and KCL colleagues to address adverse events in digital mental health (DMH). Responsible AI UK project KP0016. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://institutemh.org.uk/research/mindtech/1856-adverse-events-in-digital-mental-health-may-2024 |
| Description | An Interdependence Frame for (Semi) Autonomous Robots: The Case of Mobile Robotic Telepresence |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | Poster presentation and panel discussion at the TAS Symposium 2024. Technological advancements often promise to alleviate our reliance on one another through automating assistance. One such example is Mobile Robotic telePresence (MRP), which gives users the ability to move independently, whilst having a video call, by teleoperating a semi-autonomous robotic device. In this paper, we draw on the Interdependence frame for Assistive Technologies (AT) to question the underlying notion that technological improvements and the implementation of automation can truly create independent users. Applying the tenets of Interdependence to a case study of MRP use, we unravel the many interdependent relations that exist between direct users and various other supporting individuals. In doing so, we provide an example of how the frame of Interdependence can be applied outside of AT studies, to inspire more critical research on automated systems, taking into account the unavoidable reality that all people rely on one another. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://symposium.tas.ac.uk/2024/program/ |
| Description | Article for online platform - Berliner Gazette.De |
| Form Of Engagement Activity | Engagement focused website, blog or social media channel |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Public/other audiences |
| Results and Impact | Article written with support of AlgorithmWatch on emotional AI in financial services |
| Year(s) Of Engagement Activity | 2024 |
| Description | BBC Broadcasting House (radio) |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Public/other audiences |
| Results and Impact | Gave live interview about deepfakes |
| Year(s) Of Engagement Activity | 2024 |
| Description | BBC Leicester (radio) |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | Live radio interview for Sir David Attenborough's hometown regarding deepfakes made about him |
| Year(s) Of Engagement Activity | 2024 |
| Description | BBC News 24 (live TV) |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Public/other audiences |
| Results and Impact | Live TV interview to BBC international News 24 for segment on deepfakes |
| Year(s) Of Engagement Activity | 2024 |
| Description | BBC Radio 5 Live |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Public/other audiences |
| Results and Impact | Live radio interview on pornographic deepfakes |
| Year(s) Of Engagement Activity | 2025 |
| Description | BBC Radio Scotland |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | Interviewed for BBC Radio Scotland on "are we losing our grip on reality" related to proliferation of deepfakes, political and cultural climate, and bad state actors |
| Year(s) Of Engagement Activity | 2025 |
| Description | Bernd Stahl PI on RAISE project fireside chat |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | Bernd Stahl, PI on RAi UK RAISE (a first round Impact Accelerator project) attended the BCS in London to participate in a fireside chat at the CIO forum. During networking at this event, Bernd established some useful contacts for RAISE |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.linkedin.com/feed/update/urn:li:activity:7196217378592964608/ |
| Description | Blog |
| Form Of Engagement Activity | Engagement focused website, blog or social media channel |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | A multimodal blog detailing our workshop in Jakarta (in partnership with Monash University Indonesia) with regulators, journalists, academics and professionals. Key themes (hyperlocalisation; anthropomorphism; regulation vs. innovation; changing international alignments) are presented alongside suggested future actions. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://automatingempathy.ai/blog |
| Description | Blog post on AIPAS, titled "Who's accountable anyway" |
| Form Of Engagement Activity | Engagement focused website, blog or social media channel |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | Blog post highlighting the challenges law enforcement agencies (LEAs) face when using AI and ensuring accountability to all stakeholders. The post also addresses EDI in relation to the project. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Book Launch: The Atomic Human - Understanding Ourselves in the Age of AI, June 06, 2024 |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | Professor Gina Neff hosted a conversation with Neil D. Lawrence about his book, at the launch event. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.eventbrite.co.uk/e/book-launch-the-atomic-human-understanding-ourselves-in-the-age-of-ai... |
| Description | Britain needs to protect citizens' rights in the race for AI (GN) |
| Form Of Engagement Activity | A magazine, newsletter or online publication |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | Following the AI Action Summit in Paris, in a recent open letter to The Guardian, experts including Polly Curtis, Prof Robert Trager, Prof Gina Neff, Maeve Walsh, Jim Killock and Dr Jeni Tennison, urge the UK government to urgently develop a declaration of digital rights and principles, emphasizing the need to protect citizens' rights amid the rapid advancement of AI technologies. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.theguardian.com/technology/2025/feb/11/in-the-race-for-ai-britain-needs-to-protect-digit... |
| Description | British Academy Early Career Researchers' Network - AI in Research (GN) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Postgraduate students |
| Results and Impact | Gina Neff spoke at a workshop from the British Academy for early career researchers in humanities and social sciences, looking at AI and tech and how they could use AI responsibly in their research. Approximately 25 new researchers attended and speakers included the research head for the new UK AI Safety Institute. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Building a roadmap for progressive UK tech policy (GN) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | A half-day workshop, sharing and shaping ideas for technology policy in a new government. How might the next government approach tech and its impact on society? That question will largely be determined by which ideas are on the table, how they are advocated for, and who gets to be in the conversation. In this workshop we outlined the possible roadmaps for UK tech policy. We discussed the milestones on this map, and identified where there is consensus and disagreement on the priorities, detail and overall direction. We didn't start from scratch, but instead dug into where key initiatives have left off, including the Data Bill, the AI White Paper, and policy work developed inside and outside of government. We asked what needs to be kept, ditched, iterated and created, and how these fit together to help shape a comprehensive policy programme that puts technology in service of society. This workshop was for those working across the policy landscape: civil society, academia, industry, and government. Note that it was a closed, invite-only event. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.crassh.cam.ac.uk/events/43124/ |
| Description | CSaP Policy Workshop on the use of AI for research (GN) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | CSaP hosted this workshop with AI@Cam and Cambridge's Accelerate programme on AI in research. Gina participated alongside high-level attendees from the research community, exploring themes of how AI can be used responsibly in research, and what policy interventions could facilitate it. Attendees included the CEO of the science funder EPSRC, Royal Society and British Academy, Chief Scientific Advisors, DSIT, and academics. |
| Year(s) Of Engagement Activity | 2024 |
| Description | CUSO Winter School on Human-AI Collaboration - Lecture: Participatory Harm Auditing Workbenches and Methodologies PHAWM |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Postgraduate students |
| Results and Impact | This event was a lecture on Responsible AI, where students were introduced to the concept of participatory auditing and the PHAWM project. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Can London become the AI capital of the world? |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | Can London become the AI capital of the world? - The Bunker | Podcast - Keir Starmer's vision of London leading the global AI industry raises an intriguing question: can the UK truly challenge Silicon Valley's dominance? In this episode of The Bunker podcast, Prof Kate Devlin, our Deputy Chair PEOP Pillar, speaks with Prof Sana Khareghani, RAi Ambassador, to explore whether London can become a global AI hub or risks falling behind in the race for innovation. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://open.spotify.com/episode/4h7dKcEyKYFTjDYkVpIX5E |
| Description | Can renewable energy keep up with the exponentially increasing power demands of AI? (SDR) |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | A report by the Royal Academy of Engineering calls for the government to require mandatory reporting from tech companies on their power and water usage. They say the expansion of resource-hungry AI is creating a higher environmental risk that could lead to competition for water and energy in the future. Detailed monitoring is therefore necessary to manage these risks, says Gopal Ramchurn, a professor of artificial intelligence at the University of Southampton and the chief executive of Responsible AI UK. "The worst case scenario if we don't get this right, if we don't get this data and don't manage the use of natural resources by data centres, is that we will find ourselves making difficult choices when it comes to who has access to power in the UK - data centres or normal people," he says. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.itv.com/news/2025-02-10/can-renewable-energy-keep-up-with-the-increasing-power-demands-o... |
| Description | Carnegie Endowment for International Peace Security Summit |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Invited and attended India High Commission for Security Summit special reception |
| Year(s) Of Engagement Activity | 2025 |
| Description | Center on Organizational Innovation at Columbia University 25th Anniversary Conference (GN) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Gina Neff spoke at the opening event of this year's 'Problems of Democracy' conference, hosted by the Centre on Organizational Innovation at Columbia University. Gina's talk was titled "Can Democracy Survive AI?". She explored how AI is impacting democratic institutions and values, and how policymakers can meet these challenges. This talk will be published in the journal Sociologica in early 2025. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Centre for in vitro Predictive Models - DERI Workshop |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Local |
| Primary Audience | Other audiences |
| Results and Impact | Presentation on Multimodal AI. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.cpm.qmul.ac.uk/events/4661/cpm-deri-networking-event-qm-cpm-event/ |
| Description | Chair of a roundtable for The Science, Innovation and Technology Select Committee. |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Chaired a roundtable, "Building Fairer Systems - Confronting Algorithmic Bias in AI", hosted by Chi Onwurah MP for the Science, Innovation and Technology Select Committee. This was in my role as a Commissioner for the AI, Faith & Civil Society Commission - I was appointed to this role to represent Humanists UK (of which I am a Patron) and for my knowledge as a Co-I on the RAi UK programme. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.ai-commission.com/event/building-fairer-systems-confronting-algorithmic-bias-in-ai |
| Description | Chair of panel discussion for Creative UK |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Professional Practitioners |
| Results and Impact | Chaired a panel on AI and the creative industries for Creative UK. The panel followed a film screening of "The Moment of Truth", a documentary short on the impact of AI on photographic archives, and was itself followed by a networking and discussion session where I discussed the work of RAi with the audience. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.linkedin.com/posts/floranedelcusmith_if-theres-one-thing-that-extraordinary-times-activi... |
| Description | Chatham House Workshop 'Participation, People and Power in Data and AI' |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Workshop organised by the Liverpool Data Co-operative, the ESRC Digital Good Network and Elgon Social, facilitated by Reema Patel. Margaret Colling, a member of the PVAI People's Advisory Panel (facilitated by Connected by Data), took part. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Children and Screens webinar on Algorithms 'Youth and AI driven tech' |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Public/other audiences |
| Results and Impact | This was an online 'Ask the Experts' webinar that brought together panelists with expertise in computer science, human-computer learning, communications, and mental health. The aim was to survey the current state of data-driven media online, how algorithms work to increase and solidify bias, and what families need to know to develop the essential skills to cope with the growing influence of algorithmically-delivered content on youth's development, preferences and minds. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://www.childrenandscreens.org |
| Description | Class given for MSc Computer Science at Ulster University |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Postgraduate students |
| Results and Impact | Ben Bland was invited to give a class/workshop on "AI Ethics Vs the Real World" at Ulster University for their MSc in Computer Science. |
| Year(s) Of Engagement Activity | 2024 |
| Description | CogX LA - Fundamentals of Responsible Ai |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Working across the UK - partnering with research, industry, policy and civil society experts - Responsible AI UK (RAI) are preparing for a future powered by huge technological advances. They bring multi-disciplinary teams that will address fundamental challenges of RAI and form the core research pillars of the programme. They're crafting safeguards into this process, for a future that's in sync, where people, communities and wider society reap the benefits of rapid AI progress, equitably. But how do we bake checks and balances into a technology that's advancing at an unprecedented speed? Join Professor Gopal Ramchurn, RAI CEO, and Professor Sana Khareghani, RAI's Head of AI Policy, along with the recent awardees of their flagship keystone projects, announced live during the session, as they walk you through an exploration of the risks and opportunities with AI innovation. RAI's comprehensive research projects and insights are creating global solutions that span beyond the UK too, including establishing a global advocacy network for responsible AI use across Latin America, India and Africa. Learn how effective tenets for equitable AI ecosystems are influenced by wider inclusion, and broad societal benefit. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.cogxfestival.com/agenda |
| Description | CogX LA - AI, Innovation and Geopolitics: Cutting Through the Hype |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Policymakers/politicians |
| Results and Impact | A deep exploration into the politics of artificial intelligence, which isn't just changing how we work and live, but is reshaping - and being shaped by - geopolitical competition and international diplomacy. Understanding why governments increasingly see AI and its associated technology stack through a geopolitical and national security lens has never been more important. How should companies think about their growth strategies, and communicate with governments and investors about their technology, supply chains, cybersecurity, and data governance in today's more (geo)politically charged environment? Who gets to set the global rules for AI? And how can founders and CEOs help policymakers cut through the hype and understand what effective AI regulation looks like in areas like cybersecurity, biosecurity and beyond? |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.cogxfestival.com/agenda |
| Description | CogX Leadership Summit on AI |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | The following sessions featured our team:
o Advancing AI: Fostering Trust with Proactive Policy (10:55 - 11:35), session moderator Karen McLuskie - Deputy Director, Technology at the Department for Business and Trade (and Chair of the RAi Industry Advisory Board).
o Actionable Intelligence: How Advancing AI will Combat Cyber Crime (12:20 - 13:00), session moderator Karen McLuskie - Deputy Director, Technology at the Department for Business and Trade (and Chair of the RAi Industry Advisory Board).
o Demystifying Responsible AI: A Guide for Leaders (12:00 - 12:45), session moderator Gopal Ramchurn - CEO, Responsible AI UK.
o AI & Ethics: Scaling Safeguards for Industry (16:50 - 17:25), alongside other speakers, Sana Khareghani. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai.ac.uk/events/cogx-leadership-summit-on-ai/ |
| Description | Collaborative In-Person Workshop Co-Delivered by PROBabLE Futures and the Centre for Emerging Technology and Security (CETaS) at The Alan Turing Institute |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | In-person workshop focusing on how AI is transforming crime and the response to crime, with talks and discussions on: the role of AI in transforming online criminality; the challenges of AI-generated child sexual abuse material; mapping probabilistic AI tools being deployed, piloted or trialled in UK law enforcement; and data analytics and AI in the fight against modern slavery. |
| Year(s) Of Engagement Activity | 2025 |
| Description | College of Policing workshop, consultation on response |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | MO attended a College of Policing workshop to give feedback on the College of Policing Data Ethics and Data-Driven Technologies APP Consultation |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://researchportal.northumbria.ac.uk/en/publications/college-of-policing-data-ethics-and-data-dr... |
| Description | ConnectXR Audience Insight and Toolkit Launch |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Industry/Business |
| Results and Impact | Launch event for the ConnectXR Toolkit, developed by StoryFutures and the Department of Health Studies at Royal Holloway, University of London, in a ground-breaking Research & Development (R&D) collaboration with the Royal Borough of Windsor and Maidenhead and the University of Nottingham, which explores the role that next generation technologies can play in enhancing people's lives. ConnectXR is a StoryFutures innovation programme, which designed and tested a new innovation pathway for creative enterprises to create XR in health and wellbeing solutions from development to implementation. The programme was designed as a pilot to meet the ambitions of XR creative enterprises creating health and wellbeing solutions, to establish new routes to audiences through distribution with community assets, in this case libraries. StoryFutures' StoryTrails project and subsequent partnerships with Meta and the BFI resulted in ongoing collaborations with 17 libraries and 7 arts hubs and cinema venues in the UK, demonstrating that libraries and cinemas can play a key role in the successful engagement of communities in XR experiences. ConnectXR explored the challenges and opportunities facing creative XR enterprises when navigating the essential and complex network of health and community stakeholders needed to support the delivery of XR in health solutions in libraries. While creative companies are adept at creating engaging experiences, the direction and input from health services remain essential. This is where the innovation pipeline at the heart of ConnectXR plays a critical role in bringing together the creative spark of these companies with the practical, health-focused insights of healthcare professionals. Audience data is ordinarily hard to access yet critical in building the health economics needed to attract funding and support. 
Through an open competition, "Soul Paint", a VR application exploring physical and emotional wellbeing by the creative UK SME Hatsumi and its production partner Monobanda, was selected for a 2-week library pilot, during which audience research was conducted. Findings demonstrated that Maidenhead Library, as a strong community hub, was an appropriate setting for this type of innovation. Users of a wide range of ages and technical ability found it to be enjoyable, interesting and suitable. A large majority indicated they would repeat the experience and recommend it to others. There was a strong positive impact on wellbeing, through the opportunity to take time for reflection and to connect with others. Older users found the opportunity to try something new to be enriching and valuable. The pilot demonstrated that establishing a shared innovation pipeline is essential and effective in ensuring the development of high quality XR in health solutions, and that the involvement of the right stakeholders is not only critical to community deployment, but ensures delivery is aligned with local healthcare needs. Our lessons learned span the importance of networking, training, relationship building, engagement with communities, strong host support and careful planning of the audience journey. Connect XR broke new ground in building the key relationships needed for successful deployment of XR in health solutions within community assets. But more needs to be done to further test and explore this model to ensure it is a viable and sustainable route for Creative XR SMEs to innovate and reach new audiences. The opportunity exists, building on the new momentum within local authorities and integrated care boards, to think creatively about the future role of libraries and XR in health experiences. 
Greater collaboration between these new partnerships, social prescribing teams and Creative XR SMEs themselves can generate new and valuable routes to ensuring better health outcomes for all, experienced at the heart of local communities. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.storyfutures.com/resources/connectxr-audience-insight-toolkit |
| Description | Contribution to RAi UK's DSIT consultation regarding their proposed AI Management Essentials Toolkit |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Prof Liakata has contributed to RAi UK's DSIT consultation regarding their proposed AI Management Essentials Toolkit (January 2025). Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://rai.ac.uk/media/reports/ |
| Description | Creating Policy Futures: A Responsible AI and Careful Industries Unconference |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | This event in Manchester, expertly facilitated by Careful Trouble, saw over 60 academics, researchers, industry specialists, and creative practitioners meet, despite the weather, for an afternoon's unconference. Following opening provocations from creative coder Lex Fefegha, Dr Erinma Ochu (UWE), and Adam Ingle (LEGO Group), participants proposed discussion topics that ranged from copyright and the problems with "opt-out" to how theatres are handling AI. A panel discussion with Mia Leslie (IFOW), Henry Cooke (BBC) and Julia Bell (Birkbeck) led the second half of the afternoon, concluding with another round of discussions covering aspects such as sustainability, creative practice, public service, and grassroots resistance. The output from the discussions was recorded and will be used to feed into a rapid response to the government's Industrial Strategy 2035 and will form the basis of a whitepaper to be published in the new year. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai.ac.uk/events/creating-policy-futures-a-responsible-ai-and-careful-industries-unconferenc... |
| Description | Critical Conversations: Artificial Intelligence - a critical technology for a critical time? |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Dr Hayaatun Sillem CBE, Academy CEO, will chair this discussion on the fast-moving developments in artificial intelligence and regulatory approaches that enable its safe and equitable use. Our expert guest speakers are Dr Simon Baker, University of Cambridge; Enterprise Hub Fellow, and Professor Sana Khareghani, King's College London; Responsible AI UK. The panel will discuss the rapidly evolving challenges and opportunities the tech poses, the international policy dialogues that are taking place to harness it, and will explore how the UK is positioned relative to international comparators. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://raeng.org.uk/events/2024/jan/artificial-intelligence-a-critical-technology-for-a-critical-ti... |
| Description | DCMS Chief Scientific Advisor and Director of Digital, Data and Technology, Building Digital UK visited Sheffield to find out about AI research. |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | DCMS Chief Scientific Advisor Tom Crick and Prof Ajay Chakravarthy, Director of Digital, Data and Technology at Building Digital UK, visited Sheffield to find out about AI research. We presented our Public Voices in AI research and discussed it with them. Following this, other DCMS team members were invited to join future Public Voices in AI events and other Digital Good Network events. |
| Year(s) Of Engagement Activity | 2025 |
| Description | DP World AI in Practice: Challenges and Use cases |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Industry/Business |
| Results and Impact | This was a follow-up to last year's Tech Solent PwC event. We were invited to DP World Southampton to discuss AI in practice and presented our research on swarm robotics. DP World in turn presented use cases for adopting AI in their business and the challenges they currently face in deploying it. We are now discussing partnership opportunities arising from this. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.techsolent.org/event-details/ai-in-practice-dp-world |
| Description | DSIT International Day of Women and Girls in Science celebration (GN) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | To celebrate International Day of Women and Girls in Science, the UK Department for Science, Innovation and Technology asked six women working at the top of the industry what their advice would be to someone considering a career in STEM. Professor Gina Neff was featured in the celebration. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.instagram.com/p/DF8KWrBoe5z/?hl=en&img_index=4 |
| Description | DSTL presentation |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Presented responsible AI research advances to DSTL, focusing on deepfakes. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Dagstuhl Seminar on Resilience and Antifragility of Autonomous Systems |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | This Dagstuhl Seminar aimed to unify international research on resilient and antifragile autonomous systems (RAAS), leading to faster scientific advancement and industrial adoption. To that end, the seminar brought together leading researchers and practitioners with expertise in autonomous system resilience, antifragility, safety, and ethics from disciplines including Computer Science, Computational Biology, and Ethics, to share and discuss their understanding of, methods for, and open challenges related to RAAS. Participants worked closely together to: (1) survey current RAAS research in order to develop and document a common understanding of the RAAS research landscape; (2) identify open RAAS challenges and promising preliminary approaches to tackling them; (3) set an international research agenda for addressing these challenges; (4) define a roadmap for the delivery of this agenda; and (5) agree on use cases (e.g., from health and assistive care, transportation, aviation and aerospace) that can serve as benchmarks for evaluating future RAAS solutions. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.dagstuhl.de/en/seminars/seminar-calendar/seminar-details/24182 |
| Description | Data & AI Meetup: Ethics - Participatory Harm Auditing Workbenches and Methodologies [PHAWM] |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | Data & AI technologies are becoming ever more integrated into how we live, work and play. In the past several years, AI adoption has been widespread and rapid: big tech organisations have been racing to get in on the GenAI hype, and governments have announced roll-out plans to embed these technologies in the public sector. Ethics is crucial to ensuring fairness, transparency and accountability, not least when these technologies increasingly influence decisions in healthcare, finance and law enforcement. The responsible development of AI not only mitigates the risks these technologies can create for society, but also fosters trust among users, promotes social good and aligns technology with human values. The aim of this event was to discuss ethics in data & AI, including work on participatory AI auditing (the PHAWM project) to assess the quality and potential harms of AI, as well as responsible AI strategy, governance and monitoring. Leonardo Bezerra, Lecturer in AI/Data Science, University of Stirling, spoke on behalf of the Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://community.thedatalab.com/events/171604 |
| Description | Data for Policy 2024 (GN) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | The eighth international Data for Policy conference was held 9-11 July at Imperial College London. The conference delivered a packed programme, with a line-up of more than 130 presenters speaking on topics ranging from industry's perspective on open data to transforming AI into a force for good. It welcomed delegates from six continents, across a range of disciplines and roles including academia, commerce and government. Of particular note was the representation from Sub-Saharan Africa, whose delegates made valuable contributions to the conference special track 'AI, Ethics and Policy Governance in Africa', organised by the Global Center on AI Governance. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://dataforpolicy.org/data-for-policy-2024/ |
| Description | Decolonial AI workshop |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Local |
| Primary Audience | Other audiences |
| Results and Impact | Over 12 members of the university held conversations during and after the workshop, sparking an interest in further developing collaborations and work on the topic. A blog post has been created from these discussions. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai.ac.uk/decolonising-ai-what-why-and-how/ |
| Description | Demystifying Artificial Intelligence Conference MK |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Industry/Business |
| Results and Impact | Empowering Milton Keynes Businesses to Apply and Navigate Artificial Intelligence Effectively. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.eventbrite.co.uk/e/demystifying-artificial-intelligence-conference-mk-tickets-7595573655... |
| Description | Digital Policy workshop |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Led a Digital Policy workshop for the Department for Digital, Culture, Media and Sport (DCMS), delivered to all of its analysts (around 100 people). |
| Year(s) Of Engagement Activity | 2024 |
| Description | Digital Policy workshop |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Led a Digital Policy workshop for the Department for Digital, Culture, Media and Sport (DCMS), open to the whole of DCMS (attendance unknown). |
| Year(s) Of Engagement Activity | 2025 |
| Description | Digital Policy workshop |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Led a Digital Policy workshop for the Department for Digital, Culture, Media and Sport (DCMS), delivered to everyone working in Arts & Heritage (around 130 people). |
| Year(s) Of Engagement Activity | 2024 |
| Description | Digital Sensors in Mental Health Research: Social and Ethical Considerations |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | Participation in an expert workshop run by Wellcome on the ethical and social considerations of using sDHTs and sensor data in mental health research. This led to a publication. The Digital Health Measurement Collaborative Community (DATAcc) by the Digital Medicine Society (DiMe), in collaboration with the UCLA Depression Grand Challenge, and supported by an Academic Advisory Committee and Wellcome (see Acknowledgements), present here recommendations for advancing the use of sensor-based digital health technologies (sDHTs) for mental health research and clinical practice. This work builds on the results of the Digital Sensing Workshop held from February 28 through March 2, 2023, at UCLA. More than 50 leading mental health and computer science researchers, industry experts, advocates, and funders from six countries came together across five working groups to discuss a shared vision and common goals for incorporating sDHTs in mental health research and care. Additionally, we share insights from two expert workshops run by Wellcome in July 2024 on the ethical and social considerations of using sDHTs and sensor data in mental health research. Contributors came from Colombia, India, Kenya, South Africa, Uganda, the UK, and the US and brought clinical, commercial, community, lived experience, research, and technical perspectives. |
| Year(s) Of Engagement Activity | 2024,2025 |
| URL | https://datacc.dimesociety.org/mental-health/ |
| Description | Digital decisions - shaping our technology for all |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | The Weekly Tradecast looks at the evolution of science and technology. Since 1964, when UN Trade and Development was created, the world has seen a series of new realities, challenges and opportunities. These days, advances in artificial intelligence and other technologies are racing ahead, transforming education, finance, medicine and many other fields at a lightning pace. The benefits are enormous but so are the risks. As some countries, sectors and workers see great gains, others may lose out. With technology such a driving force, listen to Sana Khareghani, professor of practice in AI at King's College London, to find out how to get on track to a better future for all. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://unctad.org/podcast/digital-decisions-sana-khareghani-shaping-our-technology-future-benefit-a... |
| Description | Discussions around RAi with Kenyan delegation |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Trilateral Research and the team project RAISE, a RAi UK funded Impact Accelerator Project, were invited to present to a partnership and peer learning delegation from Kenya organised by the Foreign, Commonwealth and Development Office. The meeting took place in the FCDO offices in London. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://trilateralresearch.com/company_news/trilateral-research-discusses-responsible-ai-with-kenyan... |
| Description | Diverse Stakeholder workshops at the National Institute of Informatics, Japan |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | Two workshops were conducted with the NII bringing together academics (from across multiple disciplines), third sector organisations and consumer groups to explore cultural differences in the perception of Empathic AI-Human partnerships as well as tech regulation. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Does ChatGPT Outperform a Real Lawyer? (IP0015) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | A talk at the Centre for Law and Technology, Southampton, about ongoing research on regulating AI, transparency and accountability for tech companies. The talk covered a study with researchers at Nottingham on whether ChatGPT outperforms a real lawyer, and our new AI transparency toolkit for SMEs in the UK and Europe. Full details here: https://dl.acm.org/doi/10.1145/368603... |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.youtube.com/watch?v=7_tRmLzqjN8 |
| Description | EPIC CDT Seminar - Responsible AI - Participatory Harm Auditing Workbenches and Methodologies [PHAWM] |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Postgraduate students |
| Results and Impact | To introduce post-graduate students to the concept of Responsible AI and participatory auditing in the context of the Participatory Harm Auditing Workbenches and Methodologies [PHAWM] project. |
| Year(s) Of Engagement Activity | 2024 |
| Description | ESRC Festival of Social Science "Beyond the Landslide": Participatory Harm Auditing Workbenches and Methodologies [PHAWM] |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | Local |
| Primary Audience | Public/other audiences |
| Results and Impact | This event brought together leading academic experts, activists, and thought leaders to reflect on the big challenges facing UK and devolved politicians over the coming quarter century. The event took the form of a panel, with invited speakers offering a series of short provocations to spark discussion within the panel and encourage the audience to think broadly about the topics at hand. Dr Mark Wong, University of Glasgow, presented the PHAWM project in order to increase public awareness of AI bias and harms and of the importance of participatory auditing. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://festivalofsocialscience.com/events/after-the-landslide/ |
| Description | Ecological Empathy in the East and West: Insights from AEGIS Workshops, Tokyo, June 2024 |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | The Project AEGIS team was in downtown Tokyo, kindly hosted by partners at the National Institute of Informatics (NII), to explore regional cultural perspectives and ideas regarding technology at the intersection of emulated empathy and human-AI partnering. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://automatingempathy.ai/blog/workshops-tokyo-june-2024 |
| Description | Education and ethics in an AI-powered future: Event at the Scottish Parliament |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Invited by the Learned Societies' Group on Scottish STEM Education (LSG), convened by the Royal Society of Edinburgh, to attend "Education and ethics in an AI-powered future" (Tuesday, 4 March 2025) at the Scottish Parliament in Edinburgh; the event was sponsored by Ben Macpherson MSP. The event brought together expertise connected to recent developments in the education reform space, such as the Education (Scotland) Bill and implementation of the national discussion on education, the independent review of qualifications and assessment, the Withers review, and the purpose and principles for post-secondary education, research, and skills. There were introductions by Ben Macpherson MSP and by Dr Fiona McNeill of the LSG. The LSG includes representatives from several prominent organisations, e.g., the Association for Science Education and BCS, The Chartered Institute for IT (the British Computer Society). There was also a talk by Prof Liz Bacon FRSE (Principal and Vice-Chancellor of Abertay University) and a networking session with expertise from across the subject societies, providing a holistic perspective on issues affecting STEM teaching and learning. |
| Year(s) Of Engagement Activity | 2025 |
| Description | Engagement with Scottish Government Education and Education Scotland |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Meeting in January 2025 to discuss the research project with the Unit Head for Science, Technology, Maths, HWB, Curriculum & Qualifications Division of the Scottish Government and key representatives from Education Scotland (an executive agency of the Scottish Government that supports the quality and improvement of education in Scotland, the professional development of educators and improved learning experiences and outcomes for Scottish learners). |
| Year(s) Of Engagement Activity | 2025 |
| Description | Ensuring Responsible AI Through Methodological Diversity |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | This event, co-organised with the BRAID programme, brought together leading experts from different and distant disciplines to explore the diverse research methodologies and methods that can drive responsible AI forward. The audience were invited to participate and reflect on how different approaches and diversity of knowledge in AI research and development are key to ensuring a responsible future. From ethical considerations to technical advancements, the event included keynote presentations followed by a roundtable discussion and Q&A. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://rai.ac.uk/events/ensuring-responsible-ai-through-methodological-diversity/ |
| Description | Enterprise Fellowships Webinar |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | This fellowship is a 12-to-18-month accelerator programme to support university-based researchers with an entrepreneurial mindset to translate their research into new products or services. Proposed projects should demonstrate a clear path from conceptualisation to execution, backed by a business plan or canvas. Ultimately, the project should demonstrate the use of responsible AI practices or tools as part of a spin-out which will benefit the economy, society, culture, health, the environment or quality of life. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai.ac.uk/rai-calls/enterprise-fellowships/ |
| Description | Espacio Bionni (Como Alfredo por su casa) (EP) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Prof Elvira Perez Vallejos gave a talk in Spain, titled 'Responsible AI and the digitalisation of death' that focused on the rapid advancement of GenAI and how it is marking the beginning of a new era of technological innovation, including the development of 'DeathTech' or Digital Afterlife Industry (DAI). |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.youtube.com/watch?v=5vjnL2-Ua_I |
| Description | Ethical and Responsible AI Music Making Workshop |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | We held a one-day workshop on Responsible Music AI, with a focus on bias in AI music generation systems, on 17th July 2024 at the Creative Computing Institute, University of the Arts London, Holborn, London. We brought together over 100 people to form an interdisciplinary community of musicians, academics, and stakeholders to collaboratively identify the potential and challenges of using low-resource models and small datasets in musical practice. The workshop consisted of publicly streamed discussion panels, presentations of participants' work, and brainstorming sessions on the future of AI and marginalised music. The event was followed by an evening reception featuring live performances using AI and small datasets of music. Project: "Responsible AI international community to reduce bias in AI music generation and analysis". This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/Y009800/1], through funding from Responsible AI UK. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://music-rai.github.io |
| Description | European Biometrics Association talk |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Presented on audio deepfakes to the European Biometrics Association. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Evidence use in AI policymaking (KY) |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | Karen Yeung is an interdisciplinary professor at the University of Birmingham, specialising in AI. In this episode, she discusses with Toby Wardman the uses of AI in evidence-based policymaking, and the uses of evidence in AI policymaking. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://scientificadvice.eu/podcast/karen-yeung-on-evidence-use-in-ai-policymaking/ |
| Description | Explainable AI in biology Event |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Postgraduate students |
| Results and Impact | Explainable AI in biology: a presentation from Prof Maria Liakata, AdSoLve lead, QMUL. The recorded presentation is available on YouTube. Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.youtube.com/watch?v=6CZYuR0xqBk |
| Description | FATES of Africa: Advancing Responsible AI (IP0028) |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | It's been an amazing journey so far on our project. We've met with more than 180 people, including pupils, teachers, community members, academics and policymakers in Kenya, Nigeria, Cameroon and South Africa. Invaluable insights have been gained, and important new connections have been formed. So much has happened! |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://raifatesafrica.wordpress.com/2025/01/28/fates-of-africa-project/ |
| Description | FCDO Visit to Utah USA |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | The purpose of the activity was to honour the MOU between the UK and the State of Utah, USA, on Responsible AI. |
| Year(s) Of Engagement Activity | 2024 |
| Description | FRAME Conference |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Gina Neff gave a lightning speech at FRAME Conference, an event aimed at coordinating venture and responsible investment communities around the themes of ESG and responsible investing. Gina spoke about the everyday ethics of AI and left the audience with a call to action to work with workers and workplaces to power the AI transformation in a wide range of sectors. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Fireside Chats techUK - 2024 Highlights and 2025 Vision: Reflections and Roadmaps for AI, Skills and Procurement (PROBabLE Futures) |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | techUK's Justice and Emergency Services (JES) Programme hosted an afternoon of fireside chats, powered by the Justice and Emergency Services Management Committee (JESMC). The event brought together influential leaders from both the public and private sectors, creating a space to reflect on the impactful progress made throughout 2024 and to explore the strides taken and the positive change the programme has driven across three strategic focus areas: AI, Digital Skills, and Procurement. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.techuk.org/what-we-deliver/events/2024-highlights-and-2025-vision-reflections-and-roadma... |
| Description | Following the Data |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Postgraduate students |
| Results and Impact | Led the Following the Data workshop for a Centre for Doctoral Training (Safe & Trusted AI). |
| Year(s) Of Engagement Activity | 2025 |
| Description | Future Makers: Robotics and Emerging Technology event |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Public/other audiences |
| Results and Impact | On the 23rd of October, Nottingham Central Library became a hive of technical discovery as over 700 people joined us for Future Makers: Robotics and Emerging Technology. One of the main attractions was the University's pioneering Cobot Maker Space, who brought along their family of robots, including Robin, a humanoid-style robot with a poseable head and arms, and Spot, a Boston Dynamics dog-like mobile robot. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.nottingham.ac.uk/policyengagementblog/public-engagement-on-a-robotic-scale |
| Description | GM Connect Health Ecosystem - XR |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Immersive technologies represent the next generation of precision medicine and patient-centred care, with opportunities to deliver personalised, participatory, data-driven experiences (MindTech, 2021). Long-standing innovation partners the University of Manchester and Greater Manchester Mental Health NHS Foundation Trust (GMMH) want to drive the integration of immersive technologies into mental health treatment. The purpose of this workshop was to go beyond awareness raising and to facilitate new collaborations, develop research ideas, and build partnerships to generate new evidence, and strengthen existing evidence, to bring life-enhancing, XR-enabled treatments to more people more quickly. The event showcased developed XR mental health interventions currently in use in the NHS or close to real-world implementation, with an opportunity to experience these innovative technologies first-hand. It included: a strong service user presence, as researchers, PPIE contributors, XR facilitators and recipients of XR interventions; discussions on existing XR research, regulatory and governance frameworks and gaps; demos and first-hand experience of XR interventions; use-case presentations where XR may work but the evidence or interventions have not been developed, applied or tested; a 'world café' where attendees could mix with health ecosystem members from academia, health and industry to explore new applications of XR in mental health; and an opportunity to continue this collaboration with further workshops and research support for projects emerging from this event. Agenda: 09:00-09:30 Arrivals and tea/coffee; 09:30-09:45 Welcome, Professor Chris Taylor (Digital Futures Director) and Professor Tony Warne (Chair of GMMH NHS Foundation Trust Board); 09:45-10:00 Introduction to the Health Ecosystem, Professor Panos Constantinides (Manchester Business School); 10:00-10:25 Treating anxiety and phobias for people with ASC & LD, Jeremy Parr (Newcastle University) and Billy Webber (XR Therapeutics); 10:25-10:45 Discussion of XR research generation, Round Table 1, Aislinn Gómez Bergin (MindTech, Nottingham University); 10:45-11:00 Tea/coffee break; 11:00-11:30 AVATAR Therapy: extended reality therapy for auditory hallucinations, Dr Thomas Ward, AVATAR Therapy Co-ordinator (King's College London); 11:30-12:15 GameChange VR: exposure therapy for people with psychosis and agoraphobia, Sinead Lamb (Oxford VR), Heather Peel (GMMH Service User Researcher), Kate Kelly (GMMH Peer Mentor), Kichau Ramlaul (GMMH Peer Mentor) and John Sainsbury (GMMH); 12:15-12:45 Discussion of XR research generation, Round Table 2; 12:45-13:30 Lunch; 13:30-14:30 XR demonstrations (breakout): GameChange, AVATAR, XR Therapeutics, Paul Warren (The Decision Bridge), and data visualisation suite / robotic home help; 14:30-14:50 The possibilities and challenges of XR in mental health, Aislinn Gómez Bergin (MindTech, Nottingham University) and Jane Guest (Innovate UK, UKRI); 14:50-15:05 Tea/coffee break; 15:05-15:25 Discussion of XR research generation, Round Table 3; 15:25-15:45 Panel discussion, chaired by Aislinn Gómez Bergin, with Tom Ward, Jeremy Parr and Jane Guest; 15:45-16:00 Closing remarks and next steps; 16:00 Evaluation and feedback. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://events.manchester.ac.uk/event/event:u2h5-lqf8jfl0-1v1juc/gm-connected-health-ecosystem-xr-in... |
| Description | Gender Equity in AI |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | The intended purpose was to raise awareness of the lack of diversity in AI. The outcomes from this event fed into a white paper. |
| Year(s) Of Engagement Activity | 2023 |
| Description | Gender Equity in AI |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | It is vital to address the gender disparities and exclusions that persist within the field of AI, acknowledging the critical role that gender diversity plays in ensuring the responsible design, development, implementation and use of AI. We must move beyond potential and rhetoric, and take further tangible and transformative action towards gender equity in AI. This event brings together EDI experts, artists, industry innovators and gender advocates to share their experience and to propose provocations to spark discussion. Participants will have an opportunity to engage in group discussions surrounding these provocations and to explore concrete actions we can foster and support within the Responsible AI UK ecosystem. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://www.rai.ac.uk/events/gender-equity-in-ai |
| Description | Generative AI: Hopes and Fears - Dumfries |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Artificial Intelligence will solve all our problems. Artificial Intelligence will take away all of our jobs. Which is it? And what is it, anyway? Join a team of AI researchers from the Open University to explore recent developments in "Generative AI", covering what it is and what it can do, how it's being used in education and research, and reasons to be cautious, or optimistic, about it. Interactive lectures will be followed by discussion and Q&A with Prof. John Domingue (@johndmk), Dr. Aisling Third (@thirda), and Dr. David Pride (@davejavupride), of the OU's Knowledge Media Institute as well as Michael Gardiner, Digital Development Specialist for South of Scotland Enterprise (SOSE). |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.eventbrite.co.uk/e/generative-ai-hopes-and-fears-dumfries-tickets-827084099807 |
| Description | Global Child and Adolescent Mental Health Conference Gothenburg |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Policymakers/politicians |
| Results and Impact | International conference with academics, Wellcome Trust representatives and Swedish policymakers (including the Minister of Public Health). Prof Domenico Giacco, AdSoLve partner, University of Warwick, participated in the event. Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.gu.se/en/global-child-and-adolescent-mental-health-conference |
| Description | Global MOOC & Online Education Conference |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | Conference looking at how digital technology and AI are transforming education. Prof Greg Slabaugh, QMUL/AdSoLve, participated. Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.qmul.ac.uk/media/news/2024/pr/queen-mary-university-host-global-education-conference--.h... |
| Description | Guardian special podcast series about artificial intelligence (KD) |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | Guardian special podcast series about artificial intelligence - mentioned Responsible AI UK and importance of responsibility. |
| Year(s) Of Engagement Activity | 2023 |
| Description | HUMAN-CENTRIC AND OUTCOME-FOCUSED APPROACHES TO AI GOVERNANCE |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Other audiences |
| Results and Impact | This event was organised by Yulu Pi and Prof. Cagatay Turkay from the Centre for Interdisciplinary Methodologies at the University of Warwick, and Dan Gibbons from the Social Data Science Unit, FCA. Supported by the Warwick Policy Support Fund, this focused workshop brought together experts in policy-making, AI governance, behavioural studies and human-AI interaction. Discussions explored how human-centric and outcome-focused approaches can strengthen AI governance by prioritising societal and individual well-being. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Hackathon - India |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Schools |
| Results and Impact | The Responsible AI Hackathon was a dynamic and engaging event designed to introduce girls to the world of Artificial Intelligence (AI) and its responsible use in innovation. Hosted at the prestigious Indraprastha Institute of Information Technology Delhi in partnership with Responsible AI UK and Technovation, the hackathon aimed to inspire and empower the next generation of female leaders in technology. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.technovation.org/taifa/events/ |
| Description | Hands-on workshop in Responsible Innovation- TAS Symposium 2024 |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | A half-day workshop in which we introduced the principles and value of Responsible Innovation (RI), and a tool to support RI practice, to researchers and innovators: the Responsible Innovation Prompt and Practice Cards (RI Cards). Participants familiarised themselves with the RI Cards and used them in an RI reflective activity designed to identify possible responsible innovation issues within their projects, then shared their views with the main group. Participants were pleased with the workshop and provided positive feedback on the activity. Physical decks of RI Cards were handed to all participants, who expressed their interest in disseminating the RI Cards within their project teams. After the session, the organisers of the "Responsible AI in Health Research Workshop" - to be held on 25th April 2025 at the University of Missouri, USA (part of a RAi UK International Project activity) - invited Dr Portillo to present her work and co-facilitate the workshop with two members of that project. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Home Office technology demonstration |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Demonstrated technology on a visit to the Home Office in London |
| Year(s) Of Engagement Activity | 2024 |
| Description | Horizon Welfare Campaign project planning workshop |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Local |
| Primary Audience | Other audiences |
| Results and Impact | Workshop to develop a mental health technology project, with Aislinn Bergin, AdSoLve partner, University of Nottingham. Responsible AI UK project KP0016. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.horizon.ac.uk/project/the-horizon-adoption-of-wellbeing-technology-toolkit-hawt-toolkit/ |
| Description | How should the new government boost the UK's compute capacity and invest across the AI stack? (GN) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Gina Neff attended a private workshop co-hosted by Demos and DayOne on boosting the UK's compute capacity. The workshop was attended by a focused group including MPs, think tanks, civil society and government, including representatives of the Ada Lovelace Institute, Tech UK and the Tony Blair Institute. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Human Autonomy in the Age of AI |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Joshua Krook presented 'Human Autonomy in the Age of AI' at the DAIRNet Defence AI Seminar Series (27 March 2025). |
| Year(s) Of Engagement Activity | 2025 |
| Description | Human Autonomy in the Age of AI |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Joshua Krook presented 'Human Autonomy in the Age of AI,' at the AI, ML and Friends Seminar Series, Australian National University (ANU) (13 March 2025). |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://cs.anu.edu.au/ai-ml-friends/ |
| Description | Human Autonomy in the Age of AI |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Joshua Krook presented 'Human Autonomy in the Age of AI,' at CSIRO Canberra, 12th March 2025. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.eventbrite.com.au/e/the-ethics-of-ai-tickets-1200660450729?aff=ebdssbdestsearch |
| Description | IEEE Guest Lecture - Automating Empathy in Human-AI Partnerships: Issues, Ethics and Governance |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | This lecture considered General-Purpose Artificial Intelligence (GPAI) for 'human-AI partnering', focusing on the risks and opportunities of empathic human-AI partnering, what new governance (if any) is required, and the role that soft-law standards may play in leading and supporting hard law. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://technologyandsociety.org/event/automating-empathy-in-human-ai-partnerships-issues-ethics-and... |
| Description | IEEE training |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | RAKE project team-members worked with a third-party consultant and IEEE on the CertifAIEd training |
| Year(s) Of Engagement Activity | 2024 |
| Description | ILPC (Information Law and Policy Centre) Annual Conference 2024 AI and Power: Regulating Risk and Rights |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Invited speaker presentations: 'What role can data ethics oversight play in mitigating the risk of a probabilistic AI chain reaction in law enforcement?' and 'What can the use of the polygraph in criminal justice teach us about the regulation of scientifically contested AI?' The ILPC's 9th Annual Conference explored risk- and rights-based approaches to the regulation of AI-based systems, including generative AI, that are increasingly used across society, particularly the implications of these systems for the rights and responsibilities of individuals and organisations. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://ials.sas.ac.uk/events/ilpc-annual-conference-2024-ai-and-power-regulating-risk-and-rights |
| Description | Impact of AI on Equality and Human Rights (GN) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Gina Neff helped lead a private strategy workshop hosted by the Equality and Human Rights Commission that helped guide strategy for enforcing tech regulations in the UK and for using the framework of the Equality Act to pursue tech policy. Attendees included EHRC commissioners, members of the House of Lords, the EHRC Executive and the heads of their strategy, legal and policy divisions. |
| Year(s) Of Engagement Activity | 2024 |
| Description | In Conversation with: Dr Rafael Mestre and Prof. Nick Bryan-Kinns (Webinar) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Exploring Fairness and Bias of Multimodal Natural Language Processing for Mental Health
The world is grappling with an escalating mental health crisis. Emerging technologies such as Artificial Intelligence (AI), and in particular Natural Language Processing (NLP), are presented as promising tools to address these challenges by tackling online abuse and bullying, suicide ideation detection and other behavioural online harms. This international partnership project between the University of Southampton and Northeastern University is dedicated to leveraging the responsible use of AI in addressing mental health issues. With a core focus on ethical implementation, the partnership prioritises fairness and bias mitigation within AI models. Key initiatives encompass reciprocal resource sharing, model evaluations to identify and mitigate biases, workshops on policy, AI and mental/public health, and policy recommendations for the ethical integration of AI in mental health. In addition, the project will foster extensive collaborations with experts in AI, public health and mental health, engaging stakeholders from the outset to ensure that its approach to AI integration in mental health remains innovative, ethically sound and genuinely responsive to user needs.
Presenters:
- Dr Rafael Mestre, University of Southampton, New Frontiers Fellow. Working with Northeastern University.
- Annika Schoene, Research Scientist, Northeastern University.
Responsible AI international community to reduce bias in AI music generation and analysis
This project aims to establish an international community dedicated to addressing Responsible AI challenges, specifically bias in AI music generation and analysis. The prevalent dependence on large training datasets in deep learning often results in AI models biased towards Western classical and pop music, marginalising other genres. The project will bring together an international and interdisciplinary team of researchers, musicians and industry experts to develop AI tools, expertise and datasets aimed at enhancing access to marginalised music genres. This will directly benefit both musicians and audiences, engaging them to explore a broader spectrum of musical styles. Additionally, this initiative will contribute to the evolution of the creative industries by introducing novel forms of music consumption.
Presenter: Prof. Nick Bryan-Kinns, University of the Arts London. Working with Music Hackspace UK, DAACI UK, Steinberg Germany and Bela UK. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://rai.ac.uk/events/in-conversation-with-dr-rafael-mestre-and-prof-nick-bryan-kinns/ |
| Description | In Conversation with: Prof. Marion Oswald (webinar) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | In this webinar, Professor Marion Oswald from Northumbria University introduced PROBabLE Futures (Probabilistic AI Systems in Law Enforcement Futures), a four-year, £3.4M RAi-funded project. Led by Northumbria University and involving multiple UK institutions - Glasgow, Northampton, Leicester, Aberdeen and Cambridge Universities - the project critically examines the integration of probabilistic AI systems, such as facial recognition and predictive tools, within law enforcement. These systems offer potential benefits but also raise significant concerns, especially where the probabilistic nature of their outputs is misinterpreted, leading to serious consequences. The project aims to develop a responsible, rights-respecting framework for the use of AI across the various stages of the criminal justice system.
Professor Oswald covered:
- Inspiration: the challenges of decision-making in law enforcement based on uncertain AI outputs
- Ambition: building a justice-focused framework for AI deployment in law enforcement
- Impact: creating a system that informs policymakers and law enforcement on responsible AI use
Objectives:
- Mapping the probabilistic AI ecosystem in law enforcement
- Learning from the past
- Scoping for the future, including evaluation of contested technologies such as remote weapons scanning
- Focusing on the practical use of AI and the interaction of multiple systems (chaining)
- Using an XAI taxonomy and novel methods, including story-telling and mock trials using AI evidence
- Establishing an experimental oversight body including members representing under-represented groups |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai.ac.uk/events/in-conversation-with-prof-marion-oswald-webinar/ |
| Description | In Conversation with: Professor Maria Liakata |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | This cutting-edge research explores the critical challenges posed by the rapid adoption of Large Language Models (LLMs), particularly in the context of medical and legal use cases. Despite known limitations - such as biases, privacy leaks and poor reasoning - LLMs are being deployed without a full understanding of their potential consequences. While in some sectors these risks may be merely inconvenient, in safety-critical domains such as healthcare and legal systems the impact could be life-altering. For instance, LLMs are already being used by UK judges to summarise court cases, which could lead to serious misjudgements if the AI models reinforce existing biases or misinterpret crucial details. This project aims to address the socio-technical limitations of LLMs, focusing on two key goals:
- Developing a robust evaluation benchmark to assess LLM limitations in real-world scenarios, helping regulators and industry partners implement responsible practices.
- Designing innovative solutions, informed by law, ethics and healthcare experts, to mitigate the risks posed by LLMs and ensure their safe integration into products and services. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai.ac.uk/events/in-conversation-with-professor-maria-liakata/ |
| Description | In Conversation with: RAi UK's team |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | An exclusive live conversation with RAi UK's team featuring Professor Gopal Ramchurn, CEO - RAi UK, Professor Gina Neff, Deputy CEO - RAi UK, Professor Dame Muffy Calder, Chair - Skills Pillar, RAi UK and Professor Tom Rodden, Chair - Executive Management Team, RAi UK. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai.ac.uk/events/in-conversation-with-rai-uk-leadership-team/ |
| Description | In Conversation with: Tara Chklovski, Founder and CEO of Technovation |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | In this exclusive live session, Tara dived into the insights from Technovation's newly launched report, Women in AI: A Global Overview of a $200 Billion Innovation Opportunity. This powerful report is a key indicator of the progress being made by TAIFA, the only existing alliance working at the intersection of AI and gender on a global scale. It provides a road map for countries to set ambitious targets, implement effective AI-skilling programmes, and empower women to become the "Builders of Better AI". Tara also shared Technovation's key goals for 2025, discussing the new collaboration with Responsible AI UK and how she envisions it driving innovation in the tech industry while making a lasting impact on society. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai.ac.uk/events/in-conversation-with-tara-chklovski-founder-and-ceo-of-technovation/ |
| Description | Industry talk: Tech for Impact Seminar |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Industry/Business |
| Results and Impact | I was invited to talk to the LGBTQ+ network at IBM. The talk was hybrid: hosted locally (at IBM London) and online to the New York and other offices. It led to discussion and follow-up conversations about responsible AI in the workplace. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Inteligencia Artificial Responsable y la Digitalización de los Muertos (Responsible Artificial Intelligence and the Digitisation of the Dead) |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | I was invited by the cooperative "El Arte del Buen Vivir" (Almería, Spain) to give a talk to the general public. TV Almería interviewed us live. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://youtu.be/5vjnL2-Ua_I?si=1p8cy8T7LGtCZ3vo |
| Description | Inteligencia responsable y la digitalización de los muertos (Responsible intelligence and the digitisation of the dead) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Postgraduate students |
| Results and Impact | This was a series of talks given at Granada University to two different research groups: one at the new centre for AI and another at the Department of Psychology. The talks were very well attended, and numerous opportunities for further research emerged. |
| Year(s) Of Engagement Activity | 2025 |
| Description | International Association for Safe and Ethical AI |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Rob Procter, University of Warwick and AdSoLve partner, participated in the event. Responsible AI UK project KP0016. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.iaseai.org/conference |
| Description | International Collaborations to Advance Responsible AI |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | RAi session at the AI Fringe, London, that showcased real-world examples of responsible AI initiatives from Asia, Latin America and Africa, and raised the question of how to address the challenges and capture the benefits of international work for advancing AI that benefits humanity. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.youtube.com/watch?v=sbpiwJ4D7S0&t=14s |
| Description | International Workshop on Guiding Concepts for Responsible AI |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | This workshop featured positions that examined underpinning concepts critically in a transdisciplinary manner and explored their adoption in texts such as legislative frameworks and practice-oriented guidelines. The term 'guiding concepts' can be understood broadly for the purposes of the workshop, for instance in terms of epistemological, moral, empirical, philosophical or practical orientations, guidelines and reasoning systems that have been, or could be, drawn upon to underpin notions of trustworthy and responsible AI. The goal of the workshop was to map the landscape of such guiding concepts to enlighten the communities that are developing (e.g., researchers) and adopting (e.g., policymakers and practitioners) such concepts to drive responsible AI. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://sites.google.com/view/rai-concepts-workshop/home |
| Description | International discussion on AI regulation |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Discussion with lawyers in Brazil about AI regulation in the UK and globally |
| Year(s) Of Engagement Activity | 2024 |
| Description | Interview broadcast on STV News (Scotland) |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Public/other audiences |
| Results and Impact | The project leader (Dr Konstantina Martzoukou), the school project partner (Ms Emma Grey) and young project participants from Forfar Academy were interviewed by STV reporter Lynne Rankin. The video was broadcast on the live STV news on 30th November 2024. The purpose was to showcase the work to the general public as part of the 6 o'clock news. The report highlighted our work with school children across Scotland to develop a toolkit that teaches the positives and pitfalls of generative artificial intelligence (AI) and helps improve their understanding of the technology. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://news.stv.tv/north/scottish-school-pupils-helping-develop-toolkit-to-teach-the-positives-and-... |
| Description | Interview for BBC Radio 4 show - 'Artificial Human' |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | Andrew McStay was interviewed on the BBC Radio 4 show 'Artificial Human' for an episode on Emotional AI. McStay was the main guest and key contributor for the episode. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.bbc.co.uk/programmes/m0023dsw |
| Description | Interview for Festival of the Mind AI-generated choir performance (2024) |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | Peter was invited for interview for the Cor_alis_Choralis project: a choral piece exploring trust in the use of AI in medicine. Peter's interview was turned into music and recorded by a choir as part of the Festival of the Mind (2024) at the Spiegeltent, Barker's Pool. Cor_alis_Choralis is a choral piece performed by singers from the Kantos choir that uses voice to explore trust in the use of Artificial Intelligence (AI) in medicine. Produced through a collaboration between Professor Andy Swift (Sheffield Medical School), artists Emma O'Connor and Michael Day, composer Michael Betteridge, and Lucy Bramley, Cor_alis_Choralis is a cultural counterpoint to a research programme that uses AI to improve pulmonary hypertension diagnoses from chest scans, and also includes an installation of manuscripts taken from expert interviews. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://festivalofthemind.sheffield.ac.uk/2024/spiegeltent/cor_alis_choralis-exploring-trust-in-arti... |
| Description | Interview for Kollega - trade union member's magazine |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Interview about Emotional AI at work for Kollega, the members' magazine of Unionen, Sweden's largest trade union. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Interview for National News - The Guardian |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | Andrew McStay interviewed by The Guardian for an article on new forms of emotional AI. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.theguardian.com/technology/article/2024/jun/23/emotional-artificial-intelligence-chatgpt... |
| Description | Interview for SAS Software Podcast: Pondering AI |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | Ben Bland was the main guest on the 'Artificial Empathy' episode of the Pondering AI podcast. The episode highlights the affordances and risks of emotional AI and the importance of standards and regulation for this technology. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://pondering-ai.transistor.fm/episodes/ep57 |
| Description | Interview for national magazine (Wired) - on Emotional Ai and Smart Glasses |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | Interview for Wired Magazine with global digital reach. Promoted the work of the research unit. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.wired.com/story/emteq-smart-glasses-read-emotions-watch-what-you-eat/ |
| Description | Interview for national news: Times Radio |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | Expert commentary in the media on various talk programmes on Times Radio, which led to subsequent conversations on social media. |
| Year(s) Of Engagement Activity | 2024,2025 |
| Description | Interview for podcast |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | Peter Winter and Alan Chamberlain were invited by Matimba Swana to appear on the Black & Brown in Bioethics (BBB) Power & Privilege Podcast for an episode titled "Matimba in Conversation with Dr. Peter Winter and Dr. Alan Chamberlain on Gaps in Public and Community Engagement with Research." The discussion sparked a debate on effective approaches to public engagement and outreach, particularly in the context of AI. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://blackbrownbioethics.blogs.bristol.ac.uk/power-and-privilege-podcast/ |
| Description | Interview with ITV News |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | Dominic Price was interviewed at the Health and Safety Event 2024 for ITV evening news. This was broadcast on all regional news broadcasts across the UK. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Interview with National Public Radio Show 'Marketplace' |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | Andrew McStay and Ben Bland interviewed for "A tour of 'emotionally intelligent' AI" on Marketplace, a public media outlet in the US with international reach. (11.7 million weekly listeners across their portfolio.) |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.marketplace.org/2024/07/16/emotionally-intelligent-ai-artificial-intelligence-emotions-e... |
| Description | Interview with Politico on use of LLMs in social care management |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Public/other audiences |
| Results and Impact | Rob Procter, AdSoLve project partner, was interviewed by Politico on the use of LLMs in social care management. Responsible AI UK project KP0016. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Invited Lecture for BBC AI/Machine Learning Community |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | An invited lecture for the BBC AI/Machine Learning community on AI companions and empathy. The purpose was to keep a key media producer up to date on developments in this area and to encourage reflection on their own use of the technology. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Invited Seminar - Northeastern University |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Postgraduate students |
| Results and Impact | Seminar for staff and students at the Institute for Experiential AI, Northeastern University, Boston, USA on the topic of Large Language Models. It sparked discussion that was followed up during my week-long visit, and has led to an NSF-EPSRC grant proposal (to be submitted in 2025) on responsible AI for large language models. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Invited Seminar for Centre for Machine Intelligence (CMI) and London Information Retrieval (IR) Meetup |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | Invited seminar on "Large Language Models for Information Extraction and Information Retrieval". Impact was via good engagement after the seminar (many questions and requests for follow-up collaborations). |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.meetup.com/london-information-retrieval-meetup-group/ |
| Description | Invited Talk - Workshop, Exploring the Role of AI in the Armed Forces, KCL, London |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Invited expert talk for a roundtable discussion workshop with senior civil servants. This was a policy-focused workshop at King's College London in 2024 on "Exploring the Role of AI in the Armed Forces", with participants including the MOD, commercial organisations, charities and civil servants. The workshop leads (with input from speakers, myself included) have submitted a BMJ Military Health article and are developing a policy-focused consensus document with a view to using it for policy impact. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://kcmhr.org/pdf/2024-ai-delgate-pack.pdf |
| Description | Invited Talk at Violence Against Women and Girls (VAWG) Symposium 2023 |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Invited talk at the Violence Against Women and Girls (VAWG) Symposium 2023. Attendees included UK law enforcement (e.g. Hampshire & IoW Police) and NGOs; impact took the form of Hampshire & IoW Police agreeing to work with me on future collaboration. |
| Year(s) Of Engagement Activity | 2023 |
| Description | Invited lectures on LLMs and NLP methods in healthcare at the Oxford Machine Learning Summer School (Prof. Liakata) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Postgraduate students |
| Results and Impact | Prof. Liakata delivered two invited lectures on LLMs and NLP methods in healthcare at the Oxford Machine Learning Summer School https://www.oxfordml.school/health (Oxford, July 2024). |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.oxfordml.school/health |
| Description | Invited lectures on longitudinal NLP methods at the 7th Advanced Course on Data Science & Machine Learning (ACDL) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Postgraduate students |
| Results and Impact | Prof Liakata delivered four invited lectures on longitudinal NLP methods at the 7th Advanced Course on Data Science & Machine Learning (ACDL) (Tuscany, June 2024) |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://acdl2024.icas.events |
| Description | Invited panel member at OPSCA (Office of the Police Chief Scientific Adviser) workshop |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Contribution to Developing a Science & Technology Profession for Policing |
| Year(s) Of Engagement Activity | 2024 |
| Description | Invited speaker International Conference on Artificial Intelligence and Criminal Law |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Invited speaker 'AI and sentencing: a practice perspective' at the 'International Conference on Artificial Intelligence and Criminal Law' organised by the University of Bergamo and the European Research Group On Artificial Intelligence (ERGO) AI, Bergamo, Italy; 25th-26th October 2024 (E Tiarks) |
| Year(s) Of Engagement Activity | 2024 |
| Description | Invited speaker at Scottish Privacy Forum |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | A talk to share insights from the project with people interested in data, privacy and employee monitoring, aiming to increase awareness of how platform-based monitoring works. Resulted in invitation from Scottish AI Alliance to speak at launch of IFOW Pissarides Review. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://crisp-surveillance.com/scottish-privacy-forum |
| Description | Invited talk Turing AI fellows event, with policy makers (Birmingham, November 2024) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Third sector organisations |
| Results and Impact | Prof Liakata was invited to give a talk at the Turing AI fellows event with policy-makers (Birmingham, November 2024), where she spoke about her Turing AI Fellowship project and the RAi UK Keystone AdSoLve project. Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2024 |
| Description | Invited talk and panelist at invitation-only AI in mental health MHRA workshop (London, February 2025) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Invited talk and panelist at the invitation-only AI in mental health MHRA workshop (London, February 2025). Prof Maria Liakata, AdSoLve lead. Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2025 |
| Description | Invited talk at All Hands RAi UK Networking Event |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Other audiences |
| Results and Impact | The RAi UK community gathered at the Voco® St David's, Cardiff, for a full day of networking, collaboration, and insightful sessions to help shape the future of Responsible AI. Several members of the AdSoLve consortium participated: Maria Liakata, Rob Procter, Jenny Chim, Michael Schlichtkrull and Aislinn Bergin. Prof M Liakata was invited to give a talk about AdSoLve. Responsible AI UK project KP0016. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai.ac.uk/events/rai-all-hands-meeting/ |
| Description | Invited talk at Keystone roundtable with funders and policymakers (London, June 2024) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Other audiences |
| Results and Impact | Prof Liakata was invited to give a talk at the Keystone roundtable with funders and policymakers (London, June 2024). Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2024 |
| Description | Invited talk at Northeastern University (USA) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | Dr Mestre gave an invited seminar talk at the Institute for Experiential AI @ Northeastern University, in person in Boston, with hybrid attendance. He talked about his work on emerging technologies and responsible AI. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://ai.northeastern.edu/event/ai-ethics-and-citizen-input-in-emerging-technologies |
| Description | Invited talk at Tea and Talk Tech: Digital Mental Health |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Third sector organisations |
| Results and Impact | Talk for the Money and Mental Health Policy Institute. With Aislinn Bergin, AdSoLve partner, University of Nottingham. Responsible AI UK project KP0016. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.moneyandmentalhealth.org/ |
| Description | Invited talk at invitation only interdisciplinary symposium on LLMs in healthcare (Speinshart, March 2024) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Prof Liakata was invited to give a talk at the invitation-only interdisciplinary symposium on LLMs in healthcare (Speinshart, March 2024). Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2024 |
| Description | Invited talk at the Japan Advanced Institute of Science and Technology (JAIST) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Postgraduate students |
| Results and Impact | Dr Mestre gave an invited seminar talk at the Japan Advanced Institute of Science and Technology (JAIST), in person in Japan, with hybrid attendance. He talked about his work on emerging technologies and responsible AI for postgraduates, academics and undergraduates. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Keynote talk and attendance at The Health and Safety Event 2024 |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Industry/Business |
| Results and Impact | The Cobot Maker Space was invited by the Institution of Occupational Safety and Health to give a keynote talk on the future of work with robotics and AI, and also to exhibit at the event. The event reached over 14,000 visitors from industry and government, both national and international. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.healthandsafetyevent.com/ |
| Description | Keynote talk at the DEF-AI-MIA workshop at CVPR |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Keynote talk at the DEF-AI-MIA workshop at CVPR. Prof Greg Slabaugh, QMUL/AdSoLve, participated. Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://ai-medical-image-analysis.github.io/4th/ |
| Description | Knowing AI, Knowing U / AIKONIC - Final Celebration event |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Local |
| Primary Audience | Study participants or study members |
| Results and Impact | This event was the culmination of the Knowing AI, Knowing U / AIKONIC project we delivered as part of Public Voices in AI. Alongside participants from both groups we engaged with, attendees included invited academics, policy-makers and arts & culture professionals. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://thepeoplespeak.org.uk/public-voices-in-ai-2024/ |
| Description | Launch event |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | A six-hour public event in Edinburgh involving presentations about the project and new methods revealing worker-led survey results; premiere of new film; workshop on public engagement in the project; and panel of workers and researchers. Aimed to put the project findings and experiences of workers on the national policy/public agenda. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.eventbrite.co.uk/e/gig-work-in-edinburgh-an-inquiry-and-agenda-for-the-future-tickets-10... |
| Description | Lecture at Ulster University |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Postgraduate students |
| Results and Impact | Ben Bland was invited to give a lecture at the School of Computing at Ulster University on "AI Ethics and Responsible Innovation". |
| Year(s) Of Engagement Activity | 2024 |
| Description | LinkedIn posts from the AIPAS account that signpost and highlight engagement activities. |
| Form Of Engagement Activity | Engagement focused website, blog or social media channel |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | The AIPAS LinkedIn account has continually uploaded posts that highlight and signpost AIPAS activity, including posts to disseminate the Policing Insights publication, the Biometric Update blog post, AIPAS involvement in RIDE-SD 2024, AIPAS's presence at the Responsible AI UK All-Hands Meeting in Cardiff 2024 and the first AIPAS Delphi workshop, and posts to highlight AIPAS EDI considerations, AIPAS tool development and strategic areas of the project. |
| Year(s) Of Engagement Activity | 2024,2025 |
| URL | https://www.linkedin.com/posts/aipas-uk_ai-artificialintelligence-aiaccountability-activity-72555550... |
| Description | London Tech Week |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Professor Sana Khareghani, among others, shared her thoughts on the interplay between AI and talent at a breakfast roundtable, a fringe event of London Tech Week 2024. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://londontechweek.com/fringe-events |
| Description | Lovelace-Hodgkins Symposium on AI Ethics: Supporting stakeholders to assess AI: Using the Participatory Harm Auditing Workbenches and Methodologies [PHAWM] approach to develop Responsible AI |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Third sector organisations |
| Results and Impact | This event aimed to prompt insightful discussions towards shaping a more ethical and inclusive future for artificial intelligence (AI) and, in doing so, produce actionable recommendations and a collaborative vision for an ethical framework in AI. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.gla.ac.uk/research/az/datascience/events/lovelace-hodgkinsymposium/ |
| Description | MHRA AI in Mental Health Workshop |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Other audiences |
| Results and Impact | The Medicines and Healthcare products Regulatory Agency (MHRA) hosted a presentation showcase and interactive workshop to explore some of the opportunities and challenges of regulating and evaluating AI in Digital Mental Health Technologies (DMHTs). This all-day workshop offered an opportunity to network, take part in breakout discussions and activities, and hear presentations on topics including the responsible use of AI, industry perspectives, and the future of AI in mental health. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.eventbrite.co.uk/e/mhra-ai-in-mental-health-workshop-tickets-1097183438199 |
| Description | Meeting with Ada Lovelace Institute |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Third sector organisations |
| Results and Impact | Two meetings with the Ada Lovelace Institute, a stakeholder of the project. With Jiahong Chen (AdSoLve partner, University of Sheffield) and Prof Maria Liakata (QMUL). Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2024 |
| Description | Meeting with Canon Medical Systems |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | Local |
| Primary Audience | Industry/Business |
| Results and Impact | Presentation on multimodal AI. Prof Greg Slabaugh and Prof Maria Liakata (QMUL/AdSoLve) participated. Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2024 |
| Description | Meeting with ICO |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Meeting with the ICO, a project stakeholder. With Jiahong Chen (AdSoLve partner, University of Sheffield) and Prof Maria Liakata (QMUL). Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2024 |
| Description | Meeting with Mishcon de Reya |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | Meeting with Mishcon de Reya, a project stakeholder, about research directions and feedback. With Jiahong Chen (AdSoLve partner, University of Sheffield), Rob Procter (University of Warwick) and Tom Sorell (University of Warwick). Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2024 |
| Description | Meeting with Pinsent Masons |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | Meeting with Pinsent Masons, a project stakeholder, about research direction and industry feedback. With Jiahong Chen (AdSoLve partner, University of Sheffield), Rob Procter (University of Warwick) and Maria Liakata (QMUL). Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2024 |
| Description | Meeting with Solicitors Regulation Authority |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Meeting with the Solicitors Regulation Authority. Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2025 |
| Description | National Institute of Informatics Open Day |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Industry/Business |
| Results and Impact | We presented a poster detailing our current work on Empathic AI-Human Partnerships and Standards, as well as the previous work that motivated it, as part of the National Institute of Informatics Open Day in Tokyo, Japan. Attendance was over 100, with a diverse audience including academics, policymakers and the general public. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.nii.ac.jp/en/event/openhouse/ |
| Description | Navigating the Future - AI's Impact on Education |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | The event targets a diverse audience of 50 to 100 individuals, drawing participants from various disciplines, including AI researchers, developers, and educational researchers, as well as practitioners and policymakers. This inclusive approach ensures a multifaceted exploration of AIEd, fostering collaboration and knowledge exchange. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://events.kmi.open.ac.uk/ai-and-ed/ |
| Description | News article post for Biometric Update titled "Will change in the White House sound a different tune for AI-driven biometrics in law enforcement" |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | News article published that directly relates to the AIPAS project and the need for accountability in biometrics. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.biometricupdate.com/202501/will-change-in-the-white-house-sound-a-different-tune-for-ai-... |
| Description | Newspaper article (Byline Times) |
| Form Of Engagement Activity | A magazine, newsletter or online publication |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | Articles in Byline Times (in print and online) about issues of AI and responsibility. |
| Year(s) Of Engagement Activity | 2024,2025 |
| URL | https://bylinetimes.com/author/katedevlin/ |
| Description | Newspaper coverage of worker survey |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | An article written by two of the team and published in the print and online version of The National, a daily newspaper. Aim to get the research on political and policy agendas and give workers public voice. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.thenational.scot/news/24790745.delivery-rider-survey-reveals-exploitative-system-edinbur... |
| Description | Nuffield Foundation on AI Governance (GN/WH/DC) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Gina Neff participated in this private interdisciplinary strategy workshop co-convened by the Nuffield Foundation and the Ada Lovelace Institute and co-hosted by Dame Prof Diane Coyle and Dame Prof Wendy Hall. The afternoon focused on what was needed for AI to deliver positive impacts for society and what strategy the Nuffield Foundation could take in the UK. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Nursing and Midwifery Council AI Roundtable |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Rob Procter, University of Warwick, participated in the networking event on behalf of the AdSoLve project. There were a series of presentations and discussions throughout the day on AI, with a view to incorporating more digital and AI competencies into nursing standards. Responsible AI UK project KP0016. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://ai-nurses.kcl.ac.uk/hello-world/ |
| Description | OFCOM - Global Online Safety Regulators Roundtable (GN) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Gina attended, in her capacity as a UK AI expert, this gathering of worldwide regulators, including the heads of safety regulators from France, Australia, Fiji, Ireland, Korea, the Netherlands, Slovakia, South Africa and the UK. The event was an excellent chance to be part of the international conversation on regulatory approaches to online safety. |
| Year(s) Of Engagement Activity | 2024 |
| Description | OFCOM session on Researcher Access to Data (GN) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Gina attended this small private workshop hosted by OFCOM on the current researcher access to data landscape. Other attendees included academics and civil society stakeholders, and government and regulatory bodies. Discussion informed the upcoming report and consultation from OFCOM required by the Online Safety Act and will shape other government responses (namely under the new Data Use Bill) on this vital topic. |
| Year(s) Of Engagement Activity | 2024 |
| Description | ONS Research Excellence webinar (Marion Oswald) |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | Research Excellence talk by Professor Marion Oswald in August: "Ethical review to support Responsible Artificial Intelligence (AI) in policing: A preliminary study of West Midlands Police's specialist data ethics review committee". She shared the results of the research for the first time at her presentation and agreed to share the report once it was released. Press release: https://www.northumbria.ac.uk/about-us/news-events/news/police-use-of-ai-more-responsible-with-an-independent-data-ethics-advisory-committee/ and summary: https://researchportal.northumbria.ac.uk/en/publications/ethical-review-to-support-responsible-artificial-intelligence-ai-. The report makes a number of recommendations for the police, the PCC's office and national policy-makers. Its main findings are also summarised in this article published by the Centre for Emerging Technology and Security (CETaS): https://cetas.turing.ac.uk/publications/bridging-gap. A recording of the talk is available on the ADR UK YouTube channel: https://www.youtube.com/playlist?list=PLGhCBVcBzlGBkbYRVpCOxrVm2XIwZ93LC |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.youtube.com/playlist?list=PLGhCBVcBzlGBkbYRVpCOxrVm2XIwZ93LC |
| Description | Online magazine article |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | Article explaining project on Bella Caledonia, respected political magazine |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://bellacaledonia.org.uk/2024/12/08/self-organising-workers-in-the-gig-economy/ |
| Description | Online news article - 'We're bringing the Peelian principles to the 21st century by applying AI to them' |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | News article for Policing Insight that discusses the benefits and challenges of artificial intelligence (paywalled). |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://policinginsight.com/feature/interview/prof-babak-akhgar-were-bringing-the-peelian-principles... |
| Description | PHAWM launch event (2024) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | The launch of the Participatory Harm Auditing Workbenches and Methodologies (PHAWM) project. Led by a consortium of 7 academic institutions in collaboration with 24 partner organisations, PHAWM is set to transform AI auditing. The project addresses the complex challenge of determining the benefits and harms of generative and predictive AI systems by involving a diverse group of stakeholders without AI expertise, including end-users, regulators, decision subjects, and domain experts, and spans four key areas: Health, Media Content, Cultural Heritage, and Collaborative Content Generation. The launch event, held at the University of Strathclyde Technology and Innovation Centre, presented the project's vision for workbenches for AI assessment, comprehensive methodologies for participatory audits, and a certification framework for AI solutions, and offered opportunities to network with leading figures from industry, cultural heritage, academia, and AI ethics and to engage with the project and keep up to date with future developments. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://phawm.org |
| Description | PMAC 2025 (SDR) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Gopal spoke at the "Harnessing technologies in the Age of AI to build a healthier world" event in Thailand. The PMAC 2025 emphasized leveraging these technological advancements to ensure equitable, affordable, and comprehensive access for all populations, especially in low- and middle-income countries and for resource-constrained individuals in high-income countries. The conference highlighted the importance of synergizing technologies to strengthen health systems, achieve the SDGs, and foster a healthy planet. Key issues such as climate change, conflict, and emerging diseases were discussed. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://pmac-2025.com/ |
| Description | Panel discussion, Norwich Science Festival |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Public/other audiences |
| Results and Impact | One of three panellists talking about the future of medicine at a public-facing comedy panel event at Norwich Science Festival. Members of the audience approached me afterwards to discuss my work and express their interest. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://norwichsciencefestival.co.uk/whats-on/future-medicine-showdown |
| Description | Panel discussion: AI through an LGBTQ+ lens |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Industry/Business |
| Results and Impact | Nexus LGBTQ is a network for LGBTQ Employee Network Leaders in London, UK. I spoke on an expert panel about AI bias and how it impacts the LGBTQ+ community. This led to discussion afterwards, and an invitation to talk to other industry groups. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://outleadership.com/driving-equality/nexus-lgbtq-the-network-for-lgbtq-employee-network-leader... |
| Description | Panel on AI at The Amanpour Hour with Christiane Amanpour, CNN, June 15, 2024 |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | Professor Dame Wendy Hall was invited to join a panel discussion on AI among industry leaders, around the conversation 'What does it mean to be human, and how is technology changing us, for better or worse?' |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://archive.org/details/CNNW_20240615_150000_The_Amanpour_Hour/start/300/end/360 |
| Description | Panelist at Cambridge University society of ethics of AI event |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Public/other audiences |
| Results and Impact | The panel included four experts in the ethics of AI in healthcare. I gave a presentation on my research, and the four panellists then discussed with the public important challenges such as data security, dehumanisation in medicine, and the disempowerment of practitioners and patients. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://lnkd.in/eJZRBEfQ |
| Description | Panelist at Turing AI UK |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Other audiences |
| Results and Impact | Jennifer Williams was invited to speak on a panel at Turing AI UK on Responsible AI |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://ai-uk.turing.ac.uk/ |
| Description | Panelist at the LLMs for science: best practices for safe and effective deployment workshop (Cambridge, September 2024) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Third sector organisations |
| Results and Impact | Prof Liakata was a panelist at the 'LLMs for science: best practices for safe and effective deployment' workshop (Cambridge, September 2024). Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2024 |
| Description | Participation in Connect XR Audience Insight and Toolkit Launch |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Industry/Business |
| Results and Impact | Launch of a toolkit to support XR developers in health and wellbeing, with Aislinn Bergin, AdSoLve partner, University of Nottingham. Responsible AI UK project KP0016. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.storyfutures.com/resources/connectxr-audience-insight-toolkit |
| Description | Participation in Curriculum Review Cycle meeting (Education Scotland) |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | A process is in place for reviewing and evolving Scotland's Curriculum, the Curriculum Improvement Cycle (CIC). Work has begun but is still at an early stage of planning. The Scottish Government has established a short-life dedicated co-design group, made up primarily of teachers, which met four times between December 2022 and May 2023. Practitioners from across all local authority areas, curriculum areas and settings were also invited to join the group. The purpose of the group was to develop a draft model of curriculum review for Scotland. Additional engagement took place within existing forums in 2023, including the Teacher Panel, Association of Directors of Education in Scotland (ADES) Curriculum and Qualifications Group, Building the Curriculum Self Help (BOCSH) Group and the Curriculum and Assessment Board (CAB). I was invited to join the next CIC group. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://blogs.glowscotland.org.uk/glowblogs/cices/ |
| Description | Participation in RAi Responsible and Adaptive Robots Co-Production Workshop |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Local |
| Primary Audience | Study participants or study members |
| Results and Impact | Participation in the RAi Responsible and Adaptive Robots Co-Production Workshop, with Maria Waheed and Aislinn Bergin. Responsible AI UK project KP0016. |
| Year(s) Of Engagement Activity | 2025 |
| Description | Participation in RAi4MH Workshop |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Participation in the RAi4MH Workshop, with Aislinn Bergin, project partner for AdSoLve. Responsible AI UK project KP0016. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://zenodo.org/records/14044362 |
| Description | Participation in a Formal Working Group - Engagement and Negotiation at CEN-CENELEC JTC-21 Working Groups |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Milla Vidina attended and participated in 9 remote/hybrid meetings, plus one in-person meeting, negotiating and advocating for equality by design, deliberation and oversight principles to be incorporated into the development of harmonised technical standards, following Art. 40 of the EU AI Act. These negotiations are ongoing. |
| Year(s) Of Engagement Activity | 2024,2025 |
| Description | Participation in a Formal Working Group - Engagement and negotiation at CEN-CENELEC JTC-21 Working Group 2 |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Prof. Dr. Karen Yeung attended and participated in 59 remote/hybrid meetings, including 13 in-person meetings, negotiating and advocating for equality by design, deliberation and oversight principles to be incorporated into the development of harmonised technical standards, following Art. 40 of the EU AI Act. She has contributed 16 discussion and position pieces, including commentary on the developing draft. These negotiations are ongoing. |
| Year(s) Of Engagement Activity | 2024,2025 |
| Description | Participation in a Formal Working Group - Engagement and negotiation at CEN-CENELEC JTC-21 Working Group 3 |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Dr. Aaron Ceross attended and participated in 12 remote/hybrid meetings, negotiating and advocating for equality by design, deliberation and oversight principles to be incorporated into the development of harmonised technical standards, following Art. 40 of the EU AI Act. He has contributed a significant position piece on Assumption Logs in the development of a Bias technical standard. These negotiations are ongoing. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Participation in a Formal Working Group - Engagement and negotiation in CEN-CENELEC JTC-21 Working Group 4 |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Trish Shaw attended and participated in 47 remote/hybrid meetings, plus a further 12 in-person meetings, negotiating and advocating for equality by design, deliberation and oversight principles to be incorporated into the development of harmonised technical standards, following Art. 40 of the EU AI Act. She has contributed several significant position pieces on human oversight and transparency towards the development of an AI Trustworthiness technical standard. These negotiations are ongoing. |
| Year(s) Of Engagement Activity | 2024,2025 |
| Description | Participation in a Formal Working Group - Engaging and Negotiating in CEN-CENELEC JTC-21 Working Groups |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Dr. James MacLaren attended and participated in 20 remote/hybrid meetings, negotiating and advocating for equality by design, deliberation and oversight principles to be incorporated into the development of harmonised technical standards, following Art. 40 of the EU AI Act. He has contributed three pieces supporting the Risk Management and Trustworthiness technical standards, compiling use cases and offering analysis of the ECHR and the European Charter of Fundamental Rights. These negotiations are ongoing. |
| Year(s) Of Engagement Activity | 2024,2025 |
| Description | Participation in a Free Advice Sector workshop on AI, Law and Legal Training |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Third sector organisations |
| Results and Impact | Thirty people attended an online workshop to discuss the challenges and opportunities of GenAI in the free legal advice sector; the workshop findings are informing the design of online learning materials. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Participation in the RAi Coordinating Keystone Launch |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | Participation in the RAi Coordinating Keystone Launch; a networking opportunity. Responsible AI UK project KP0016. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://rai.ac.uk/events/rai-uk-coordinating-keystone-launch-event/ |
| Description | Participatory AI Research & Practice Symposium (PAIR) symposium event in Paris |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Third sector organisations |
| Results and Impact | Public event aimed at practitioners and policymakers. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://pairs25.notion.site |
| Description | Participatory AI Research & Practice Symposium (PAIR) symposium event online |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Third sector organisations |
| Results and Impact | Public event aimed at practitioners and policymakers |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.youtube.com/watch?v=v5eqMH-qzFE |
| Description | Participatory Harm Auditing Workbenches and Methodologies [PHAWM] Launch Event |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Industry/Business |
| Results and Impact | On 25 October 2024, we kicked off the PHAWM project with a Launch and Networking Event at the University of Strathclyde's Technology and Innovation Centre. The event brought together project researchers, partners, and stakeholders from across the UK and beyond. Professor Dame Muffy Calder set the tone with an inspiring welcome, drawing on her expertise as RAI UK Skills Pillar Chair and Vice-Principal of the University of Glasgow. Our project team then took centre stage, sharing insights into our key objectives and an overview of the first of the health and cultural heritage use cases that the project will explore. This was followed by an interactive Responsible Research and Innovation (RRI) activity, which others can also take part in online. Our approach to RRI has been inspired by the UKRI Trustworthy Autonomous Systems Hub's practice of building RRI into the core of activities. The afternoon finished with a networking session with our stakeholders, connecting with other RAi-funded projects, researchers (including early-career researchers and PhD students), and the wider AI ecosystem across the UK. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Participatory Harm Auditing Workbenches and Methodologies [PHAWM] Newsletter December 2024 |
| Form Of Engagement Activity | A magazine, newsletter or online publication |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | The PHAWM newsletter has two main purposes, firstly to keep our non-academic partners and stakeholders informed about what is going on in the project and secondly to engage with a wider audience who are interested in participatory auditing of AI. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Participatory Harm Auditing Workbenches and Methodologies [PHAWM] Website |
| Form Of Engagement Activity | Engagement focused website, blog or social media channel |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Industry/Business |
| Results and Impact | The PHAWM website provides an overview of the project and its goals, as well as raising awareness of the need to take a participatory approach to auditing AI tools. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://phawm.org/ |
| Description | Perspectives on AI Risks in Light of the UK's AI Safety Summit |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Monday, October 23 2023. In the early days of World War II, the British government repurposed mansions at Bletchley Park to house a large team of codebreakers. These individuals, mainly academics, used an early form of artificial intelligence to break German code and, in so doing, protect the British people. Some 80 years later, the UK government announced it would host the first international summit on AI Safety at Bletchley Park on November 1-2. UK officials stated that they had identified two very broad types of AI risks: misuse (for example, using AI to cause chaos or harm) and loss-of-control risks. Some NGOs criticised the planners for not focusing on more immediate risks to individuals. In this webinar, our three speakers will examine whether these are the most important and immediate risks, what risks should be the central concerns of policymakers, and what the best regulatory policies are to mitigate such risks. The moderator will discuss these issues for the first 30 minutes, and the next 30 will be open to audience questions. Speakers: Gina Neff, Professor and Director, Minderoo Centre for Technology and Democracy, University of Cambridge; Katie Shilton, Associate Professor of Information Science, University of Maryland; Tom Goldstein, Professor and Director of the Center for Machine Learning, University of Maryland. Moderator: Susan Ariel Aaronson, Research Professor and Director, Digital Trade and Data Governance Hub, GWU. |
| Year(s) Of Engagement Activity | 2023 |
| Description | Planning and Participation in Equinet's Coding Equality In the EU AI ACT: Equality Bodies Rising To the Challenge |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Third sector organisations |
| Results and Impact | From the website: This High-Level Conference took place online and in person in Brussels on 12 December 2024. The conference was organised by Equinet, with the support of Unia, the Swedish Equality Ombudsman, the Norwegian Equality and Anti-discrimination Ombud and Luminate. It was aimed at Equality Bodies, European law and policy makers, civil society organisations, experts and activists working on the implications of AI systems for equality and non-discrimination. Three members of our team participated - one as panel moderator and two as panellists, discussing 'How to use the Artificial Intelligence Act to investigate AI bias and discrimination: A guide for Equality Bodies' and 'Not one law's job: Leveraging synergies and building partnerships between equality law, data protection law and consumer law'. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Podcast contribution at 'Policy Pod' |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Dr Mestre and Public Policy Associate Abdullah Afzal were interviewed for the Policy Pod podcast, released by Public Policy Southampton, talking about responsible research and development of emerging technologies like biohybrid robots, AI for mental health, and distributed acoustic sensing for smart cities. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://open.spotify.com/episode/5hfGb5uYd5UURrv4nCJn6x?go=1&sp_cid=2508051b37359498fc5589b93ce302d0 |
| Description | Policy workshop on Responsible AI for Mental Health (London) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Dr Mestre organised a policy workshop in London in September 2024. This hybrid event brought together experts from academia, government, industry, and the third sector to discuss the responsible and ethical use of AI in mental health, aiming to inform future policy guidelines on the topic. It was co-organised by the University of Southampton and the Institute for Experiential AI at Northeastern University. The event had ~65 in-person attendees and more than 125 online participants from around the world, and featured a diverse lineup of 15 international keynote speakers and panellists across two thematic sessions: "Digital Mental Health" and "Responsible Use of AI in Mental Health". On the next day, September 13th, it continued with an invitation-only workshop, where experts discussed, through structured design-thinking activities, preliminary policy recommendations for the responsible application of AI in mental health. The event produced a white paper and a video recording on YouTube, directly influenced two POSTnotes (the white paper was cited as evidence and the organisers were named as contributors), and informed a policy brief currently being written. Given its success, a follow-up workshop was planned for April 2025 in Charlotte, North Carolina (USA). |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai4mh.com/events/london-workshop-sept-2024/ |
| Description | Presentation and Discussion for BEUC and ANEC |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Prof. Dr. Karen Yeung gave a presentation entitled 'Fundamental Rights and Standards: Where to Draw the Line?' and participated in an extended discussion. It was hosted by BEUC and ANEC and attracted policy-makers and civil society organisations interested in the developing standards work on AI. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Presentation and Talk for German Responsible Coding Organisation Civic Coding - "Civic Coding - Innovationsnetz KI für das Gemeinwohl" (7th June 2024) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | From the website (translated): In our second deep-dive session, entitled "Formal participation in AI standardisation: Existing formats at CEN/CENELEC and DIN/DKE", you can learn more about the traditional participation formats that standardisation provides. Together with experts from European and national standardisation bodies, we show how the responsible institutions and working groups of the European Committee for Standardisation/European Committee for Electrotechnical Standardisation (CEN/CENELEC), the German Institute for Standardisation (DIN) and the German Commission for Electrical, Electronic and Information Technologies (DKE) relate to one another. We also discuss the conditions under which members of civil society may actively participate and contribute, and where and how. This is complemented by a practical insight into current AI standardisation: What are the experiences of civil society actors in the current standardisation process? What challenges arise and what solutions exist? We discuss these and other questions with Christian Marian, the German representative of the electrical engineering industry in the standards organisation CENELEC, James MacLaren from the University of Birmingham, and Martin Uhlherr from the DIN Standards Committee on Information Technology and Applications. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.civic-coding.de/angebote/veranstaltungen/civic-coding-x-zvki-formelle-beteiligung-an-der... |
| Description | Presentation at International Symposium on Technology and Society (ISTAS) for IEEE |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Postgraduate students |
| Results and Impact | Hybrid presentation given by A. McStay on 'When is Deception OK? Ethical Considerations of Emulated Empathy in AI (IEEE P7014.1)' at the IEEE International Symposium on Technology and Society (ISTAS), Puebla, Mexico. In the room, an audience of 30+ made up largely of students (undergraduate and postgraduate) and academics, plus online attendance. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Presentation at workshop, University of Winchester |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | Local |
| Primary Audience | Professional Practitioners |
| Results and Impact | Alexander Laffer gave a presentation on 'Automating Empathy, Standards and responsible implementation of AI' to a mixed audience of professionals and academics. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Presentation on the ethics of AI in healthcare at the Digital Health Spanish Association |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | I took part in the annual meeting of the Digital Health society in Spain. The meeting was organised in collaboration with Ernst & Young. I presented my ethical model of the human role that can guarantee ethical AI in healthcare. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.youtube.com/live/crXvkYMHgzI?si=kgcWRXOzS3yzxzKY |
| Description | Prob Futures AI for intelligence, investigations and public protection |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Prob Futures AI for intelligence, investigations and public protection - networking event. AdSoLve partner Tom Sorell participated in the event. Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2025 |
| Description | Prob Futures webinar |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Tom Sorell, University of Warwick, AdSoLve partner, participated in the webinar. Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2024 |
| Description | Prof Liakata news article for Queen Mary University of London website, opinion piece. (January 2025). |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | A recent expert opinion piece published as a news story by Prof Liakata. Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.qmul.ac.uk/media/news/2025/science-and-engineering/se/professor-maria-liakata-how-to-mak... |
| Description | Prof. Liakata interview for BBC Radio 4 "Inside Science" on AI in Science: Promise and Peril |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | Prof. Liakata was interviewed for BBC Radio 4's "Inside Science" on AI in Science: Promise and Peril. Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.bbc.co.uk/programmes/m0028bpf |
| Description | Prof. Liakata quoted by AI NEWS (February 2025) |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | Prof. Liakata quoted by AI News (February 2025) in an article entitled 'AI Action Summit: Leaders call for unity and equitable development'. Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.artificialintelligence-news.com/news/ai-action-summit-leaders-call-for-unity-equitable-d... |
| Description | Project AIPAS Consortium |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Professional Practitioners |
| Results and Impact | AIPAS Consortium meeting held in December 2024. The consortium discussed current research findings and the strategic direction of the project. Much of the discussion focused on AI accountability and the development of the 12 Accountability Principles for a UK audience. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Project AIPAS Counter Terrorism Delphi workshop |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Delphi workshop with Counter Terrorism specialists. The purpose of the event was to understand the accountability requirements of AI within Counter Terrorism. The workshop also recorded data that was used to validate and develop accountability principles for AI. Outcomes included the submission of a paper to the ICEDEG 2025 conference; due to the reporting period, the notification of acceptance will fall outside the ResearchFish 2025 deadline. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Project AIPAS Delphi workshop with forensics specialists |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Delphi workshop with forensics specialists. The purpose of the event was to understand the accountability requirements of AI within forensics. The workshop also recorded data that was used to validate and develop accountability principles for AI. Outcomes included the submission of a paper to the ICEDEG 2025 conference; due to the reporting period, the notification of acceptance will fall outside the ResearchFish 2025 deadline. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Project AIPAS Serious Organised Crime and Child Sexual Exploitation Delphi Workshop |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Delphi workshop with Serious Organised Crime and Child Sexual Exploitation experts. The purpose of the event was to understand the accountability requirements of AI within these specialisations. The workshop also recorded data that was used to validate and develop accountability principles for AI. Outcomes included the submission of a paper to the ICEDEG 2025 conference; due to the reporting period, the notification of acceptance will fall outside the ResearchFish 2025 deadline. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Public Voices in AI - launch event |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | At this online launch event, attendees got a first look at how Public Voices in AI will bring public perspectives to the forefront of AI research, development, use and policy in the UK, along with information about the Public Voices in AI Fund, including how to apply. A central aspect of responsible AI research, use, development and policy is ensuring that it takes account of public hopes, concerns and experiences. Yet public voice is frequently missing from conversations about AI, an absence which inhibits progress in responsible AI. Addressing this gap is essential to ensure that AI is responsibly used, considers both harms and benefits, and works for everyone. Different groups benefit from and are affected by AI differently, and their hopes, concerns and experiences also vary. Because of structural inequities, some groups are more negatively impacted by AI uses than others. Public Voices in AI therefore centres those most impacted and underrepresented. One way Public Voices in AI will do this is by distributing up to £195,000 of its funding to support participatory projects led by and with negatively affected or underrepresented groups. The Public Voices in AI Fund launched on 31 May 2024, with a closing date for applications of 20 June 2024. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://digitalgood.net/public-voices-in-ai-launch-event/ |
| Description | Public Voices in AI: introduction to the project & resource co-design (webinar) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Led by Helen Kennedy, Director of the Digital Good Network, University of Sheffield and Mhairi Aitken, Alan Turing Institute, this online workshop will introduce the RAi UK satellite project 'Public Voices in AI' which aims to centre public voices in AI research, development and policy. On Public Voices in AI, we are evaluating current public voice methods, their impacts and challenges, and identifying gaps where further support or resources may be needed. Through this research we will develop a set of resources to support wider best practice in this important area. In the workshop, we invite you to join us in co-designing resources to support AI researchers, developers or policy-makers to involve the public in their work. We will share research findings from our work examining current approaches, and discuss what further resources may be beneficial to advance wider involvement of diverse members of the public across the AI innovation lifecycle. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://rai.ac.uk/events/public-voices-in-ai-introduction-to-the-project-resource-co-design/ |
| Description | Public Voices in AI: introduction to the project & resource co-design (webinar) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Led by Helen Kennedy, Director of the Digital Good Network, University of Sheffield and Mhairi Aitken, Alan Turing Institute, an online workshop was organised to introduce the RAi UK satellite project 'Public Voices in AI', which aims to centre public voices in AI research, development and policy. The workshop invited participants to co-design resources to support AI researchers, developers or policy-makers in involving the public in their work. Research findings were also shared from Public Voices in AI's work examining current approaches, and participants discussed what further resources may be beneficial to advance wider involvement of diverse members of the public across the AI innovation lifecycle. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://rai.ac.uk/events/public-voices-in-ai-introduction-to-the-project-resource-co-design/ |
| Description | Public-facing talk - Science Club: An Introduction to Intimacy & AI |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | Local |
| Primary Audience | Public/other audiences |
| Results and Impact | Science Club is an event run by Norwich theatre company Curious Directive. A scientist gives a talk about their work and the theatre company along with the audience devise ideas for plays and activities relating to the concepts they've just learned about. Some of these ideas go on to become events in their own right. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.curiousdirective.com/science-club |
| Description | RAI Press Release Dec 23 - How are Responsible AI UK connecting the AI ecosystem both nationally and internationally? |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | Press release published by collaborator universities. Increased engagement was reported across all channels (social media/website etc.) |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://www.ecs.soton.ac.uk/news/7138 |
| Description | RAI UK Responsible AI for Mental Health Workshop, London |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Policymakers/politicians |
| Results and Impact | RAI UK Responsible AI for Mental Health Workshop, London. 250 participants from the UK, EU and US, including many UK policymakers. The workshop triggered extensive discussion, including a half-day policy workshop, which led to a POSTnote (for policy impact). We will run a second workshop in this series in the USA in April (hosted by Northeastern University, Charlotte, USA). |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai4mh.com/london-workshop/ |
| Description | RAI UK Webinar |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | RAI UK webinar to present Impact Project RAKE to an audience of partners |
| Year(s) Of Engagement Activity | 2024 |
| Description | RAI UK Webinar: In Conversation with: Professor Simone Stumpf. RAi Keystone Project: Participatory Harm Auditing Workbenches and Methodologies [PHAWM] |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | Professor Simone Stumpf, University of Glasgow and PHAWM PI presented an overview of the project. Potential outcomes of this event included networking with potential users of PHAWM tools/training/certification, and increased public awareness of AI bias/harms and the importance of participatory auditing. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://events.teams.microsoft.com/event/5c3144e1-5c61-4e2d-83ea-55690aefb932@8370cf14-16f3-4c16-b83... |
| Description | RAI UK integrator sandpit |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Integrator sandpit for academics. Josh Kelsall, AdSoLve researcher at the University of Warwick participated in the event. Funding from Responsible Ai UK (KP0016). |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://art-i-project.github.io/ |
| Description | RAI UK presentation at TeenTech |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Supporters |
| Results and Impact | Presenting Responsible AI at TeenTech |
| Year(s) Of Engagement Activity | 2023 |
| Description | RAKE & Newton workshop |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | The workshop was designed to advance the industry collaboration with the partner and to help develop a co-designed framework for responsible AI within their organisation |
| Year(s) Of Engagement Activity | 2024 |
| Description | RAi (Responsible AI) and BRAID (Bridging Responsible AI Divides) event on 'Diversity of Methods for AI' in Edinburgh |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | 15-minute talk followed by a roundtable discussion and Q&A session considering the importance of diversity of methods when studying or advancing AI; see details attached |
| Year(s) Of Engagement Activity | 2025 |
| Description | RAi Concept Workshops - International Workshop on Guiding Concepts for Responsible AI |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | The event featured presentations from participants from academia and industry from the UK, Netherlands, and Belgium. The outputs include a map of key themes underpinning Responsible AI, which will feed into a future publication. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://sites.google.com/view/rai-concepts-workshop/home |
| Description | RAi Coordinating Keystone Launch |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | An agile, responsive and creative programme of research, RAi UK's Co-ordinating Keystone complements and enhances the wider portfolio of RAi UK funded research. Led by our talented Early Career Researchers, these projects are co-created with stakeholders and refined through an iterative development process to ensure maximum impact and relevance. Formed using the EPSRC Sandpit mechanism, each project is a multidisciplinary, cross-institutional collaboration that will tackle emerging concerns linked to generative and other forms of AI currently being built and deployed across society. Emerging projects build on RAi UK priorities and capitalise on our core research strengths including Human-machine teaming, Science and Technology Studies, Digital Humanities, RRI, Human Robot Interaction, Multiagent Systems, Robotics, Policy, Health, Law & Governance, and Adoption of AI. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://rai.ac.uk/events/rai-uk-coordinating-keystone-launch-event/ |
| Description | RAi International Sandpit - India |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | The Responsible Ai International Sandpit is co-hosted by RAi UK and the Centre for Responsible AI (CeRAI). This event aims to foster international engagement and address critical challenges in the field of artificial intelligence. The sandpit is an intensive, interactive, inclusive and free-thinking environment, where a diverse group of participants from a range of disciplines and backgrounds will work together for five days. The goal is to immerse participants in collaborative thinking processes and idea sharing in order to construct innovative approaches. The sandpit aims to co-create 12-month research projects with a view to establishing new relationships and addressing the challenges of building, operating, and regulating artificial intelligence across different countries and cultures. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai.ac.uk/events/responsible-ai-international-sandpit/ |
| Description | RAi Keystone Projects News Release |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | Huge funding boost for Responsible Ai - reported by Daily Echo newspaper. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.dailyecho.co.uk/news/24308146.huge-funding-boost-ai-research-southampton-team/ |
| Description | RAi Policy Impact Workshop Series |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | A series of policy impact workshops, each involving around 20 diverse stakeholders, are planned around emerging challenges of responsible AI. The first of these "Responsible AI to enable flourishing in and by Low and Middle Income Countries" was held in January 2025. It is envisaged that each workshop will yield a white paper or other framework. |
| Year(s) Of Engagement Activity | 2025 |
| Description | RAi Skills Workshop |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | This provided an opportunity for the teams to present their projects and plans for the upcoming year. There were also panel discussions about the research, their development of open educational resources and critical challenges related to Responsible AI skills, followed by a drinks reception. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai.ac.uk/events/rai-skills-projects-workshop/ |
| Description | RAi UK AHM |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Participation and networking by Rob Procter, University of Warwick and Maria Liakata. Prof Liakata was invited to give a talk about the project. Responsible Ai UK project KP0016. |
| Year(s) Of Engagement Activity | 2024 |
| Description | RAi UK All-Hands Meeting 9-10 September 2024 |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | RAi UK All-Hands Meeting - sharing details about the Public Voices in AI project. Chief Scientific Advisors from various Cabinet Offices in attendance. |
| Year(s) Of Engagement Activity | 2024 |
| Description | RAi UK Co-ordinating Keystone Launch Event |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Other audiences |
| Results and Impact | RAi UK Co-ordinating Keystone Launch Event - networking. Several members of the AdSoLve consortium participated (QMUL/University of Nottingham/University of Warwick). Funding from Responsible Ai UK (KP0016) |
| Year(s) Of Engagement Activity | 2024 |
| Description | RAi UK Impact Accelerator and International Partnerships Networking Event |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | A full day event to promote the outcomes and achievements of the Impact Accelerator and International Partnerships projects among the cohort of IA/IP project teams, RAi UK researchers, leadership, UKRI and selected partner organisations. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://rai.ac.uk/events/rai-uk-impact-accelerator-and-international-partnership-networking-event/ |
| Description | RAi UK Keystone Roundtable |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Networking event with the opportunity to represent AdSoLve and talk about the project to other projects. Rob Procter, University of Warwick and AdSoLve partner participated. Responsible Ai UK project KP0016. |
| Year(s) Of Engagement Activity | 2024 |
| Description | RAi UK webinar In Conversation with: Professor Maria Liakata |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | A RAi UK 1-hour webinar, with Prof M Liakata, about the RAi UK funded Keystone Project "AdSoLve - Addressing socio-technical limitations of LLMs for medical and social computing". |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai.ac.uk/events/in-conversation-with-professor-maria-liakata/ |
| Description | RAi at UoN Network Panel and Network Lunch |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Local |
| Primary Audience | Other audiences |
| Results and Impact | Please join us for a panel session followed by a networking lunch to discuss "What is Responsible AI?". The panel will be chaired by Dr Aislinn Gomez Bergin, RAI UK. Dr Oliver Butler is an Assistant Professor in Law. He convenes the Law and Technology Discussion Group at the School of Law. The group seeks external speakers to present on the new challenges presented by emerging technology for existing approaches to law, regulation and governance. It is deliberately wide and interdisciplinary in its membership and theme, to draw together researchers across law and the wider University. Dr Helena Webb is a socio-technical researcher and Transitional Assistant Professor based in the School of Computer Science. Her research interests lie in exploring the day-to-day lived experience of technology and bringing insights from granular-level qualitative analysis into understandings of innovation. She draws on Responsible Research and Innovation as a set of good practices that benefits the research process. She leads a RAI UK International Partnership project, a UKRI TAS Hub project on the police use of mobile phone data, and a UoN Horizon Digital Economy project on responsible research into online communities. She is also a co-lead of CHAIR, a UoN initiative that brings together researchers across the university with an interest in human-AI interaction. CHAIR will be running events throughout the rest of the academic year, including a summer symposium. Dr Damian Okaibedi Eke is a Transitional Assistant Professor at the School of Computer Science, University of Nottingham, UK. He is representing the UKRI research programme on Responsible AI, UK (RAI-UK). His research interests cover critical philosophical issues at the intersection of Emerging Technologies, Data and Society including but not limited to Responsible AI, Responsible Innovation, ICT4D, Responsible Data Governance and Neuroethics. 
Dr Kyle Harrington is an Assistant Professor in Human Factors Engineering, and the Chair of the "Human Rights in the Digital Age Network" here at UoN. Kyle has an interdisciplinary background in Artificial Intelligence, Psychology and Philosophy and experience in both automotive and healthcare research. Kyle's research projects have recently focused on cognitive screening and rehabilitation technologies and assistive robotics. He also teaches advanced methods in Human Factors and Human Computer Interaction. Kyle has an interest in the broader, societal, political and ideological implications of technology adoption. |
| Year(s) Of Engagement Activity | 2024 |
| Description | RRI and EDI in RAI UK |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Other audiences |
| Results and Impact | Seminar on the RRI and EDI approach taken in RAI UK Impact project RAKE |
| Year(s) Of Engagement Activity | 2023 |
| Description | RTAI Workshop - India |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Responsible and Trustworthy AI Workshop, co-organised by Responsible AI UK, CeRAI, and IIIT Delhi. The event was held as part of the UK-India Technology Security Initiative. This workshop brought together leading researchers and innovators from India and the UK to hear from world-leading experts on Responsible and Trustworthy AI. The programme included perspectives from the UN AI Advisory Body. Talks covered the ethical, social, and technical challenges in the development and deployment of artificial intelligence. The event aimed to foster interdisciplinary discussions and collaborations to advance the field of responsible AI. Participants included members of the RAi and CeRAI programmes, multiple Indian universities, government officials, and industry partners. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai.ac.uk/events/responsible-and-trustworthy-ai-workshop/ |
| Description | RTAU - ICO - EHRC 'Fairness Innovation Challenge' (GN) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | The Responsible Technology Adoption Unit, the Information Commissioner's Office, and the Equality and Human Rights Commission convened this event, at which Gina Neff gave a keynote on the importance and challenges of responsible AI and attendees included the UK Minister for AI, and key DSIT, ICO and EHRC officials along with representatives of the winning fairness challenge projects. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Refreshing our National AI Strategy (GN) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Gina participated in a workshop hosted by AI@Cam on the National AI Strategy. This was an opportunity to reconnect many of the key players from the UK's former AI Council and point forward to what policy for the new government should be. Gina participated in a keynote panel launching the event. About 50 people attended, including members of DSIT, DCMS, the Financial Conduct Authority, the Competition and Markets Authority, Nesta (the UK innovation foundation), the Ada Lovelace Institute and other key stakeholders |
| Year(s) Of Engagement Activity | 2024 |
| Description | Reimagining the Design of Mobile Robotic Telepresence: Reflections from a Hybrid Design Workshop |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | Poster presentation and panel discussion at the TAS Symposium 2024. Mobile robotic telepresence systems have been around for more than a decade, promising to improve on traditional video conferencing by enabling remote movement, and more recently, providing autonomous features for navigation, yet their use in the real world remains limited and infrequent. We share reflections from running a hybrid design workshop on telepresence robotics (at an academic conference) focused on re-imagining the design and use of telepresence robots, where mobility, as the main affordance of these devices, could truly provide value. We describe our hybrid design workshop and reflect on challenges encountered and learning outcomes. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://symposium.tas.ac.uk/2024/program/ |
| Description | Responsible AI Music Artistic Mini-Projects |
| Form Of Engagement Activity | Engagement focused website, blog or social media channel |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Three artistic mini-projects were funded to create impact and interest in Responsible AI (RAI) concerns of bias in AI models. These mini-projects use AI tools such as low-resource AI models with small datasets to showcase the challenges of bias in AI and how RAI techniques can be used to address them. These mini-projects are part of a 12-month project, "Responsible AI international community to reduce bias in AI music generation and analysis", which built an international community to address Responsible AI (RAI) challenges of bias in AI music generation and analysis. This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/Y009800/1], through funding from Responsible Ai UK. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://music-rai.github.io/projects/ |
| Description | Responsible AI Music Artistic Mini-Projects Launch Event and Artist Talks |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | The Responsible AI Music Launch & Artist Talks addressed the question of how AI can be used ethically in music creation. Participants were invited to an evening of discussion, exploration, and live performances, with the opportunity to discover groundbreaking AI music projects, hear from pioneering artists and researchers, and experience AI-driven creative performances. Part of the project "Responsible AI international community to reduce bias in AI music generation and analysis". This work was supported by the Engineering and Physical Sciences Research Council [grant number EP/Y009800/1], through funding from Responsible Ai UK. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://music-rai.github.io/projects/ |
| Description | Responsible AI Music Launch and Artist Talks |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | The launch event showcased the results of UAL CCI's open call for speculative artistic mini-projects. This initiative asked three artists to explore how AI can be used to create music in genres currently marginalised by mainstream AI models. The event featured listening sessions and presentations from the artists who discussed their projects and the processes behind them. Each of these works is a response to the challenges of bias in AI models, offering new perspectives on how technology can reflect and celebrate diversity in music. The presentations were followed by a Q&A session with the artists and live performances of musicians using AI tools in their practice. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.youtube.com/watch?v=egzcbKAarZg |
| Description | Responsible AI Town Halls 2023 (London, Belfast, Glasgow and Cardiff) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Attending a RAI UK Town hall provided the opportunity to help shape the future of Responsible AI in the UK. Stakeholders and academics of all disciplines were invited to hear more about the programme, opportunities to engage and network with the team and each other. Feedback from these events directly influenced the development of the Keystone Projects call, which was launched in December 2023. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://www.rai.ac.uk/events |
| Description | Responsible AI Webinar Series |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | The initial webinar series presented our successful Impact Accelerator partners and International Partnerships within the RAI UK research programme. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://www.rai.ac.uk/events |
| Description | Responsible AI for Death and Dying |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | This was an invited talk for the research group Responsible Digital Futures. It gathered interest from the audience, and further communications took place to organise a special issue of the Journal of Responsible Technology on Responsible AI for Death and Dying, as well as a conference in 2026. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.responsible-digital-futures.org/home |
| Description | Responsible AI for Mental Health Workshop (EP) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | This was a RAi UK supported initiative that connected experts in mental health, AI, public health and psychology from across academia, government, industry and the third sector. The workshop discussed what responsible use of AI means for the domain of mental health, identifying key challenges and signposting promising future directions of travel. This was the first of a series of events of the RAi UK-funded International Partnership project RAI4MH between the University of Southampton and the Institute for Experiential AI @ Northeastern University. The collective discussions and feedback from the day will inform policy guidelines to help ensure that the uptake of AI in the mental health sector is done ethically and responsibly. Event organised by Rafael Mestre (University of Southampton), Annika M. Schoene (Northeastern University) and Stuart E. Middleton (University of Southampton), with funding from RAi UK and support from the Centre for Machine Intelligence (CMI) and Centre for Democratic Futures (CDF). |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai.ac.uk/ |
| Description | Responsible AI: Ensuring Ethical Governance and Cultural Sensitivity (SK) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | Professor Sana Khareghani was among distinguished speakers at this event organised by WoMENAITLondon. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Responsible Innovation into action: RAKE engagement with the business sector (by Prof Rob Procter and Dr Virginia Portillo) |
| Form Of Engagement Activity | Engagement focused website, blog or social media channel |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Reflective blog from a RAi UK Impact Accelerator project: RAKE (Responsible Innovation Advantage in Knowledge Exchange) engaging with the commercial sector (an AI software company). The blog summarises RAKE's work with software project delivery managers (DMs) in a digital services company to explore the use of Responsible Innovation (RI) Prompts and Practice cards (RI Cards) (Portillo et al., 2023) to support integrating RI within their work. This tool has been welcomed by the DMs and the company's Innovation team. Interest in co-creating open access material(s) to support RI practice within the commercial sector has been discussed between the research and commercial teams. Dr Portillo was invited to present this work at the first Responsible Innovation Conference focused on the commercial sector, organised in Belfast (Nov. 2024) https://aisling-events.com/events/responsible-innovation-conference/birmingham-speakers/ |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://rai.ac.uk/responsible-innovation-into-action-rake-engagement-with-the-business-sector/ |
| Description | Responsible Internet Futures workshop, Texas September 2024 (TAS Hub - Good Systems Strategic Partnership) project |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Postgraduate students |
| Results and Impact | The workshop brought together interested participants to suggest and debate visions for responsible Internet futures. It combined invited speaker presentations with discussion sessions and sought to co-create an initial pathway towards responsibility as part of these discussions. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://symposium.tas.ac.uk/2024/program/workshops/ |
| Description | Responsible Research and Innovation: A Framework for Ethical Considerations in fNIRS and AI |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | I delivered an invited talk and workshop at the University of Cambridge to the Brain Research Team at the School of Psychology about the application of the RRI framework in fNIRS and AI research. The aim of the event was to introduce the project, its objectives, and aims to the team. Additionally, I conducted an RRI workshop using RRI prompt cards to gather stakeholders' perceptions about responsible and ethical approaches to the application of AI in fNIRS research, including data collection, analysis, storage, and sharing. The outcome of the project informs the objectives of WP2 and will be published in a journal paper (writing in progress). |
| Year(s) Of Engagement Activity | 2024 |
| Description | Responsible Research and Innovation: A Framework for Ethical Considerations in fNIRS and AI- Poster Presentation at fNIRS2024 |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Poster presentation at the international fNIRS2024 conference. Introducing the RRI framework to fNIRS community (researchers, academics, industry, students and health professionals). The aim was to introduce responsible research and innovation approaches for fNIRS and AI for data collection, analysis, storage and sharing and to introduce RAI UK aims and objectives. We established networks and connections with other universities and industry partners. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Rethink: Is the internet getting worse? (GN) |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | In this episode, we look at "Enshittification", or to put it more politely, the problem of internet platform decay. Facebook used to be about posts from your friends, but its feed now also includes groups, adverts, reels, and threads posts. Trying to work out if the Amazon product you want is any good can be tricky, because sellers can pay for their product to appear higher in your list of results. Search engines are not immune; German researchers have found that Google, Bing and DuckDuckGo are prone to spam marketing, making it more difficult to find what you want. There's no ill-intent behind this: platform decay is a side-effect of the way these businesses work. So what can governments and individuals do to try to get a better internet for everyone? Presenter: Ben Ansell. Producer: Ravi Naik. Editor: Clare Fordham. Contributors: Cory Doctorow, visiting Professor of Computer Science at the Open University and co-founder of the UK Open Rights Group; Professor Gina Neff, RAi Deputy CEO and Executive Director of the Minderoo Centre for Technology & Democracy at the University of Cambridge; Marie Le Conte, political journalist and author of the book Escape, about the rise and demise of the internet; Dr Cristina Caffarra, competition economist and former anti-trust consultant. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.bbc.co.uk/programmes/m0022z9r? |
| Description | Rethink: is big tech stealing your life? (JS) |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | With contributions from Prof. Jack Stilgoe, the programme discusses the evolving relationship between the public and big tech companies, particularly how AI's growing reliance on data has sparked legal and privacy concerns. He highlights the need for effective governance of emerging technologies to protect individuals and creators from exploitation. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.bbc.co.uk/programmes/m0027d7n |
| Description | Robopragmatics international workshop |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Despite impressive technological developments, the social capabilities of robots remain remarkably limited. In this workshop our goal is to respecify human-robot interaction and contribute an EMCA perspective to robotics. During the workshop we aim to develop conceptual directions for EMCA work around robots and to explore the unique contributions that a focus on moment-by-moment interaction can make to robotics. The costs for the workshop as well as accommodation in Nottingham are covered by Responsible AI UK. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://1drv.ms/w/s!ApPvsxJ3r3C9hwnPkWL1xmUgPThA |
| Description | Robopragmatics workshop |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | Robopragmatics - Guiding Robot Designers towards Understanding People. The goal of the workshop was to bring together EMCA (ethnomethodology and conversation analysis) researchers who are interested in studying human-robot interaction and who want to make insights on human social interaction accessible to robot designers. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Roundtable on Generative AI in Writing and Publishing |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | The roundtable on Generative AI in Writing and Publishing was held on 19 Sept. |
| Year(s) Of Engagement Activity | 2023 |
| Description | Roundtable on Regulation of Immersive Technology for Mental Health |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Roundtable hosted by HIN South London with policymakers to discuss regulation of XR in mental health. Aislinn Bergin, project partner, from the University of Nottingham, participated. Responsible Ai UK project KP0016. |
| Year(s) Of Engagement Activity | 2025 |
| Description | Royal Society / Chinese Academy of Sciences, AI Ethics Workshop on Operationalising AI Safety (GN) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | This event brought together the Royal Society with its Chinese equivalent on AI Safety Research. Gina spoke as part of a set of talks on the theme of socio-technical approaches to AI safety. The workshop included 15 guests from China and 15 high-level invitees from the UK. Gina spoke on a panel with Ada Lovelace's Andrew Strait and the co-director of the UK AI Safety Institute. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Runnymede Trust Research Group Talk: Introducing Participatory Harm Auditing Workbenches and Methodologies [PHAWM] |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Third sector organisations |
| Results and Impact | This meeting was about co-design of responsible AI. A project overview was presented and this was an opportunity to network with potential users of PHAWM tools / training / certification. |
| Year(s) Of Engagement Activity | 2024 |
| Description | SCVO The Gathering, "putting people and values at the heart of AI" panel: The Participatory Harm Auditing Workbenches and Methodologies [PHAWM] |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Third sector organisations |
| Results and Impact | Presented a project overview; this was a networking opportunity with potential users of PHAWM tools / training / certification. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://scvo.scot/the-gathering/events/a1vP100000DIbKzIAL/putting-people-and-values-at-the-heart-of-... |
| Description | School visit and guest lecture at UTC Sheffield |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Local |
| Primary Audience | Schools |
| Results and Impact | School visit and guest lecture at UTC Sheffield. Funding from Responsible Ai UK (KP0016). |
| Year(s) Of Engagement Activity | 2025 |
| Description | ScotSoft Conference: Is this AI good (enough)? Assessing AI using Participatory Harm Auditing Workbenches and Methodologies [PHAWM] |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | ScotSoft is a full day conference that aims to enhance knowledge and facilitate networking and collaboration. This event is aimed at technical professionals and leaders in the sector. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.scotlandis.com/scotsoft-2024/ |
| Description | Seminar at Ulster University |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Postgraduate students |
| Results and Impact | Ben Bland was invited to give a seminar on 'Machines with Feelings - Emotions and Empathy in AI - Opportunities, Risks and Ethics' at Ulster University |
| Year(s) Of Engagement Activity | 2024 |
| Description | Seminar on the Ethics of medical AI at King's College London |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Postgraduate students |
| Results and Impact | A two-hour session for MSc students about the ethical challenges of AI. I introduced several challenges and my own research in the field, and together we discussed different case studies. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Sheffield Policy campus |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | The Sheffield Policy Campus (SPC), established in June 2023, aims to build a critical mass of policy makers across government departments in Sheffield and South Yorkshire, to drive better policymaking and support opportunity in the region. It is a partnership, collectively endorsed by major institutions across the city and region. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Soapbox Science Nottingham |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | Local |
| Primary Audience | Public/other audiences |
| Results and Impact | Soapbox Science is a novel public outreach platform promoting women scientists and the science they do. The events started in London in 2011 and have since spread globally to showcase 2,500 scientists who have presented to over 300,000 attendees and counting! When Soapbox Science arrives, it transforms the venue into an arena for public learning and scientific debate. Each event showcases 12 inspirational women speakers from the world of science, technology, engineering, medicine, and maths. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.nottingham.ac.uk/news/women-scientists-get-on-their-soapboxes-to-inspire-next-generation |
| Description | StoryFutures Women in Innovation Talk - Dr Aislinn Bergin. |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Other audiences |
| Results and Impact | Invited talk to StoryFutures Academy at Royal Holloway to discuss my research in digital mental health |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://storyfutures.com/ |
| Description | Street-based outreach |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Local |
| Primary Audience | Professional Practitioners |
| Results and Impact | Participants took flyers and refreshments to the streets to engage with fellow riders and invite them to complete the survey and get involved with the organisation |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://x.com/workobservatory/status/1861369327930196300 |
| Description | Sustainable AI Report (SDR) |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | Gopal was interviewed as part of a piece by ITV News on the tensions between the energy used by data centres and our Net Zero commitments. This piece was scheduled to be shown on ITV on Monday 10th Feb, following the publication of the RAEng report. |
| Year(s) Of Engagement Activity | 2025 |
| Description | TAS 2024 Showcase Demo on Dynamically Adaptive Human-Swarm Interaction |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | We presented a demo of our research on dynamically adaptive human-swarm interaction at the Trustworthy Autonomous Systems (TAS) 2024 Showcase Event held in London, United Kingdom from the 5th - 6th of March 2024. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://uos-haris.online/demo/tas2024/ |
| Description | TV interview, roundtable show on the TRT World News channel, discussing LLMs and language learning, Prof Liakata |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Public/other audiences |
| Results and Impact | Prof Liakata featured on Roundtable, a TV discussion show on the Turkish channel TRT World News, discussing LLMs and language learning. Funding from Responsible Ai UK (KP0016). |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.youtube.com/watch?v=M7RpglNB9h4 |
| Description | Talk at IEEE International Conference on Robotics and Automation (ICRA 2024) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Ben Bland gave a talk at IEEE International Conference on Robotics and Automation (ICRA 2024) on "Diversity and International Perspectives - standards and guidelines" |
| Year(s) Of Engagement Activity | 2024 |
| URL | http://ieee-icra.org |
| Description | Talk at UCL Bioethics society on the ethics of medical AI |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Postgraduate students |
| Results and Impact | I gave an hour-long seminar introducing the main challenges the healthcare sector faces in ethically integrating AI. I gave a demo of an AI assistant and discussed the related ethical challenges with the audience. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://x.com/ucl_bioethics/status/1758181679850574240?s=20 |
| Description | Talk at a Symposium on Using digital methods to enable digital co-production with young people, Ireland |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Talk at the ISRII Conference. With Aislinn Bergin, AdSoLve partner, University of Nottingham. Responsible Ai UK project KP0016. |
| Year(s) Of Engagement Activity | 2024 |
| URL | http://www.isrii.org |
| Description | Talk for BRAID at University of Edinburgh |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Other audiences |
| Results and Impact | Talk given by A. McStay on "Moving fast and breaking people? Empathic AI companions and UK citizen perspectives" for BRAID at the University of Edinburgh. The talk examines the ethical challenges and societal implications of empathic AI companions, drawing on UK public attitudes and civil lawsuits against Character.ai. The lawsuit highlights critical design flaws, inadequate safeguards, and ethical dilemmas, especially the blurred boundaries between reality and fiction. Survey findings reveal demographic divides in familiarity and usage, but also shared concerns about privacy, emotional dependency, and the appropriateness of AI companions for children and older adults. While respondents recognise benefits such as reducing loneliness and aiding education, anthropomorphic design elements evoke mixed reactions, raising ethical questions about simulated emotion and inappropriate user deception. The talk advocates for age-appropriate design and stronger regulatory frameworks, emphasising the need for balanced policies to protect vulnerable populations while fostering creativity and responsible innovation. Actionable recommendations aim to guide policymakers, industry leaders, and scholars in addressing the ethical complexities of this emerging digital technology. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.designinformatics.org/event/braid-idi-seminar-andrew-mcstay/ |
| Description | Talk given at China University of Politics and Law, Beijing |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Talk given by A. McStay on "Automating Empathy: Developing Cross-Cultural Soft Law Standards for Artificial Partners based on Foundational AI" at China University of Politics and Law, Beijing, involving a mixed audience of politicians, academics, students and those in industry. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Talk given at FENS-Chen Neuroscience Summer School, University of Lausanne |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Postgraduate students |
| Results and Impact | A. McStay gave a talk on the topic of "Automating Empathy, Establishing Standards and Good Governance" at the FENS-Chen Neuroscience Summer School, University of Lausanne. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.fens.org/wp-content/uploads/2023/11/FENS-CHEN-NeuroLeman-Program2024.pdf |
| Description | Talk given at International Society for Research on Emotion (ISRE) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | Talk given by A. McStay and B. Bland on "Developing Ethical Standards for Emulated Empathy (lessons learned)" at 2024 Conference of the International Society for Research on Emotion (ISRE) |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.isre2024.org |
| Description | Talk given at University of Sussex |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | Talk given by A. McStay on "Moving fast and breaking people? Empathic AI companions and UK citizen perspectives" at the University of Sussex. The talk examines the ethical challenges and societal implications of empathic AI companions, drawing on UK public attitudes and civil lawsuits against Character.ai. The lawsuit highlights critical design flaws, inadequate safeguards, and ethical dilemmas, especially the blurred boundaries between reality and fiction. Survey findings reveal demographic divides in familiarity and usage, but also shared concerns about privacy, emotional dependency, and the appropriateness of AI companions for children and older adults. While respondents recognise benefits such as reducing loneliness and aiding education, anthropomorphic design elements evoke mixed reactions, raising ethical questions about simulated emotion and inappropriate user deception. The talk advocates for age-appropriate design and stronger regulatory frameworks, emphasising the need for balanced policies to protect vulnerable populations while fostering creativity and responsible innovation. Actionable recommendations aim to guide policymakers, standards development, industry leaders, and scholars in addressing the ethical complexities of this emerging digital technology. |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.sussex.ac.uk/broadcast/read/67067 |
| Description | Talk given for Infosys and British High Commission, Bengaluru |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Talk by A. McStay on 'Soft Law Standards for Technologies that Gauge Intimate Life' given to professionals, those in the tech industry and policymakers, discussing the work of the project |
| Year(s) Of Engagement Activity | 2024 |
| Description | Talk to 800+ A-level students on Responsible AI |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Schools |
| Results and Impact | I gave a talk on AI in our everyday lives for Computer Science in Action - a day of subject specific talks for A-level, IB, BTEC Level 3 and T Level computer science students. Students were engaged and interacted by asking questions via Slido. Teachers later reported back that the students found it interesting and that some of the ideas were new to them. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://educationinaction.org.uk/study-day/computer-science-in-action-27-11-2024/ |
| Description | Tea and Talk Tech |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Third sector organisations |
| Results and Impact | Invited talk to present on digital mental health for charity |
| Year(s) Of Engagement Activity | 2024 |
| Description | Tech Solent Event (Round Table Discussion) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Industry/Business |
| Results and Impact | This Tech Solent event was hosted at the PwC office in Southampton. We were invited to talk about our work in the areas of responsible AI. We presented some research on human-swarm teaming. We led a roundtable discussion session to gather the attendees' challenges with the regulation and adoption of artificial intelligence in their businesses. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://www.techsolent.org/event-details/techsolent-december-event-with-rai-uk |
| Description | Tech Wales Week 2023 |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Responsible AI participation in the panel - Building Responsible and Trustworthy AI through International Collaboration: A Global Perspective |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://www.walestechweek.com/ |
| Description | Tech that Liberates: Embedding AI in Public Service Reform (GN) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Gina Neff spoke on a panel at a Demos event, marking the launch of their 'Tech that Liberates' report, which focuses on public sector tech use. Co-panellists included Hetan Shah (Chief Executive of the British Academy) and Dan Aldridge MP. Gina's comments focused on the need to push back on big tech narratives that present tech as a "magical solution", and the fact that tech adoption takes work. |
| Year(s) Of Engagement Activity | 2024 |
| Description | The Artificial Human - Why can't Ai drive me home yet? - BBC Sounds |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | Show about the practical, legal, ethical and technical requirements that autonomous vehicles will need to meet in order to function, featuring RAi's Professor Jack Stilgoe among others. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.bbc.co.uk/sounds/play/m00202gr |
| Description | The Ethics of AI |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Joshua Krook presented 'The Ethics of AI,' at Stanton Library North Sydney (19 March 2025). |
| Year(s) Of Engagement Activity | 2025 |
| URL | https://www.northsydney.nsw.gov.au/events/event/757/the-ethics-of-ai |
| Description | The Ethics of Human-AI relationships (IDEA workshop, Leeds) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Professional Practitioners |
| Results and Impact | This was a workshop organised by IDEA and the Centre for Love, Sex and Relationships in Leeds. The event included a number of talks offering insights about the kinds of relationships humans can form with a range of sophisticated social AI systems, including digital duplicates, deathbots, social/sex robots, AI therapists, VR avatars, and companion chatbots. The goal of the workshop was to draw conclusions about the extent to which these kinds of relationships can play a role in the good human life. |
| Year(s) Of Engagement Activity | 2025 |
| Description | The Impact of Ethics on S&T R&D, presentation on the topic of 'Are ethics a barrier to tech adoption' |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Invited talk to UKIC S&T staff 5 November 2024 - Do Different Event: The Impact of Ethics on S&T R&D, on the topic of: Are ethics a barrier to tech adoption - presentation from Professor Marion Oswald (Northumbria University) |
| Year(s) Of Engagement Activity | 2024 |
| Description | The Paradigm Shift is here: Using AI to improve teaching and learning |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Prof. John Domingue participated in a panel discussion on how Generative AI is supporting both teachers and students, while putting forward a critical analysis to evaluate the actual pedagogical gains and the quality of the learning experience augmented by AI. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://oeb.global/conference#agenda |
| Description | The Seventh Workshop on Fact Extraction and Verification (FEVER 2024) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Shared task focused on efficient, reproducible and open-source approaches to automated fact-checking. Michael Schlichtkrull (QMUL, AdSoLve project team) participated in the workshop. Funding from Responsible Ai UK (KP0016). |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://fever.ai/ |
| Description | The UK Science Comedy festival- Science Showoff |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | The UK's First Science Comedy Festival! An all-day science comedy show and a chance to fill your brains with SIX WHOLE HOURS OF HILARIOUS NERDY PROGRAMMING (and breaks). Top comedians and scientists took to the stage to unravel the mysteries of the universe, one punchline at a time. The audience learnt about the failures of AI, the challenges of living the nerd life and the secrets to finding the funny in your work while wiping away tears of laughter. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://scienceshowoff.wordpress.com/2024/05/24/the-uk-science-comedy-festival-july-14th/ |
| Description | The future of UAVs: towards trustworthy applications and development tools (Workshop) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | We organised the workshop to bring together experts from academia, government, and industry to establish connections and form groups for future research projects centred around the use of UAVs. The workshop was organised as part of the EPSRC Autonomy Project on Smart Solutions Towards Cellular-Connected Unmanned Aerial Vehicles System (EP/W004364/1). The workshop was organised by Dr Ayodeji O. Abioye, Dr Sergio A. Araujo-Estrada, and Dr Mohammad D. Soorati with the support of Prof Jim Scanlan and Prof Gopal Ramchurn. The "Future of UAVs: towards trustworthy applications and development tools" workshop was a national event hosted at the University of Southampton on Monday 20th November 2023, 09:30 - 16:30 Hrs. We invited four speakers to kick-start the roundtable sessions with talks on "Future UAVs" by Prof Jim Scanlan (SOTON UAV), "Trustworthy & Responsible use of AI" by Prof Gopal Ramchurn (TAS Hub and RAI UK), "Future Industry Applications of UAVs" by Prof Henry Tse (Connected Places Catapult), and "Public perception and ethical concerns on using drones" by Prof Christian Enemark (DRONETHICS). The event had three roundtable sessions to discuss novel applications of UAVs, as well as their ethical use and development tools. The discussion was guided around themes ranging from the agile economy (e.g. parcel delivery) and emergency response (e.g. medical delivery) to environment monitoring. New ideas for potential future applications, such as long-range drones with perch-and-recharge capability, were identified. Areas for future research and funding, such as the development of an all-weather drone and better airspace management, were also discussed. Attendees were able to expand their networks for future collaborations. |
| Year(s) Of Engagement Activity | 2023 |
| Description | Training for Project: Technology in Professional Services (TiPS) accelerator |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Industry/Business |
| Results and Impact | Equality Project Officer Fabian Luetz delivered a training session alongside SuperTech for the project Technology in Professional Services (TiPS) accelerator, a project funded by ESRC and UKRI working with legal and accountancy firms that are adopting technology, so that the project can support and study them during this process. The session was presented on 24/09/2024. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Transforming University Education with Generative AI |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | It has been a little over a year since ChatGPT burst onto the world stage. This high exposure of Generative AI has forced universities to reflect upon and rethink many aspects of the way they teach and assess. Building on AI research, innovation and deployment stretching over many decades, since the beginning of 2023 we have been exploring the potential of Generative AI to aid OU teaching and learning. In this talk I reported on early results, including proofs of concept in the areas of content generation and digital assistants for students, and the views of our students on these technologies, and outlined possible future directions. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://siks.nl/activities/activities/siks-symposium-on-advancing-education-and-knowledge-exploring-... |
| Description | Trust Conference: When tech policy becomes foreign policy (GN) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | The Thomson Reuters Foundation's Trust Conference is a global forum aimed at strengthening free, fair, and informed societies. Gina Neff moderated a panel discussion about the global governance of AI featuring Mark Surman, CEO of the Mozilla Foundation; Jennifer Bachus of the US State Department; and Mariagrazia Squicciarini of UNESCO. Themes spanned actions to take now, AI-powered economies, and effective technology governance. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Trust in AI (workshop in Singapore) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Third sector organisations |
| Results and Impact | During this workshop organised by the NTU Institute of Science and Technology for Humanity I provided a talk on 'Why diversity in AI is everyone's responsibility'. This networking event was very successful and there are plans to organise a second workshop aiming to solidify the relationships and apply together for funding. |
| Year(s) Of Engagement Activity | 2023 |
| Description | Turing SIG on Multiomics |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Other audiences |
| Results and Impact | Presentation on Multimodal AI. Prof Greg Slabaugh (QMUL/AdSoLve) participated. Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.turing.ac.uk/events/multimodal-data-integration-omics |
| Description | Turkish TRT World Interview |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Public/other audiences |
| Results and Impact | Live interview with TRT World news agency about deepfakes |
| Year(s) Of Engagement Activity | 2024 |
| Description | Two meetings with Law Society |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Two meetings with the Law Society, a stakeholder and project advisor, together with AdSoLve partners Jiahong Chen (University of Sheffield) and Prof Maria Liakata (QMUL). Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2024 |
| Description | UCL Podcast (JS) |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | This special episode explored UCL's evolved Grand Challenge, which aims to harness the power of data to tackle pressing societal issues and create more Data-Empowered Societies. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.ucl.ac.uk/grand-challenges/news/2024/sep/new-podcast-episode-released-launch-ucls-grand-... |
| Description | UK FCDO establishing Responsible AI Centre in India |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Served on a working group to help establish a joint UK-India Responsible AI centre via FCDO |
| Year(s) Of Engagement Activity | 2024 |
| Description | UK Government Office for Science |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Served on an expert panel to UK Government Office for Science about AI horizon scanning |
| Year(s) Of Engagement Activity | 2024 |
| Description | UKRI AI in Health & Social Care Symposium |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | The UKRI AI in Health & Social Care Symposium brought together the healthcare community to drive conversations about the current state of AI deployment and Responsible Research and Innovation within the rapidly changing field of AI in health and social care. We addressed sector-specific challenges by bringing together leading professionals from physical and mental health, social care, social scientists, UKRI-funded projects, RAi UK researchers, industry representatives, and members of the RAi UK Health and Social Care Working Group. Co-hosted by RAi UK and UKRI, this in-person event looked at existing use cases, identifying areas of greatest opportunity and need, exploring barriers to implementation, and assessing AI's potential to address inequities.
RAi UK AI Champions initiative: We were thrilled to launch the RAi UK AI Champions initiative and introduce the first nominated AI Champions. Led by the RAi UK Health and Social Care Working Group, this initiative aims to foster a network of AI advocates who will drive the responsible development and adoption of AI technologies across the health and social care system. The RAi UK AI Champion role is particularly important as these individuals will combine frontline healthcare experience with AI expertise and play a vital role in helping to ensure that AI solutions are effective, relevant, and address real-world needs. By empowering and supporting these champions, we aspire to accelerate innovation for improved patient outcomes and ensure AI technologies are implemented in a responsible and equitable manner for the benefit of all. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai.ac.uk/events/ai-for-healthcare/ |
| Description | UKRI AI in Health and Social Care Symposium |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Facilitated and organised the workshop with Aislinn Bergin, AdSoLve partner, University of Nottingham. Responsible AI UK project KP0016. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai.ac.uk/events/ai-for-healthcare/ |
| Description | UKRI AIR24 Day: Adaptive Human-Swarm Interaction Using Mental Workload measured by fNIRS Demo |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Industry/Business |
| Results and Impact | We delivered a demo using fNIRS and the HARSI drone simulator with the University of Southampton at AIR24 - UKRI AI and Robotics Accelerator. |
| Year(s) Of Engagement Activity | 2024 |
| Description | UNSW invited talk on AI regulation |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Presented the AI regulation international partnership to the Electrical Engineering department at UNSW in Australia. |
| Year(s) Of Engagement Activity | 2025 |
| Description | UnMute toolkit launch, IIT Guwahati |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | 51 people from a mix of industry, academia and third-sector attended a day-long workshop to learn about the UnMute toolkit. Attendees reported a positive change in views, and were highly complimentary about the event and the toolkit itself. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://unmute.tech/toolkit-event/ |
| Description | UoN AI+Engineering Event |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Local |
| Primary Audience | Other audiences |
| Results and Impact | Presentation to the University of Nottingham's academic network of AI researchers to highlight RAi UK. |
| Year(s) Of Engagement Activity | 2024 |
| Description | UoN Knowledge Exchange - How to evaluate public engagement and outreach activities |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | Local |
| Primary Audience | Other audiences |
| Results and Impact | Invited talk to share how public engagement and outreach can be evaluated within research |
| Year(s) Of Engagement Activity | 2024 |
| Description | UoN Workshop with RAi, HRDA, and CHAIR |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Local |
| Primary Audience | Other audiences |
| Results and Impact | A discussion and networking event posing the question 'What will we be saying about AI in ten years' time?' What will be considered cutting edge in AI, and which AI-informed technologies will be shaping how we live our lives? What crisis points may have emerged, and what might AI governance structures look like? We explored these questions through presentations and open discussion. This was an interdisciplinary event designed to bring together different perspectives on this central question, and also a networking event to help academics and researchers across the University meet each other and build connections; it was open to everyone across the University with an interest in AI. Two guest speakers gave presentations on their work and responded to the question posed in the session. Gianluca Sergi is Professor in Film Studies and Director of the Institute for Screen Industries Research at the University of Nottingham; he recently co-organised a Members' Conversation on AI in film for the Academy of Motion Picture Arts and Sciences, attended by around 500 leading film-makers. Julia Ive is a Lecturer in Natural Language Processing at Queen Mary University of London; she is the author of many mono- and multimodal text generation approaches in Machine Translation and Summarisation, and is currently working on theoretical aspects of style preservation and privacy-safety in artificial text generation. Lightning talks were also given by members of the three groups that collaborated to organise the session: Connecting Human AI Interaction Researchers (CHAIR), Responsible AI UK (RAi UK) and Human Rights in the Digital Age (HRDA). There was also time for Q&A and open discussion.
The event was funded through a Faculty of Arts Collaborative Research and KE Seed Corn Award given to CHAIR, and through sponsorship from RAi UK on behalf of the RAi @ Nottingham Network. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Vattikuti Foundation symposium on AI and robotics (PD) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Professor Dasgupta represented RAi UK as an invited guest at the Vattikuti Foundation symposium on AI and robotics in Jaipur, 13-16 February 2025. He spoke about "AI in prostate cancer" and "20 years of Surgical Simulation", and expanded the RAi UK AI Champions programme in India and the USA through cutting-edge experts he met at the symposium. |
| Year(s) Of Engagement Activity | 2025 |
| Description | Visit of DCMS Chief Scientific Advisor Prof. Tom Crick |
| Form Of Engagement Activity | Participation in an open day or visit at my research institution |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Jiahong Chen, AdSoLve project member, hosted a visit by DCMS Chief Scientific Advisor Prof. Tom Crick. Funding from Responsible AI UK (KP0016). |
| Year(s) Of Engagement Activity | 2025 |
| Description | Wales This Week: The AI Revolution (MJ) |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | Artificial Intelligence, or AI, is now in our homes, our workplaces, even in our pockets. But how well equipped are we here in Wales to cope with the so-called AI revolution? |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.itv.com/walesprogrammes/articles/wales-this-week-the-ai-revolution |
| Description | Watch a humanoid robot driving a car extremely slowly |
| Form Of Engagement Activity | A magazine, newsletter or online publication |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | Article by NewScientist, with comments from Professor Jack Stilgoe. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.newscientist.com/article/2435826-watch-a-humanoid-robot-driving-a-car-extremely-slowly/ |
| Description | Webinar - In conversation with PHAWM |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | In this webinar, Professor Simone Stumpf from the University of Glasgow introduced PHAWM (Participatory Harm Auditing Workbenches and Methodologies), a four-year, £3.4M RAi-funded project. This groundbreaking project brings together 25 researchers from seven leading UK universities, alongside 23 partner organisations, to address a critical challenge in AI: the lack of systematic auditing to assess the potential harms of predictive and generative AI. Led by the University of Glasgow, with support from institutions such as the Universities of Edinburgh, Sheffield, and King's College London, the consortium aims to develop methods to maximise the benefits of predictive and generative AI while minimising risks like bias and AI "hallucinations". The project will pioneer participatory AI auditing, in which diverse stakeholders without an AI background undertake audits of predictive and generative AI, either individually or collectively. The predictive AI use cases in the research will focus on health and media content: analysing data sets for predicting hospital readmissions and assessing child attachment for potential bias, and examining fairness in search engines and hate speech detection on social media. Professor Stumpf covered how to:
- Build novel participatory auditing workbenches for designing and deploying predictive and generative AI, targeted at diverse stakeholders
- Develop novel participatory AI auditing methodologies
- Embed participatory auditing in future AI deployment practices
- Extend the network of researchers in Responsible AI
The webinar was chaired by Professor Elvira Perez Vallejos, Chair of the Equities Pillar, RAi UK. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai.ac.uk/wp-content/uploads/2024/11/Recording_Webinar-Prof-Simone-Stumpf-1080p.mp4 |
| Description | Wellcome Digital Sensors Workshop |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Expert conversation at the Wellcome Digital Sensors Workshop, with Aislinn Bergin. Responsible AI UK project KP0016. |
| Year(s) Of Engagement Activity | 2024 |
| Description | When Tech Policy Becomes Foreign Policy: The Future Global Governance of AI |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Professor Gina Neff moderated the panel titled "When Tech Policy Becomes Foreign Policy: The Future Global Governance of AI," featuring leading experts from government and industry. This engaging discussion delved into the intersection of AI policy and international relations. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://event.trustconference.com/ |
| Description | Who Trolled Amber? An investigation into trolling, social media misinformation and what it means for all of us, June 12, 2024 |
| Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Media (as a channel to the public) |
| Results and Impact | Professor Gina Neff, along with Jen Robinson, joined Alexi Mostrous in the Tortoise newsroom to discuss Who Trolled Amber? and give their insights into the world of online disinformation and its effect on our everyday lives. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.tortoisemedia.com/thinkin/who-trolled-amber-an-investigation-into-trolling-social-media-... |
| Description | WoMENAIT London - Responsible AI: Ensuring Ethical Governance and Cultural Sensitivity |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Women of MENA in Technology London - "Responsible AI: Ensuring Ethical Governance and Cultural Sensitivity" - April 24, 2024.
Speakers:
- Sana Khareghani - Professor of Practice in AI, King's College London
- Ebru Binboga - Director, Data AI and Automation, IBM
- Mahtab Ghamsari - Senior Product Manager, Flawless
Moderator:
- Hussein Sahyouni - Senior Manager, Operational GenAI, Baringa |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.youtube.com/watch?v=mR9FOz7wGg4 |
| Description | Women in RAi: Generative AI and the representation of women |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | This workshop was hosted in partnership with Women in Tech Week and King's Institute for Artificial Intelligence. The workshop, for women and gender non-conforming people, encouraged participants to develop their own AI portrait. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://rai.ac.uk/events/women-in-rai-generative-ai-and-the-representation-of-women/ |
| Description | Workshop (in collaboration with Action Lab Indonesia and the Data and Democracy Research Hub at Monash University, Indonesia) with diverse stakeholders to explore the ethics of emulated empathy in human-AI partnerships |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Other audiences |
| Results and Impact | The workshop was a collaboration with Action Lab Indonesia and the Data and Democracy Research Hub at Monash University, Indonesia. It was attended by 22 people, bringing expertise from academia, tech industry, government, journalism and civil society organisations. The theme was "to explore ethics of emulated empathy in human-AI partnerships", particularly examining how the forthcoming IEEE 7014.1 recommended practice could be informed by Indonesian perspectives. The workshop discussion pointed to an urgent need for both technology regulators and producers to accommodate the nuanced needs of cultural and regional groups that exist across this sprawling archipelago. |
| Year(s) Of Engagement Activity | 2025 |
| Description | Workshop - AI and Defence: Readiness, Resilience and Mental Health |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | Organised and chaired an international workshop on the use of AI for mental health in the defence domain, with 125 participants from government (mostly defence, including MOD and DSTL), academia and commercial practitioners. Feedback from participants was that the event was very useful and has helped shape the UK conversation in this area. Work is underway on a RUSI Journal paper as a direct output from this event. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://www.southampton.ac.uk/~sem03/AI-and-Defence-2023.html |
| Description | Workshop College of Policing 'Advanced AI Skills in Policing' |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Policymakers/politicians |
| Results and Impact | Workshop to discuss AI in policing and the development of skills; the discussion fed into PROBabLE Futures outputs. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Workshop for Secondary School pupils held at Ulster University |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Regional |
| Primary Audience | Schools |
| Results and Impact | Ben Bland held a workshop on "AI Ethics Vs the Real World" for secondary school pupils at Ulster University. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Workshop on responsible futures for the Internet, Nottingham, January 2025 (TAS Hub- Good Systems Strategic Partnership project) |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Industry/Business |
| Results and Impact | This workshop followed on from a previous workshop session held in September 2024, picking up on themes raised. Industry participants from SMEs in Europe and Africa joined members of the University of Nottingham and members of third sector organisations to debate the priorities and values for a future, responsible internet. |
| Year(s) Of Engagement Activity | 2025 |
| Description | Workshop with legal academics and law students explore AI law and legal training |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Undergraduate students |
| Results and Impact | 40 people attended an online workshop where we discussed the knowledge and skills required for future lawyers to use AI ethically and responsibly. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Workshop with legal professionals to explore AI law and legal training |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Forty legal professionals (from law firms, the bar and in-house) attended an online workshop to discuss the opportunities and challenges for GenAI within legal practice. |
| Year(s) Of Engagement Activity | 2025 |
| Description | i-Emerge Workshop with Peer Support Workers |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Local |
| Primary Audience | Professional Practitioners |
| Results and Impact | This workshop offered a space in which to engage with emerging technologies like AI and VR. We were interested in understanding participants' thoughts, concerns, and ideas about their use in mental health, especially in the context of the communities that they serve. We sought to learn about different ways that we could inclusively engage with people about these emerging technologies for mental health across different organisations, environments, and with different communities and individuals, to help ensure that everyone's voice is included in their development and use. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.nottingham.ac.uk/city-as-lab/city-as-lab-projects/immersive-and-emerging-technologies-fo... |
| Description | i-Emerge Workshop with Third Sector Organisation |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Local |
| Primary Audience | Third sector organisations |
| Results and Impact | This workshop offered a space in which to engage with emerging technologies like AI and VR. We were interested in understanding participants' thoughts, concerns, and ideas about their use in mental health, especially in the context of the communities that they serve. We sought to learn about different ways that we could inclusively engage with people about these emerging technologies for mental health across different organisations, environments, and with different communities and individuals, to help ensure that everyone's voice is included in their development and use. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.nottingham.ac.uk/city-as-lab/city-as-lab-projects/immersive-and-emerging-technologies-fo... |
| Description | i-Emerge Workshops |
| Form Of Engagement Activity | Participation in an activity, workshop or similar |
| Part Of Official Scheme? | No |
| Geographic Reach | Local |
| Primary Audience | Professional Practitioners |
| Results and Impact | In a first workshop, we explored emerging technologies like AI, XR and robotics from the perspective of Peer Support Workers. In a second workshop, we explored the same technologies from the perspective of a charity for BAME people in recovery. With Aislinn Bergin, AdSoLve partner, University of Nottingham. Responsible AI UK project KP0016. |
| Year(s) Of Engagement Activity | 2024 |
| URL | https://www.nottingham.ac.uk/city-as-lab/city-as-lab-projects/immersive-and-emerging-technologies-fo... |
