Realising Accountable Intelligent Systems (RAInS)
Lead Research Organisation:
University of Aberdeen
Department Name: Computing Science
Abstract
Intelligent systems technologies are being utilised in more and more scenarios, including autonomous vehicles, smart home appliances, public services, retail and manufacturing. But what happens when such systems fail, as in the case of recent high-profile accidents involving autonomous vehicles? How are such systems (and their developers) held to account if they are found to be making biased or unfair decisions? Can we interrogate intelligent systems, to ensure they are fit for purpose before they're deployed? These are all real and timely challenges, given that intelligent systems will increasingly affect many aspects of everyday life.
While all new technologies have the capacity to do harm, with intelligent systems it may be difficult or even impossible to know what went wrong or who should be held responsible. There is a very real concern that the complexity of many AI technologies, and of the data and interactions between the surrounding systems and workflows, will reduce the justification for consequential decisions to "the algorithm made me do it", or indeed "we don't know what happened". And yet the potential for such systems to outperform humans in accuracy of decision-making, and even in safety, suggests that the desire to use them will be difficult to resist. The question then is how we might endeavour to have the best of both worlds. How can we benefit from the superhuman capacity and efficiency that such systems offer without giving up our desire for accountability, transparency and responsibility? How can we avoid a stalemate choice between forgoing the benefits of automated systems altogether or accepting a degree of arbitrariness that would be unthinkable in society's usual human relationships?
Working closely with a range of stakeholders, including members of the public, the legal profession and technology companies, we will explore what it means to realise future intelligent systems that are transparent and accountable. The Accountability Fabric is our vision of a future computational infrastructure supporting audit of such systems - somewhat analogous to (but more sophisticated than) the 'black box' flight recorders associated with passenger aircraft. Our work will increase transparency not only after the fact, but also in a manner which allows for early interrogation and audit, which in turn may help to prevent or mitigate harm ex ante. Before we can realise the Accountability Fabric, several key issues need to be investigated:
What are the important factors that influence citizens' perceptions of trust and accountability of intelligent systems?
What form ought legal liability to take for intelligent systems? How can the law operate fairly and incentivise optimal behaviour from those developing/using such systems?
How do we formulate an appropriate vocabulary with which to describe and characterise intelligent systems, their context, behaviours and biases?
What are the technical means for recording the behaviour of intelligent systems, from the data used and the algorithms deployed, to the flow-on effects of the decisions being made?
Can we realise an accountability solution for intelligent systems, operating across a range of technologies and organisational boundaries, that is able to support third party audit and assessment?
Answers to these (and the many other questions that will certainly emerge) will lead us to develop prototype solutions that will be evaluated with project partners. Our ambition is to create a means by which the developer of an intelligent system can provide a secure, tamper-proof record of the system's characteristics and behaviours that can be shared (under controlled circumstances) with relevant authorities in the event of an incident or complaint.
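To make the idea of a secure, tamper-proof record concrete, the following is a minimal sketch of the general technique of hash-chained, tamper-evident logging (Python; the JSON event structure and function names are illustrative assumptions, not the project's actual design):

```python
import hashlib
import json

def append_record(log: list, event: dict) -> None:
    """Append an event, chaining it to the hash of the previous entry."""
    prev_hash = log[-1]["hash"] if log else "0" * 64  # genesis sentinel
    body = {"event": event, "prev": prev_hash}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    log.append({**body, "hash": digest})

def verify(log: list) -> bool:
    """Recompute the chain; any retroactive edit breaks every later hash."""
    prev = "0" * 64
    for entry in log:
        body = {"event": entry["event"], "prev": entry["prev"]}
        digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        if entry["prev"] != prev or entry["hash"] != digest:
            return False
        prev = entry["hash"]
    return True

log = []
append_record(log, {"stage": "design", "note": "model specification approved"})
append_record(log, {"stage": "deployment", "note": "v1.0 released"})
assert verify(log)
```

In practice such a log would also be signed and lodged with a third party so that the developer alone cannot rewrite history; chaining as above makes tampering evident rather than impossible.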
Planned Impact
Issues of accountability regarding automated and intelligent systems touch all parts of society. Therefore, in broad terms, our work on providing the means for articulating, interrogating, validating and assessing intelligent systems and their behaviour brings great benefits to:
* Individuals
* Public sector organisations
* Government and policy-makers - in terms of those
- developing the regulatory frameworks around emerging technology (AI, autonomous systems, etc);
- using intelligent systems as part of policy implementation.
* Business (including SMEs), including both:
- those active in the autonomous systems, AI, smart technology marketplace;
- users of intelligent systems to achieve business aims.
How will they benefit from this research?
Broadly, individuals will benefit from this work, as it brings transparency and the means to challenge automated systems affecting their lives. Specifically, members of the public will benefit from their direct involvement in the research - through their participation in activities (including user workshops) which explore issues of accountability - and their ability to directly shape the research agenda. The wider public will be exposed to these issues via a series of public engagement activities (organised under the Alt-AI [Accountability-Liability-Transparency] banner) - our aim being to stimulate debate about the future of intelligent systems and society.
Public organisations will gain greater understanding of the challenges associated with future technology deployments, and of models for system accountability. Importantly, increased accountability and explainability of systems will work towards the public acceptability of such technology, while addressing public-sector concerns regarding safety, fairness, bias, etc., thereby enabling the benefits of data-driven policy implementation to be realised.
Government and policy-makers at local, devolved and national levels will be able to access evidence drawn from real user scenarios, as well as the opinions of citizens and members of the legal profession. We will provide useful resources both for legislators and for courts considering how such technologies should be used, as well as for public authorities and policy-makers more generally in establishing public trust in the use of such systems. At a technical level, by devising novel approaches for capturing evidence on how intelligent systems operate, and making this evidence auditable, we provide the means for producing the evidence needed for proper (governmental/judicial) oversight of intelligent systems. Further, these technical means can help shape regulatory frameworks, e.g. by embedding "accountability by design" principles, as has been done for 'privacy/security by design'.
Technology businesses will gain access to a range of solutions necessary to enhance the transparency and accountability of future intelligent systems. This is crucial for the industry, as otherwise public concern regarding such issues will hinder adoption. Our approach will be accessible through a range of open source software prototypes and frameworks, promoted through academic and industrial forums and through an online presence. Preliminary conversations with IBM, leaders in the intelligent systems (cognitive computing) space, show clear evidence of interest in our proposals.
In terms of industry in general, businesses see much value in automating a range of processes to bring about innovation and efficiency. Again, by tackling issues of accountability, this work directly contributes to increasing public acceptability - helping to ensure the full economic potential of the technology is realised.
Publications
Zainyte A.
(2021)
Challenges and Future Directions for Accountable Machine Learning
Fung C.P.
(2021)
Towards Accountability Driven Development for Machine Learning Systems
in CEUR Workshop Proceedings
Naja I.
(2022)
Using Knowledge Graphs to Unlock Practical Collection, Integration, and Audit of AI Accountability Information
in IEEE Access
Norval C.
(2021)
Workshop on Reviewable and Auditable Pervasive Systems (WRAPS)
Pang W.
(2021)
On Evidence Capture for Accountable AI Systems
in CEUR Workshop Proceedings
Title | An introduction to the RAInS Project |
Description | A video introducing the RAInS project to a general audience |
Type Of Art | Film/Video/Animation |
Year Produced | 2021 |
Impact | None to date |
URL | https://vimeo.com/481206247 |
Description | To realise accountable AI systems, different types of information from a range of sources need to be recorded throughout the system life cycle. However, the creation of such accountability records must be planned and embedded within the different life cycle stages, e.g. during the design of a system, during implementation, etc. We have developed a vocabulary and supporting toolset able to capture not only accountability information, but also abstract descriptions of accountability plans that guide the data collection process. Key components are: SAO - a lightweight generic ontology for describing accountability plans and corresponding accountability records for computational systems; RAInS - an ontology which extends SAO to model accountability information relevant to AI systems; and the AccountabilityFabric - a proof-of-concept implementation utilising the proposed ontologies to provide a visual interface for designing accountability plans and managing accountability records. |
Exploitation Route | The SAO/RAInS vocabulary has been designed to be re-usable and extendable across a host of application domains. The AccountabilityFabric prototype has thus far been illustrated using healthcare and domestic technology use-cases, but could be deployed in any application context requiring accountability records to be maintained. |
Sectors | Digital/Communication/Information Technologies (including Software), Financial Services and Management Consultancy, Healthcare, Government, Democracy and Justice, Transport |
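As an illustration of how the SAO/RAInS vocabularies described above might be exercised, the sketch below (Python with rdflib) builds a fragment of an accountability plan for the design stage. Only sao:AccountableAction and sao:AccountableResult are named in the findings; the '#'-form namespace IRIs, the sao:producesResult property and all identifiers are assumptions made for illustration:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

SAO = Namespace("https://w3id.org/sao#")      # IRI form assumed
RAINS = Namespace("https://w3id.org/rains#")  # IRI form assumed
EX = Namespace("http://example.org/screening/")

g = Graph()
g.bind("sao", SAO)
g.bind("rains", RAINS)

# An accountable action in the design stage: producing a model design spec.
design = EX["design-model"]
g.add((design, RDF.type, SAO.AccountableAction))
g.add((design, RDFS.label, Literal("Produce ML model design specification")))

# Its expected output, recorded as an accountable result.
spec = EX["model-design-spec"]
g.add((spec, RDF.type, SAO.AccountableResult))
g.add((design, SAO.producesResult, spec))  # property name assumed

# A human decision: approval of the specification by an accountable person.
approval = EX["spec-approval"]
g.add((approval, RDF.type, SAO.AccountableAction))
g.add((approval, RDFS.label, Literal("Approval of specification by accountable person")))

print(g.serialize(format="turtle"))
```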
Description | Through our engagement with a range of stakeholders (Scottish Breast Cancer Screening Service, BSI, Law Commission) we have raised awareness of the potential for digital solutions to support transparency and accountability of intelligent systems. We were consulted as part of the development of the Automated Vehicles joint report of the Law Commission and Scottish Law Commission (published January 2022). In March 2022, the IEEE Standard for Transparency of Autonomous Systems (7001-2021) was published. Members of the RAInS project team were members of the working group tasked with drafting this standard; it followed as a direct response to a recommendation in the general principles section of IEEE Ethically Aligned Design. |
First Year Of Impact | 2020 |
Sector | Digital/Communication/Information Technologies (including Software), Healthcare, Government, Democracy and Justice, Transport |
Impact Types | Societal, Policy & public services |
Description | Law Commission - Automated Vehicles: Joint Report |
Geographic Reach | National |
Policy Influence Type | Contribution to a national consultation/review |
URL | https://s3-eu-west-2.amazonaws.com/lawcom-prod-storage-11jsxou24uy7q/uploads/2022/01/Automated-vehic... |
Description | Membership of IEEE P7001 Transparency of Autonomous Systems Working Group |
Geographic Reach | Multiple continents/international |
Policy Influence Type | Membership of a guideline committee |
URL | https://standards.ieee.org/project/7001.html |
Description | AI and MR physics simulation to assess low-cost, low-field MRI as a cancer screening tool |
Amount | £96,553 (GBP) |
Funding ID | C69862/A29020 |
Organisation | Cancer Research Campaign |
Sector | Charity/Non Profit |
Country | United Kingdom |
Start | 06/2019 |
End | 07/2020 |
Description | Digital Circular Electrochemical Economy (DCEE) |
Amount | £964,620 (GBP) |
Funding ID | EP/V042432/1 |
Organisation | Engineering and Physical Sciences Research Council (EPSRC) |
Sector | Public |
Country | United Kingdom |
Start | 08/2021 |
End | 05/2024 |
Description | Endo.AI Real time automated endoscopic detection of oesophageal squamous cell cancer in early and precancerous stages |
Amount | £100,000 (GBP) |
Funding ID | C68574/A29021 |
Organisation | Cancer Research UK |
Sector | Charity/Non Profit |
Country | United Kingdom |
Start | 04/2019 |
End | 05/2020 |
Description | Enhancing Agri-Food Transparent Sustainability - EATS |
Amount | £408,499 (GBP) |
Funding ID | EP/V042270/1 |
Organisation | Engineering and Physical Sciences Research Council (EPSRC) |
Sector | Public |
Country | United Kingdom |
Start | 01/2022 |
End | 12/2024 |
Description | Open, reproducible analysis and reporting of data provenance for high-security health and administrative data |
Amount | £49,267 (GBP) |
Funding ID | 219700/Z/19/Z |
Organisation | Wellcome Trust |
Sector | Charity/Non Profit |
Country | United Kingdom |
Start | 04/2021 |
End | 04/2022 |
Description | Protecting Minority Ethnic Communities Online (PRIME) |
Amount | £1,466,412 (GBP) |
Funding ID | EP/W032333/1 |
Organisation | Engineering and Physical Sciences Research Council (EPSRC) |
Sector | Public |
Country | United Kingdom |
Start | 03/2022 |
End | 03/2025 |
Description | SARA: Semi-Automated Risk Assessment of Data Provenance and Clinical Free-Text in TREs A DARE UK Driver Project |
Amount | £383,147 (GBP) |
Funding ID | MC_PC_23005 |
Organisation | Medical Research Council (MRC) |
Sector | Public |
Country | United Kingdom |
Start | 02/2023 |
End | 10/2023 |
Description | SPRITE+: The Security, Privacy, Identity, and Trust Engagement NetworkPlus |
Amount | £1,386,196 (GBP) |
Funding ID | EP/S035869/1 |
Organisation | Engineering and Physical Sciences Research Council (EPSRC) |
Sector | Public |
Country | United Kingdom |
Start | 08/2019 |
End | 08/2024 |
Description | IBM Research UK |
Organisation | IBM |
Department | IBM UK Ltd |
Country | United Kingdom |
Sector | Private |
PI Contribution | Interviews with IBM staff regarding accountability requirements. |
Collaborator Contribution | Access to an intelligent system use-case (cognitive credit) to inform RAInS regarding issues of accountability. |
Impact | No specific outcomes to date. |
Start Year | 2019 |
Description | Law Commission Collaboration |
Organisation | Law Commission |
Country | United Kingdom |
Sector | Public |
PI Contribution | We have engaged with the Law Commission to understand their current thinking on emerging legal frameworks surrounding autonomous systems; this has included direct discussion with Commission staff, and invitations to RAInS events. |
Collaborator Contribution | Provision of advice on emerging legal frameworks, analysis of RAInS accountable AI use cases. |
Impact | No specific outcomes to date. |
Start Year | 2019 |
Title | The Accountability Fabric |
Description | This is a prototype implementation of a solution for managing accountability information related to the different life cycle stages of an AI system. The system demonstrates the utility of the RAInS (https://w3id.org/rains) and SAO (https://w3id.org/sao) ontologies. |
Type Of Technology | Software |
Year Produced | 2021 |
Open Source License? | Yes |
Impact | None to date |
URL | https://github.com/RAINS-UOA/rains-workflow-builder |
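A sketch of the kind of audit query such a prototype might support: it populates a small record graph and interrogates it with SPARQL. The record content and the sao:producesResult property are illustrative assumptions rather than the prototype's actual schema:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

SAO = Namespace("https://w3id.org/sao#")  # IRI form assumed
EX = Namespace("http://example.org/records/")

g = Graph()
# Two recorded accountable actions standing in for a real record export.
g.add((EX["train-model"], RDF.type, SAO.AccountableAction))
g.add((EX["train-model"], RDFS.label, Literal("Train screening model")))
g.add((EX["train-model"], SAO.producesResult, EX["model-v1"]))
g.add((EX["approve-model"], RDF.type, SAO.AccountableAction))
g.add((EX["approve-model"], RDFS.label, Literal("Approve model for deployment")))

# List every accountable action and, where recorded, its result.
rows = g.query("""
    PREFIX sao: <https://w3id.org/sao#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?action ?label ?result WHERE {
        ?action a sao:AccountableAction ; rdfs:label ?label .
        OPTIONAL { ?action sao:producesResult ?result . }
    }
""")
for row in rows:
    print(row.action, "|", row.label, "|", row.result)
```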
Title | The Realising Accountable Intelligent Systems (RAInS) Ontology |
Description | The RAInS ontology is an extension of the System Accountability Ontology (SAO) for the AI systems domain, defining a set of concepts required to document the design stage of such systems. Subclasses of sao:AccountableAction and sao:AccountableResult are defined to provide a minimal set of high-level constructs for describing accountability plans consisting of actions producing design specifications (e.g. an ML model design specification) and human decisions (e.g. approval of a specification by an accountable person). The ontology will be extended in the future to cover additional system life cycle stages. See also https://w3id.org/sao |
Type Of Technology | Software |
Year Produced | 2021 |
Impact | None to date |
URL | https://w3id.org/rains |
Title | The System Accountability Ontology (SAO) |
Description | The System Accountability Ontology (SAO) is a generic, reusable, lightweight core ontology which introduces a set of concepts to model accountability plans and their corresponding traces, to support accountability of computational systems. SAO introduces sao:AccountableObject to model an abstract representation of any meaningful grouping (software component, dataset, model, evaluation process, etc.) that may be used to organise system-related accountability information. See also the RAInS ontology (https://w3id.org/rains) |
Type Of Technology | Software |
Year Produced | 2021 |
Open Source License? | Yes |
Impact | None to date |
URL | https://w3id.org/sao |
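For illustration, this brief sketch (Python with rdflib) types a few system artefacts as sao:AccountableObject instances, reflecting the grouping role described above; all identifiers and labels are hypothetical:

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS

SAO = Namespace("https://w3id.org/sao#")  # IRI form assumed
EX = Namespace("http://example.org/")

g = Graph()
# Each artefact is itself an accountable object against which
# accountability information can be organised.
for name, label in [("training-dataset", "Screening training dataset"),
                    ("screening-model", "Screening classifier"),
                    ("evaluation", "Clinical evaluation process")]:
    g.add((EX[name], RDF.type, SAO.AccountableObject))
    g.add((EX[name], RDFS.label, Literal(label)))
```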
Description | AI the Good, the Bad, and the Ugly
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | Regional |
Primary Audience | Other audiences |
Results and Impact | We presented our project and explored the accountability and transparency challenges of AI, with particular focus on facial recognition technology. |
Year(s) Of Engagement Activity | 2019 |
URL | https://www.explorathon.co.uk/events/ai-the-good-the-bad-and-the-ugly/ |
Description | Breast Cancer Awareness: A discussion about AI in breast cancer screening |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Other audiences |
Results and Impact | Coinciding with Breast Cancer Awareness Month, we organised a Zoom event: an informal discussion about the role of Artificial Intelligence (AI) in routine breast cancer screening, with opportunities to hear more about this potential use of new technology. Participants were also given the chance to share their thoughts or questions related to using AI for these procedures.
Year(s) Of Engagement Activity | 2020 |
Description | Bright Club |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | Local |
Primary Audience | Public/other audiences |
Results and Impact | Bright Club mixes research and comedy to create an entertaining night of laughs, music and new ideas. The nights are held in a comedy club setting where presenters perform an 8-minute comedy sketch based on their research.
Year(s) Of Engagement Activity | 2022 |
Description | Created a Video for PechaKucha 2021 |
Form Of Engagement Activity | Engagement focused website, blog or social media channel |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Public/other audiences |
Results and Impact | Our team member Dr Milan Markovic participated in September's Explorathon PechaKucha and talked about our project. The event is available on YouTube at https://youtu.be/FAnVoIUM8MI; Milan's presentation runs from 01:24 to 08:14.
Year(s) Of Engagement Activity | 2021 |
URL | https://www.pechakucha.com/events/aberdeen-vol-29 |
Description | Distributed Future podcast |
Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Media (as a channel to the public) |
Results and Impact | The podcast topic was provenance and accountability for automated systems.
Year(s) Of Engagement Activity | 2019 |
URL | https://distributedfutu.re/#episode24 |
Description | Held the Explorathon event What Went Wrong When The AI Got it Wrong? |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | Regional |
Primary Audience | Public/other audiences |
Results and Impact | We discussed how the use of AI systems could lead to potential gender or racial discrimination, risks to healthcare, and increased business costs. Our focus was on how the root cause of such failures can be traced to any stage of an AI system's design, development or use, and that liability can lie with different stakeholders of the system - even users.
Year(s) Of Engagement Activity | 2021 |
URL | https://www.explorathon.co.uk/events-programme/what-went-wrong-when-the-ai-got-it-wrong/ |
Description | Participate in the healthcare technology workshop at University of Strathclyde |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Professional Practitioners |
Results and Impact | Dr Wei Pang attended the healthcare technology workshop at the University of Strathclyde on 16 February 2024. Workshop participants included researchers from several Scottish universities and representatives from industry. He proposed responsible AI for healthcare and protecting ethnic minority groups as a discussion topic; a more generic version (responsible AI for healthcare) was adopted as one of the four topics discussed towards the end of the workshop, attracting a group of researchers for discussion.
Year(s) Of Engagement Activity | 2024 |
Description | Presentation (MM) at National Taiwan University, as part of Workshop on AI, Ethics & Healthcare.
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | Presentation entitled "Computational Models of Provenance as a Substrate for Transparent & Accountable AI " given at part of the Workshop on 'Ethical, Legal and Societal Issues (ELSI) in Artificial Intelligence-assisted Medical Care - Challenges and Responses' organised by National Taiwan University Hospital , Taipei, Taiwan - January 18th 2020. Discussion with audience members focussed on how to architect technical solutions to support AI systems accountability. |
Year(s) Of Engagement Activity | 2020 |
Description | Presentation (PE) at National Taiwan University, as part of Workshop on AI, Ethics & Healthcare. |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | Presentation entitled "Towards Future Accountable Intelligent (Healthcare) Systems" given at part of the Workshop on 'Ethical, Legal and Societal Issues (ELSI) in Artificial Intelligence-assisted Medical Care - Challenges and Responses' organised by National Taiwan University Hospital , Taipei, Taiwan - January 18th 2020. Discussion with audience members focussed on the potential for future autonomous system in healthcare (and their designers) to be made accountable. |
Year(s) Of Engagement Activity | 2020 |
Description | Presentation at IIIT-Bangalore on RAInS activities |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Postgraduate students |
Results and Impact | Presentation entitled "Towards Accountable AIs" given during a visit to IIIT-Bangalore on February 24th, 2020. Discussions with faculty and postgraduate students focussed on what accountability means for intelligent systems, and how this might be subject to audit (by humans or other machines).
Year(s) Of Engagement Activity | 2020 |
Description | Roundtable on 'Making AI Use Cases Useful' held at Pembroke College, Oxford on 9/12/19 |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Professional Practitioners |
Results and Impact | Roundtable on 'Making AI Use Cases Useful' held at Pembroke College, Oxford on 9/12/19. Participants included representatives from the RAInS project, Law Commission, ORBIT, Turing Institute. Focus was on exploring questions of accountability arising in two intelligent systems use cases - automated breast cancer screening and autonomous vehicles. |
Year(s) Of Engagement Activity | 2019 |
Description | Scottish Research Showcase Flashmob - RAInS video |
Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Public/other audiences |
Results and Impact | The Scottish Research Showcase, in collaboration with the Global Science Show, organised a Twitter "flashmob" of science and learning. We showcased our work by sharing a video.
Year(s) Of Engagement Activity | 2020 |
URL | https://vimeo.com/483942669 |
Description | Seminar Presentation at RGU School of Computing - 20 Jan 2022 |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | Local |
Primary Audience | Postgraduate students |
Results and Impact | I presented our work so far in RAInS to a mix of research staff and PhD students from the School of Computing at Robert Gordon University. The presentation covers work presented in three published papers (ESWC, ISWC, and Data & Policy). There was a Q&A afterwards. |
Year(s) Of Engagement Activity | 2022 |
Description | When AI gets it wrong: Who's to blame for technology's failure? |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Public/other audiences |
Results and Impact | During the one-hour Zoom event, we explained that different decisions are made when developing AI systems (by designers, builders, operators, and users). The audience was given the chance to "vote" on the best outcomes for proposed scenarios before learning what happens behind the scenes of those AI systems.
Year(s) Of Engagement Activity | 2020 |
URL | https://www.explorathon.co.uk/events/when-ai-gets-it-wrong/ |
Description | Workshop on AI Ethics in the Financial Sector |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | A two-day symposium dedicated to the ethics of AI in the financial sector. |
Year(s) Of Engagement Activity | 2019 |
URL | https://www.turing.ac.uk/events/ai-ethics-financial-sector |
Description | Workshop on Accountability and Emerging Technologies |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | This meeting took place at the Alan Turing Institute, where a group of academic and industry researchers, as well as lawyers, discussed the challenges facing accountability in emerging technologies. Participants shared their views on the definition of accountability, the regulation of accountable systems, what accountability would mean once laws are in place, what an accountable system would look like, what it might mean to hold a system to account, and when a system can be said to have the property of being accountable.
Year(s) Of Engagement Activity | 2019 |
Description | Workshop on Accountability of Autonomous Vehicles |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Policymakers/politicians |
Results and Impact | The workshop was to discuss accountability of autonomous vehicles and elicit the requirements for accountability within this use case. The participants were from regulatory/policy making and professional backgrounds. |
Year(s) Of Engagement Activity | 2021 |
Description | Workshop on Reviewable and Auditable Pervasive Systems (WRAPS) |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Other audiences |
Results and Impact | This was an academic workshop organised by the Compliant and Accountable Systems Group (University of Cambridge) and the Realising Accountable Intelligent Systems (RAInS) project. The workshop was co-located with the International Conference for Ubiquitous Computing (UbiComp 2021) and was held as a virtual event on 25th September 2021.
Year(s) Of Engagement Activity | 2021 |
URL | https://wraps-workshop.github.io/ |