Realising Accountable Intelligent Systems (RAInS)

Lead Research Organisation: University of Aberdeen
Department Name: Computing Science

Abstract

Intelligent systems technologies are being utilised in an ever-growing range of scenarios, including autonomous vehicles, smart home appliances, public services, retail and manufacturing. But what happens when such systems fail, as in the case of recent high-profile accidents involving autonomous vehicles? How are such systems (and their developers) held to account if they are found to be making biased or unfair decisions? Can we interrogate intelligent systems to ensure they are fit for purpose before they are deployed? These are all real and timely challenges, given that intelligent systems will increasingly affect many aspects of everyday life.

While all new technologies have the capacity to do harm, with intelligent systems it may be difficult or even impossible to know what went wrong or who should be held responsible. There is a very real concern that the complexity of many AI technologies, together with the data they use and their interactions with surrounding systems and workflows, will reduce the justification for consequential decisions to "the algorithm made me do it", or indeed "we don't know what happened". And yet the potential for such systems to outperform humans in accuracy of decision-making, and even in safety, suggests that the desire to use them will be difficult to resist. The question, then, is how we might have the best of both worlds. How can we benefit from the superhuman capacity and efficiency that such systems offer without giving up our desire for accountability, transparency and responsibility? How can we avoid a stark choice between forgoing the benefits of automated systems altogether and accepting a degree of arbitrariness that would be unthinkable in society's usual human relationships?

Working closely with a range of stakeholders, including members of the public, the legal profession and technology companies, we will explore what it means to realise future intelligent systems that are transparent and accountable. The Accountability Fabric is our vision of a future computational infrastructure supporting the audit of such systems - somewhat analogous to (but more sophisticated than) the 'black box' flight recorders carried by passenger aircraft. Our work will increase transparency not only after the fact, but also in a manner which allows for early interrogation and audit, which in turn may help to prevent or mitigate harm ex ante. Before we can realise the Accountability Fabric, several key issues need to be investigated:

What are the important factors that influence citizens' perceptions of the trustworthiness and accountability of intelligent systems?

What form ought legal liability to take for intelligent systems? How can the law operate fairly and incentivise optimal behaviour from those developing/using such systems?

How do we formulate an appropriate vocabulary with which to describe and characterise intelligent systems, their context, behaviours and biases?

What are the technical means for recording the behaviour of intelligent systems, from the data used and the algorithms deployed, through to the flow-on effects of the decisions being made?

Can we realise an accountability solution for intelligent systems, operating across a range of technologies and organisational boundaries, that is able to support third-party audit and assessment?

Answers to these (and the many other questions that will certainly emerge) will lead us to develop prototype solutions that will be evaluated with project partners. Our ambition is to create a means by which the developer of an intelligent system can provide a secure, tamper-proof record of the system's characteristics and behaviours that can be shared (under controlled circumstances) with relevant authorities in the event of an incident or complaint.
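
To make this ambition concrete, the sketch below illustrates one plausible ingredient of such a record: a hash-chained, append-only audit log, written in Python using only the standard library. This is a minimal illustration under our own assumptions - the class and field names (AuditLog, record, verify, prev_hash) are hypothetical, not part of the project's design - but it shows how each recorded event can commit cryptographically to its predecessor, so that later tampering with any entry becomes detectable.

    # Minimal sketch (illustrative only, not the project's actual design):
    # a hash-chained, append-only audit log. Each entry includes the hash of
    # the previous entry, so altering any past record breaks verification.
    import hashlib
    import json
    import time

    class AuditLog:
        """Hypothetical tamper-evident log of intelligent-system events."""

        def __init__(self):
            self.entries = []
            self._last_hash = "0" * 64  # genesis value for the chain

        def record(self, event: dict) -> dict:
            entry = {
                "timestamp": time.time(),
                "event": event,
                "prev_hash": self._last_hash,
            }
            # Canonical serialisation (sorted keys) so the hash is reproducible.
            digest = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
            entry["hash"] = digest
            self._last_hash = digest
            self.entries.append(entry)
            return entry

        def verify(self) -> bool:
            # Recompute every hash and check each entry points at its predecessor.
            prev = "0" * 64
            for entry in self.entries:
                body = {k: v for k, v in entry.items() if k != "hash"}
                recomputed = hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()
                ).hexdigest()
                if entry["prev_hash"] != prev or recomputed != entry["hash"]:
                    return False
                prev = entry["hash"]
            return True

    log = AuditLog()
    log.record({"component": "perception", "decision": "pedestrian detected"})
    log.record({"component": "planner", "decision": "emergency brake"})
    assert log.verify()  # editing any past entry now makes verify() fail

In a deployed system the head hash would additionally need to be anchored with an external party, and entries signed, for the record to be tamper-evident against wholesale replacement; the sketch shows only the chaining idea.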

Planned Impact

Issues of accountability regarding automated and intelligent systems touch all parts of society. Therefore, in broad terms, our work on providing the means for articulating, interrogating, validating and assessing intelligent systems and their behaviour brings great benefits to:

* Individuals

* Public sector organisations

* Government and policy-makers - both those
- developing the regulatory frameworks around emerging technologies (AI, autonomous systems, etc.);
- using intelligent systems as part of policy implementation.

* Businesses (including SMEs), both:
- those active in the autonomous systems, AI and smart technology marketplace;
- those using intelligent systems to achieve business aims.

How will they benefit from this research?

Broadly, individuals will benefit from this work, as it brings transparency and the means to challenge automated systems affecting their lives. Specifically, members of the public will benefit from their direct involvement in the research - through their participation in activities (including user workshops) which explore issues of accountability - and their ability to directly shape the research agenda. The wider public will be exposed to these issues via a series of public engagement activities (organised under the Alt-AI [Accountability-Liability-Transparency] banner) - our aim being to stimulate debate about the future of intelligent systems and society.

Public organisations will gain a greater understanding of the challenges associated with future technology deployments, and of models for system accountability. Importantly, increased accountability and explainability of systems will promote public acceptance of such technology, while addressing public-sector concerns regarding safety, fairness, bias, etc., thereby helping to realise the benefits of data-driven policy implementation.

Government and policy-makers at local, devolved and national levels will be able to access evidence drawn from real user scenarios, as well as the opinions of citizens and members of the legal profession. We will provide useful resources for legislators and for courts considering how such technologies should be used, as well as for public authorities and policy-makers more generally in establishing public trust in the use of such systems. At a technical level, by devising novel approaches for capturing evidence of how intelligent systems operate, and for making that evidence auditable, we provide the means for producing the evidence needed for proper (governmental/judicial) oversight of intelligent systems. Further, these technical means can help shape regulatory frameworks, e.g. by embedding 'accountability by design' principles, as has been done for 'privacy/security by design'.

Technology businesses will gain access to a range of solutions necessary to enhance the transparency and accountability of future intelligent systems. This is crucial for the industry, as public concern regarding such issues will otherwise hinder adoption. Our approach will be accessible through a range of open-source software prototypes and frameworks, promoted through academic and industrial forums and through an online presence. Preliminary conversations with IBM, a leader in the intelligent systems (cognitive computing) space, show clear interest in our proposals.

In terms of industry more generally, businesses see much value in automating a range of processes to bring about innovation and efficiency. Again, by tackling issues of accountability, this work directly contributes to increasing public acceptance, helping to ensure that the full economic potential of the technology is realised.
