Trust in Human-Machine Partnership

Lead Research Organisation: King's College London
Department Name: Informatics

Abstract

Interaction with machines is commonplace in the modern world, for a wide range of everyday tasks like making coffee, copying documents or driving to work. Forty years ago, these machines existed but were not automated or intelligent. Today, they all have computers embedded in them and can be programmed with advanced functionality beyond the mechanical jobs they performed two generations ago. Tomorrow, they will be talking to each other: my calendar will tell my coffee maker when to have my cuppa ready so that I can arrive at work on time for my first meeting; my satnav will tell my calendar how much time my autonomous car needs to make that journey given traffic and weather conditions; and my office copier will have documents ready to distribute at the meeting when I arrive in the office. And they will all be talking to me: I could request the coffee maker to produce herbal tea because I had too much coffee yesterday; and the copier could remind me that our office is (still) trying to go paperless and wouldn't I prefer to email the documents to meeting attendees instead of killing another tree?

This scenario will not be possible without three key features: an automated planner that coordinates between the various activities that need to be performed, determining where there are dependencies between tasks (e.g., don't drive to the office until I get in the car with my hot drink); a high level of trust between me and this intelligent system that helps organise the mundane actions in my life; and the ability for me to converse with the system and make joint decisions about these actions. Advancing the state-of-the-art in trustworthy, intelligent planning and decision support to realise these critical features lies at the centre of the research proposed by this Trust in Human-Machine Partnerships (THuMP) project.

THuMP will move us toward this future by following three avenues of investigation. First, we will introduce innovative techniques to the artificial intelligence (AI) community through a novel, intra-disciplinary strategy that brings computational argumentation and provenance to AI Planning. Second, we will take human-AI collaboration to the next level through an exciting, inter-disciplinary approach that unites human-agent interaction and information visualisation with AI Planning. Finally, we will progress the relationship between technology and law through a bold, multi-disciplinary approach that links legal and ethics research with new and improved AI Planning.

Why do we focus on AI Planning? A traditional sub-field of artificial intelligence, Planning develops methods for creating and maintaining sequences of actions for an AI (or a person) to execute, in the face of conflicting objectives, optimisation of multiple criteria, and timing and resource constraints. Ultimately, most decisions result in some kind of action, or action sequence. By focussing on AI Planning, THuMP captures the essence of what a collaborative AI decision-making system needs to do.
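To make the idea concrete, the essence of classical planning can be sketched as a search over world states, where each action has preconditions and effects. The following is a minimal illustration in the STRIPS style, using a toy domain inspired by the morning-routine scenario above; all action names and facts are illustrative assumptions, not artefacts of the THuMP project itself.

```python
from collections import deque

# Each action: (name, preconditions, add effects, delete effects).
# The dependency "don't drive to the office until I have my hot drink"
# is encoded as a precondition of get_in_car.
ACTIONS = [
    ("make_coffee", {"at_home"}, {"have_drink"}, set()),
    ("get_in_car", {"at_home", "have_drink"}, {"in_car"}, {"at_home"}),
    ("drive_to_office", {"in_car"}, {"at_office"}, {"in_car"}),
]

def plan(initial, goal):
    """Breadth-first search over states; returns a shortest action sequence."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:                      # all goal facts hold
            return steps
        for name, pre, add, delete in ACTIONS:
            if pre <= state:                   # action is applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None                                # goal unreachable

print(plan({"at_home"}, {"at_office", "have_drink"}))
```

Real planners replace this exhaustive search with heuristics, handle time and resources, and optimise over multiple criteria, but the core task is the same: producing an ordered sequence of actions that respects the dependencies between them.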

We believe that most AI systems will (need to) involve a human in the loop and that it is crucial to develop new AI technologies such that people can use, understand and trust them. THuMP strives for complete understanding and trustworthiness through transparency in AI. We will develop and test a general framework for "Explainable AI Planning" (XAIP), in which humans and an AI system can co-create plans for action; we will then instantiate this framework in two use cases that focus on resource allocation in two very different critical domains.

A cross-disciplinary project team of seven investigators, four collaborators and four postdoctoral research assistants will work with three project partners--a leading oil & gas services corporation; a leading international charity; and a leading global law firm--to move us into this envisioned future. An ambitious and realistic programme of networking, development, evaluation and public engagement is proposed.

Planned Impact

According to a June 2017 report from PricewaterhouseCoopers (PwC) on the impact of Artificial Intelligence (AI) on the UK economy, the economic growth directly attributable to AI will be no less than 5% of GDP, and may be up to 10%, by 2030. That figure represents a projected increase of more than 230 billion pounds in less than 15 years, and means that AI holds significant potential benefit for the UK. However, that potential depends on AI being seamlessly integrated throughout the economy: at work and at home; in factories, offices, shops and schools. That integration will not happen if people do not trust the technology, and unfortunately, distrust of black-box AI techniques appears to be growing. THuMP directly addresses the issue of trust in AI, and pursuing the overarching aim of the project--to achieve complete understanding and trustworthiness through transparency in AI systems--will help ensure that the promise of that additional 230 billion pounds a year is realised.

The main impact to be delivered by THuMP will be to demonstrate that trust in AI systems can be fostered if the AI can explain how it arrives at its recommendations. The demonstration will be in the context of AI Planning, a sub-area of artificial intelligence that is already being deployed in industry, including by our project partner Schlumberger. Through the project evaluation, THuMP will demonstrate that "Explainable AI Planning" (XAIP) systems engender greater trust than planning systems that do not provide explanations. More specifically, THuMP will carry out this demonstration not only in a laboratory setting, but also in the real-world settings of two of our project partners: an international oil & gas services company and an international charity. Thus, in addition to scientific results on the increase in trust that XAIP systems engender in humans, we will produce two use case studies for dissemination.

Of course, if AI is to become an accepted part of our lives, this will not happen by purely technological means. Magic technology will not make people's worries evaporate. Rather, we need to understand the causes of people's concerns about AI and overcome them. We need to establish a sound legal, ethical and regulatory framework in which AI systems will operate, and we need to understand people's fears and make sure that they are addressed both by the drafters of laws and regulations and by the programmers who create the AI systems. Through a series of public engagement activities, THuMP seeks to identify and assuage the public's fears about AI, helping to educate not only the lay person, but also the engineers and policymakers responsible for enabling and regulating AI in society.

THuMP aims to have an impact on multiple audiences. On the question of a suitable legal and ethical framework, THuMP will identify how AI systems can be made to fit within the EU General Data Protection Regulation that is coming into force in the near future. On the question of understanding and addressing people's fears, THuMP will make progress through an ambitious programme of public engagement activities. These activities aim both to discover the issues around AI that concern young adults--a group chosen for their tech savvy as "digital natives" and for the fact that they will be the first generation to live alongside AI systems for most of their lives--and to provide a vehicle for working through those concerns.
