Explainable Reasoning for Human-Robot Collaboration

Lead Research Organisation: University of Birmingham
Department Name: School of Computer Science

Abstract

The use of robots to collaborate with humans in complex domains such as healthcare, navigation and disaster rescue poses some fundamental open problems. This thesis focuses on the problem of enabling a robot to explain its knowledge, beliefs and decisions to humans. It is challenging to achieve this objective because such a robot will be equipped with different descriptions of knowledge and uncertainty. The robot often has access to rich common sense domain knowledge that holds in all but a few exceptional circumstances, e.g., "books are usually in the library, but cookbooks may be in the kitchen". The robot can also extract information from its sensor inputs, and the associated uncertainty is often modelled probabilistically, e.g., "I am 90% certain I saw the robotics book on the desk". In addition, human participants may not have the time or expertise to provide comprehensive knowledge or elaborate feedback. To achieve explainable reasoning and learning, we will thus need to enable the robot to represent, reason with, and revise different descriptions of incomplete domain knowledge and uncertainty at different levels of abstraction.
Non-monotonic logical reasoning paradigms such as Answer Set Prolog (ASP) support elegant representation of, and reasoning with, common sense domain knowledge. However, these paradigms require comprehensive knowledge of the domain and its dynamics, and (by themselves) are not well-suited to represent and reason with probabilistic knowledge. Probabilistic sequential decision-making algorithms, on the other hand, can be used to reason with probabilistic models of uncertainty in sensing and actuation. Recent work has developed an architecture that reasons with tightly-coupled transition diagrams of any given domain at different resolutions, with the fine-resolution descriptions defined as a refinement of the coarse-resolution description of the domain. For any given goal, non-monotonic logical reasoning with the coarse-resolution description, which includes common sense knowledge, provides a plan of abstract actions. Each such abstract action is implemented as a sequence of concrete actions by zooming to and reasoning probabilistically over just the relevant part of the fine-resolution description. There has also been work on using non-monotonic logical reasoning, relational reinforcement learning, and inductive learning to interactively and incrementally learn and revise the existing domain knowledge.
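The following is a minimal sketch, in Python, of the coarse-to-fine idea described above, under an assumed toy domain: a commonsense default ("books are usually in the library, but cookbooks may be in the kitchen") selects the abstract plan step, which is then implemented by zooming to a fine-resolution model of the chosen room and reasoning probabilistically over noisy observations (here a simple Bayes filter over cells, rather than the ASP-plus-POMDP machinery of the actual architecture). All function names, the domain, and the probabilities are illustrative assumptions, not part of the published work.

```python
# Illustrative sketch only: not the ASP/POMDP architecture described above.
# Coarse level: defaults with exceptions pick where to search.
# Fine level: the chosen room is refined into cells and searched with a
# simple Bayes filter over noisy observations.
import random

def default_location(book: str) -> str:
    # Commonsense default with an exception (assumed toy knowledge base).
    exceptions = {"cookbook": "kitchen"}
    return exceptions.get(book, "library")

def refine(room: str) -> list[str]:
    # Zoom: the coarse location refines into fine-resolution cells.
    return [f"{room}/cell{i}" for i in range(3)]

def look(cell: str, true_cell: str, p_true: float = 0.9) -> bool:
    # Noisy sensor: reports presence/absence correctly with probability 0.9.
    present = (cell == true_cell)
    return present if random.random() < p_true else not present

def bayes_update(belief: dict, cell: str, observed: bool, p_true: float = 0.9) -> None:
    # Bayes rule over "which cell holds the book", given one observation.
    for c in belief:
        likelihood = p_true if observed == (c == cell) else 1 - p_true
        belief[c] *= likelihood
    total = sum(belief.values())
    for c in belief:
        belief[c] /= total

def find_book(book: str, true_cell: str) -> str:
    room = default_location(book)                 # coarse, commonsense plan step
    cells = refine(room)                          # zoom to the relevant fine model
    belief = {c: 1.0 / len(cells) for c in cells}
    trace = [f"abstract plan: go to {room} (default location for '{book}')"]
    for _ in range(6):                            # concrete sensing actions
        cell = max(belief, key=belief.get)
        obs = look(cell, true_cell)
        bayes_update(belief, cell, obs)
        trace.append(f"looked in {cell}: observed={obs}, belief={belief[cell]:.2f}")
        if belief[cell] > 0.95:
            trace.append(f"concluded '{book}' is in {cell}")
            break
    return "\n".join(trace)

if __name__ == "__main__":
    random.seed(1)
    print(find_book("robotics book", true_cell="library/cell2"))
```

The trace returned by this toy planner is also the kind of information an explanation would be built from: which default was applied, which concrete actions were executed, and how confident the robot became.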
This thesis will significantly expand the capabilities of the existing architectures to allow a robot to explain its knowledge, beliefs and decisions on demand. This ability, also called "Explainable Agency", in turn requires four distinct functional abilities: (1) explain decisions made during plan generation; (2) report which actions it executed; (3) explain how actual events diverged from the plan and how it adapted in response; and (4) communicate its decisions and reasons to humans at the desired level of abstraction. Explainable agency has the potential to make the behaviour of robots more transparent, but it remains a fundamental open problem in robotics.
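As a rough illustration of abilities (2) to (4), the sketch below records planned and executed actions, notes divergences, and produces an explanation at either a coarse or a detailed level of abstraction. All class and field names are hypothetical assumptions for this example; it is not part of the existing architectures.

```python
# Hypothetical sketch of the record-keeping explainable agency needs:
# log planned vs executed actions, note divergences, and answer
# "what did you do and why?" at a chosen level of abstraction.
from dataclasses import dataclass, field

@dataclass
class Step:
    planned: str                                   # abstract action from the coarse plan
    executed: list = field(default_factory=list)   # concrete actions actually run
    outcome: str = "ok"                            # "ok" or a description of the divergence
    reason: str = ""                               # why the action was selected

@dataclass
class ExecutionTrace:
    steps: list = field(default_factory=list)

    def explain(self, abstract: bool = True) -> str:
        lines = []
        for i, s in enumerate(self.steps, 1):
            lines.append(f"{i}. planned '{s.planned}' because {s.reason}")
            if not abstract:
                lines.extend(f"   executed: {a}" for a in s.executed)
            if s.outcome != "ok":
                lines.append(f"   divergence: {s.outcome}")
        return "\n".join(lines)

if __name__ == "__main__":
    trace = ExecutionTrace(steps=[
        Step(planned="go to library",
             executed=["move(corridor)", "move(library)"],
             reason="books are usually in the library"),
        Step(planned="pick up robotics book",
             executed=["look(desk)", "grasp(book)"],
             outcome="book was on the desk, not on the shelf as expected",
             reason="the book was observed with 90% certainty on the desk"),
    ])
    print(trace.explain(abstract=True))   # coarse explanation for a human
    print(trace.explain(abstract=False))  # detailed report of executed actions
```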

Studentship Projects

Project Reference   Relationship   Related To     Start        End          Student Name
EP/N509590/1                                      01/10/2016   30/09/2021
2104531             Studentship    EP/N509590/1   01/10/2018   30/12/2021   Oliver Kamperis
EP/R513167/1                                      01/10/2018   30/09/2023
2104531             Studentship    EP/R513167/1   01/10/2018   30/12/2021   Oliver Kamperis