Contrastive Explanations for Explainable Artificial Intelligence Planning

Lead Research Organisation: King's College London
Department Name: Informatics

Abstract

Explainable Planning is an important area of research that focuses on providing justifications and explanations for planner decisions. Much effort has been made in this field to provide explanations for users; however, comparatively little effort has gone into determining what users actually want explained. We observe that the need for plan explanation is driven by the fact that a human and a planning agent may have different models of the planning problem and different computational abilities. We hypothesise that users prompt explanations through local contrastive questions rather than global "how" or "why" questions. To test this hypothesis, we conducted an empirical study to find out what questions users have when faced with a plan. We present our analysis of the results of the study, categorising our findings to create a taxonomy of questions.
We find that the majority of user questions are contrastive questions of the form "Why A rather than B?". These questions can be answered with a contrastive explanation that compares properties of the original plan containing A against the contrastive plan containing B. We introduce the plan negotiation problem, in which there is some disagreement between the proposed plan and the plan expected by the user, and propose a solution to this problem through an iterative process of question asking and answering. We propose a domain-independent approach for compiling these questions into constraints. These constraints are added to the planning model, so that a solution to the new model represents the contrastive plan, which we use to produce an explanation. We introduce a formal description of the compilation from user question to constraints in a temporal and numeric PDDL2.1 planning setting. We created a framework implementing the above.
We then evaluated the framework through a user study. The results show that the explanations we provide are satisfactory to the majority of users.

Studentship Projects

Project Reference   Relationship   Related To      Start        End          Student Name
EP/R513064/1                                        01/10/2018   30/09/2023
2569547             Studentship    EP/R513064/1    01/10/2018   26/11/2022   Benjamin Krarup
 
Description Explainable Planning is an important area of research that focuses on providing justifications and explanations for planner decisions. Much effort has been made in this field to provide explanations for users; however, comparatively little effort has gone into determining what users actually want explained. We observe that the need for plan explanation is driven by the fact that a human and a planning agent may have different models of the planning problem and different computational abilities. We hypothesise that users prompt explanations through local contrastive questions rather than global "how" or "why" questions. To test this hypothesis, we conducted an empirical study to find out what questions users have when faced with a plan. We present our analysis of the results of the study, categorising our findings to create a taxonomy of questions. We find that the majority of user questions are contrastive questions of the form "Why A rather than B?".

These questions can be answered with a contrastive explanation that compares properties of the original plan containing A against the contrastive plan containing B. We introduce the plan negotiation problem, in which there is some disagreement between the proposed plan and the plan expected by the user, and propose a solution to this problem through an iterative process of question asking and answering. We propose a domain-independent approach for compiling these questions into constraints. These constraints are added to the planning model, so that a solution to the new model represents the contrastive plan, which we use to produce an explanation. We introduce a formal description of the compilation from user question to constraints in a temporal and numeric PDDL2.1 planning setting.
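As a rough illustration of this idea (the data structures and function below are hypothetical simplifications, not the formal PDDL2.1 compilation described in the work), a question of the form "Why A rather than B?" can be treated as a pair of constraints on the model: forbid A and require B. Solving the constrained model yields the contrastive plan that the explanation compares against the original.

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class PlanningModel:
    # Hypothetical, simplified stand-in for a full planning model.
    actions: frozenset                  # actions available to the planner
    forbidden: frozenset = frozenset()  # actions the contrastive plan must avoid
    required: frozenset = frozenset()   # actions the contrastive plan must include

def compile_contrastive_question(model, a, b):
    """Compile 'Why a rather than b?' into constraints on the model."""
    return replace(
        model,
        forbidden=model.forbidden | {a},  # exclude the questioned action A
        required=model.required | {b},    # force the user's alternative B
    )

# Each follow-up question in the negotiation adds further constraints; a
# planner solving the constrained model produces the contrastive plan.
model = PlanningModel(actions=frozenset({"fly-plane", "drive-truck", "load", "unload"}))
constrained = compile_contrastive_question(model, "fly-plane", "drive-truck")

In the PDDL2.1 setting described above, such constraints are instead compiled into the planning model itself, so that an unmodified planner can be used to find the contrastive plan.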

We created a framework implementing the above. We found that compiling these questions into planning models had little effect on either the time taken to find solutions or the quality of the solutions found, compared to the original models. We then evaluated the framework through a user study. The results show that the explanations we provide are satisfactory to the majority of users.

From the results of the user study, we noted that simply showing the detailed differences between two plans is often not very helpful to the user. Instead, the explanation should focus on the essential characteristics of the problem that make one solution better than the other. We show how these explanations can be generated by abstracting features of the planning problem until the two plans become equi-quality -- that is, of equal or similar quality. We can then explain why one plan is better or worse than the other in terms of the abstracted features that account for the difference in plan quality between the two.
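A minimal sketch of this abstraction loop is given below, under the assumption of two hypothetical helpers that are not part of the actual implementation: plan_cost(problem, plan), which scores a plan against a (possibly abstracted) problem, and abstract_feature(problem, feature), which returns a copy of the problem with that feature abstracted away.

def explain_quality_difference(problem, plan_a, plan_b, features,
                               plan_cost, abstract_feature, tolerance=1e-6):
    """Abstract features until the two plans become equi-quality; the
    features abstracted along the way are the ones that explain the gap."""
    responsible = []
    current = problem
    for feature in features:
        gap = abs(plan_cost(current, plan_a) - plan_cost(current, plan_b))
        if gap <= tolerance:
            break                        # plans are now of (near-)equal quality
        candidate = abstract_feature(current, feature)
        new_gap = abs(plan_cost(candidate, plan_a) - plan_cost(candidate, plan_b))
        if new_gap < gap:
            current = candidate          # keep abstractions that narrow the gap
            responsible.append(feature)
    return responsible                   # features driving the quality difference

The explanation then reports the recorded features as the characteristics of the problem that make one plan better than the other.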

It was also apparent that it is essential to consider the ethical principles under which autonomous systems operate. We therefore examined how contrastive and non-contrastive explanations can be used in understanding the ethics of action plans. We proposed an iterative framework to allow users to ask questions about the ethics of plans and receive automatically generated contrastive and non-contrastive explanations. We ran a user study in a robo-ethics domain that indicated that the generated explanations help humans to understand the ethical principles that underlie a generated action plan.
Exploitation Route Automated planning is being used in increasingly complex applications, and explanation plays an important role in building trust, both in automated planners and in the plans they produce. When the audience for a plan includes humans, it is natural to suppose that some users might wish to question the reasoning, intentions and underlying assumptions that led to the choices in the plan. Through the work funded by this award, users can get answers to these questions. This research is therefore useful in any industry in which automated planning is used: for example, industries in which logistics and route planning are key, robotics, automated agricultural work, aerospace, and defence. The work funded through this award also lays groundwork for further research. We found that users tend to ask contrastive questions when faced with action plans, so further research should be directed towards answering these types of questions. We also created a general conversational process (or framework) for explainable planning, and showed how it can be used to provide explanations by plan highlighting and through model abstractions, as well as for specific explanations about the ethics of plans. This framework can be used in further research on explainable automated planning.
Sectors Aerospace, Defence and Marine; Agriculture, Food and Drink; Construction; Environment; Healthcare; Manufacturing, including Industrial Biotechnology; Transport

URL https://scholar.google.co.uk/citations?user=yEjXNsQAAAAJ&hl=en