Trust and Explainable AI in Human-Machine Interaction

Lead Research Organisation: University of Manchester
Department Name: Computer Science

Abstract

Trust in collaborative human-machine interaction (HMI) is a two-way process: one direction depends on the user's perception of the robot/machine's capabilities, whilst the other concerns the robot's perception of, and trust in, the human's intentions and goals. In both cases, trust relies on each agent having a "Theory of Mind" (ToM) of the other: the capability to infer implicitly the other agent's beliefs, intentions and goals. ToM has been shown to be linked to trust, both in people and in HMI systems (e.g. Gaudiello et al. 2011; Zanatto et al., in press).

Recent approaches in HMI have proposed computational models of artificial ToM for trust. For example, Cangelosi and collaborators (Vinanzi et al. 2019; Patacchiola and Cangelosi, in review) have developed artificial ToM models for robots based on probabilistic machine learning methods (e.g. belief networks). In parallel, explainable AI (XAI) systems have been proposed for transparent intelligent systems, especially in the field of health informatics, but with only a few applications to robotics (Anjomshoae et al. 2019; Wachter et al. 2017).
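
To make the belief-network style of intention reading concrete, the sketch below shows a single Bayesian update in Python: the robot maintains a belief over the human's intention and revises it from observed actions via Bayes' rule. The intentions, actions and probabilities are illustrative assumptions for a joint-manipulation setting, not the model of Vinanzi et al. (2019).

    # A minimal Bayesian intention-inference sketch. All names and numbers
    # below are invented for illustration only.

    # Robot's prior belief over the human's intention in a joint task.
    prior = {"hand_over_part": 0.5, "assemble_alone": 0.5}

    # Likelihood P(observed action | intention), hand-coded for illustration.
    likelihood = {
        "reach_towards_robot": {"hand_over_part": 0.8, "assemble_alone": 0.2},
        "reach_towards_bench": {"hand_over_part": 0.1, "assemble_alone": 0.7},
    }

    def update_belief(belief, observation):
        """One Bayes-rule update of the belief over intentions."""
        posterior = {i: belief[i] * likelihood[observation][i] for i in belief}
        norm = sum(posterior.values())
        return {i: p / norm for i, p in posterior.items()}

    belief = prior
    for obs in ("reach_towards_robot", "reach_towards_robot"):
        belief = update_belief(belief, obs)
    print(belief)  # ~{'hand_over_part': 0.94, 'assemble_alone': 0.06}

After two observations of the human reaching towards the robot, the belief concentrates on the hand-over intention (roughly 0.94), illustrating how an implicit ToM estimate sharpens with evidence.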

This iCASE PhD project aims to develop novel, explainable ToM models for human-robot interaction. Integrating probabilistic robot ToM models with explainable AI methods offers the opportunity to improve trust in collaborative HMI scenarios by adding a component of "explicit" ToM building and updating, complementing existing "implicit" models of intention reading. Moreover, explainable AI interaction about the machine's decision-making process can allow the interacting agents to repair their ToM, e.g. in uncertain and vague situations, and when errors are produced. These explainable ToM models can contribute to human-cobot (collaborative robot) interaction for joint manipulation tasks within a flexible manufacturing scenario, or in other HMI scenarios relevant to BAE Systems. The project thus directly contributes to the topic of uncertainty, vagueness and trust in HMI, by linking trust and ToM modelling with explainable AI for uncertain situations.
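
One way to picture the "explicit" repair step is an explanation-and-confirmation loop layered on top of the belief update sketched above: when the robot's belief over intentions is too uncertain, it verbalises its current estimate and lets the human confirm or correct it. This is a minimal sketch under assumed names (ask_user is a hypothetical dialogue callback, and the entropy threshold is arbitrary), not the project's proposed method.

    import math

    def entropy(belief):
        """Shannon entropy (bits) of the belief over intentions."""
        return -sum(p * math.log2(p) for p in belief.values() if p > 0)

    def explain_and_repair(belief, ask_user, threshold=0.9):
        """If the belief is too uncertain, explain it and let the human
        confirm or correct it (explicit ToM repair)."""
        if entropy(belief) < threshold:
            return belief  # confident enough: keep the implicit estimate
        top = max(belief, key=belief.get)
        confirmed = ask_user(
            f"I believe you intend to '{top}' "
            f"(confidence {belief[top]:.0%}). Is that right?"
        )
        if confirmed:
            # Sharpen the belief towards the confirmed intention.
            return {i: 0.95 if i == top else 0.05 / (len(belief) - 1)
                    for i in belief}
        # Correction: move most of the mass to the other intentions.
        rest = {i: p for i, p in belief.items() if i != top}
        norm = sum(rest.values())
        return {i: 0.05 if i == top else 0.95 * rest[i] / norm
                for i in belief}

    # Example: a maximally uncertain belief triggers a clarifying question.
    belief = {"hand_over_part": 0.5, "assemble_alone": 0.5}
    belief = explain_and_repair(belief, ask_user=lambda msg: True)
    print(belief)  # {'hand_over_part': 0.95, 'assemble_alone': 0.05}

Gating the dialogue on belief entropy means the robot only interrupts the human when its implicit ToM estimate is genuinely ambiguous, which is where explanation-based repair adds value.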

Publications


Studentship Projects

Project Reference | Relationship | Related To | Start | End | Student Name
EP/W522065/1 | | | 01/10/2021 | 30/09/2026 |
2859094 | Studentship | EP/W522065/1 | 01/10/2021 | 30/09/2024 | Wolodymyr Krywonos