Formalising and evaluating social trust in human-robot partnerships

Lead Research Organisation: University of Oxford
Department Name: Computer Science

Abstract

Rapid advances in robotics and artificial intelligence have led to robots with increasing degrees of autonomy being deployed at scale, for example as home assistive robots, in driverless cars and in robot-assisted surgery. Autonomous robots are independent decision makers and will need to interact and work in partnership with humans, for example in semi-autonomous driving or in firefighter teams that include robotic agents. It is therefore important that they understand the social and ethical context in which they operate. Human partnerships are guided by trust, a subjective notion that evolves with the interactions between agents. Inappropriate trust in autonomous systems can lead to misuse, as demonstrated by the recent fatal Tesla crash, or to disuse. Trust has been studied in a variety of contexts, including reputation- and credential-based trust in e-commerce, but few works have considered social trust, the human notion of trust needed to form human-robot partnerships. One exception is the theoretical framework for reasoning about trust proposed in (http://qav.comlab.ox.ac.uk/bibitem.php?key=HK17), where a probabilistic temporal logic with novel cognitive and trust operators is formulated and its model checking complexity is analysed.
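To convey the flavour of such a logic, the following is a minimal illustrative sketch, not the exact definitions of the HK17 paper: the operator symbols \mathbb{T} (trust) and \mathbb{B} (belief), and the way they are combined, are assumptions made here for exposition. A competence-style trust operator could nest a subjective belief operator around a probability:

\[
  % illustrative operators, assumed for exposition; not the HK17 definitions
  s \models \mathbb{T}^{\bowtie p}_{A,B}\,\varphi
  \;\iff\;
  \mathbb{B}_A\big(\Pr\nolimits_B(\varphi)\big)(s) \bowtie p
\]

read as: in state s, agent A trusts agent B to degree \bowtie p with respect to goal \varphi whenever A's belief about the probability that B brings about \varphi satisfies the bound \bowtie p, for \bowtie \in \{<, \le, \ge, >\}.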
This proposal aims to develop a practical implementation of the above framework for reasoning about trust that is powerful enough to capture autonomy, subjective preferences and rational behaviour. A key feature will be the evaluation of trust, achieved through probabilistic model checking. The main uses of the framework will be to (i) guide human trust in robots, (ii) enable the programming of robots with subjective social trust, (iii) integrate with data about dynamical interaction, and (iv) provide information about trust-based decisions that can inform analyses of accountability in case of accidents. The project is rooted in temporal logic, which is used to express trust. In particular, we plan to revise the logic's semantics to ensure it is consistent, intuitive and expressive. Evaluating trust formulas relies on model checking techniques, so developing our tool will involve implementing model checking algorithms or integrating with existing model checkers, for example the PRISM probabilistic model checker. We also envisage cooperation with industrial partners (such as manufacturers of robots or driverless cars) to validate our framework and investigate how it can be used in real-life scenarios. A successful outcome of the project would be a complete framework and its implementation, which could serve as a reference for software developers programming autonomous systems and be used by humans to guide their level of reliance on robots.
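To make the model checking step concrete, the following is the standard semantics of the PCTL probabilistic operator, which tools such as PRISM decide over Markov models; reducing trust formulas to queries of this shape is part of the proposed work, so the pairing of the two is an assumption rather than an established result:

\[
  % standard PCTL semantics; the reduction of trust formulas to such queries is proposed, not established
  s \models \mathrm{P}_{\bowtie p}[\psi]
  \;\iff\;
  \Pr\nolimits_s\{\pi \in \mathrm{Paths}(s) \mid \pi \models \psi\} \bowtie p
\]

For instance, a hypothetical PRISM property P>=0.9 [ F<=10 "handover_success" ] asks whether, from the current state, a state labelled "handover_success" (an assumed label for a successful human-robot handover) is reached within 10 steps with probability at least 0.9.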
We believe the proposed research fits very well with EPSRC's Cross-ICT priorities for 2017-2020. Our main objective - improving the interaction between humans and robots and providing guidance for humans on how to use autonomous systems safely - contributes not only to the People at the Heart of ICT priority, but also to the Safe and Secure ICT priority, by reducing the risk and unpredictability involved in human-robot interaction.

Studentship Projects

Project Reference  Relationship  Related To    Start       End         Student Name
EP/N509711/1                                   01/10/2016  30/09/2021
1896701            Studentship   EP/N509711/1  01/10/2017  30/09/2020  Maciek Olejnik