Calibrating Trust between Humans and Autonomous Systems

Lead Research Organisation: University of Glasgow
Department Name: School of Psychology

Abstract

The project will investigate which parameters influence trust between artificial intelligences and human users. Our partner for this project, Qumodo, is a company dedicated to helping people interface with artificial intelligence; we will examine their Intelligent Iris system. Intelligent Iris is a modular data analysis system designed to help human users extract meaningful results from large sets of data, including images (such as photos, medical scans and military sensor data). The visual nature of this task makes it challenging, as humans bring a wealth of social expectancies and uniquely human visual processes to understanding an image. Fostering trust within human-machine teams is expected to improve both mental health and productivity. Guided by recent research into trust from domains such as autonomous vehicles and social robotics, we will perform experiments to examine which parameters influence the calibration of trust when interacting with the image understanding software. We aim to advance a conceptual understanding of trust between human and machine and to identify effective strategies for adjusting system parameters so that trust is properly calibrated. These results will be valuable in advancing product development at Qumodo and will also inform the wider debate over how to design intelligent systems.


Studentship Projects

Project Reference   Relationship   Related To     Start        End          Student Name
ES/P000681/1                                      01/10/2017   30/09/2027
1941720             Studentship    ES/P000681/1   01/10/2017   30/04/2021   Martin Ingram
 
Description When working with an autonomous system designed to label objects in a series of images, trust towards the system was heavily based on system performance: when performance was poor, trust was lower, and when performance was good, trust was higher. Trust was further improved by the presence of confidence information, but only in one format. Trust was significantly improved when people were presented with a numerical cue of system confidence, which told the user that the system was, for example, '12%', '54%' or '89%' confident that each of its decisions was correct. However, when given the choice, people did not show an explicit preference for numbers alone. Instead, they preferred this confidence information to be displayed in a more complex format, visualised in a bar graph, which gave them the most detailed form of confidence information. This suggests that when working with autonomous systems, people may prefer more transparency within the system's interface, illustrating how the system makes its decisions. We are currently following up these findings with a more detailed experiment, in which we will explore the implications further.
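
To make the two display formats concrete, the following is a minimal illustrative sketch in Python (not Qumodo's or the study's implementation) of how a labelling system's confidence might be rendered both as a bare numerical cue and as a more detailed bar-style visualisation. The function names, example labels and the decisions list are hypothetical; only the 12%, 54% and 89% confidence figures come from the findings above.

```python
def numeric_cue(confidence: float) -> str:
    """Render system confidence as a simple numerical cue, e.g. '54%'."""
    return f"{confidence:.0%}"


def bar_cue(confidence: float, width: int = 20) -> str:
    """Render the same confidence as a text bar graph, giving the user
    a more graded, detailed impression of the system's certainty."""
    filled = round(confidence * width)
    return "[" + "#" * filled + "-" * (width - filled) + f"] {confidence:.0%}"


if __name__ == "__main__":
    # Hypothetical label decisions, using the example confidence
    # levels mentioned in the findings (12%, 54%, 89%).
    decisions = [("cat", 0.12), ("bicycle", 0.54), ("car", 0.89)]
    for label, conf in decisions:
        print(f"{label:>8}  numeric: {numeric_cue(conf):>4}   bar: {bar_cue(conf)}")
```

Running the sketch prints each hypothetical decision with both cues side by side, mirroring the two formats participants compared.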
Exploitation Route These findings may help deepen our understanding of decision support information, particularly how it could be used to inform trust towards systems that work independently.
Sectors Digital/Communication/Information Technologies (including Software)