A Person-Centred Approach to Understanding Trust in Moral Machines

Lead Research Organisation: University of Kent
Department Name: Sch of Psychology

Abstract

Artificial intelligence (AI) is increasingly used to perform tasks with a moral dimension, such as prioritising
scarce medical resources. Debates rage about the ethical issues of AI, how we should program ethical AI,
and which ethical values we should prioritise. But machine morality is as much about human moral
psychology as it is about the philosophical and practical issues of building artificial agents. To reap the
benefits of AI, stakeholders need to be willing to use, adopt, and rely on these systems: they must trust the
AI agents.
TRUST-AI draws on psychology and philosophy to explore how and when humans trust AI agents that act as
'moral machines'. Building on classic models of trust and recent theoretical work in moral psychology on
the complexity of trust in the moral domain, this five-year project explores: 1) the characteristics of AI agents
that predict trust; 2) the individual differences that make us more or less likely to trust AI agents; 3) the
situations in which we are more likely to 'outsource' moral decisions to AI agents; and 4) how these findings
should be used to design AI agents that warrant our trust.
My approach is methodologically pluralistic, combining qualitative analysis and natural language processing,
behavioural economic games, and in-person behavioural experiments. I will develop a customised data collection
platform to run online experiments globally in at least three languages, alongside cross-cultural studies in
nine countries and a range of experiments with an estimated 29,000 participants across four work programmes
and 20 studies.
These findings will help us understand how, when, and why people trust AI agents, with important
theoretical and methodological implications for research on the antecedents and consequences of trust in moral
machines.
