Ethical AI for Decision Making Under Uncertainty

Lead Research Organisation: University of Exeter
Department Name: Mathematics

Abstract

Examples of the importance and power of AI are all around us. We see AI providing new data insights, and potentially having a much bigger say in our lives. Under the broadest possible definition, AI is the blend of statistics with computing, harnessing modern computing power and theoretical developments in statistics to provide insight and decision support for some of the hardest and largest problems we face. The current reality of AI, however, is a great deal of computing power and a vast suite of impressive algorithms that any junior data analyst could try (because the code is available), yet very little of the foundational underpinning needed to give confidence in their use for supporting decision making.

There is a great deal of interest in Ethical AI. What should the standards be on data use? When is an algorithm usable? What is an algorithm actually doing? These questions are largely studied in social science. Yet algorithms in data science rest on probability and statistical principles, so there should be a role for foundational statistical theory in underpinning the ethical use of AI.

This studentship will explore the foundational underpinnings of modern methods in AI. How is probability used, and what does it mean? Is a classifier acting with the same utilities as the responsible human decision maker? How can inferences made by AI be believed and owned by decision makers? These questions may be discussed within many disciplines, but to discuss them in terms of the foundations of mathematics is new. We will focus on AI for decision making, using decision-theoretic principles to infer what an algorithm's decisions, probabilities or recommendations would have to mean, under subjectivist, objectivist and falsificationist world views, for following the algorithm to be the "correct" decision.
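The point about utilities can be made concrete with a minimal, purely illustrative sketch (the scenario, numbers and function names are assumptions, not taken from the project): an algorithm's most probable label need not be the Bayes-optimal action for a decision maker whose utilities are asymmetric.

```python
# Illustrative sketch: the action maximising expected utility depends on the
# decision maker's utilities, so blindly following a classifier's most
# probable label may not be the "correct" decision for that decision maker.
# All probabilities and utilities below are invented for illustration.

def bayes_action(probs, utility):
    """Return the action maximising expected utility.

    probs:   dict mapping state -> probability (the algorithm's output)
    utility: dict mapping (action, state) -> utility for THIS decision maker
    """
    actions = {a for (a, _) in utility}
    def expected_utility(a):
        return sum(utility[(a, s)] * p for s, p in probs.items())
    return max(actions, key=expected_utility)

# A screening classifier reports P(disease) = 0.2, so its most probable
# label is "healthy"; a decision maker with a heavy loss for missed
# disease should still treat.
probs = {"disease": 0.2, "healthy": 0.8}
utility = {
    ("treat", "disease"): 0,    ("treat", "healthy"): -1,
    ("ignore", "disease"): -20, ("ignore", "healthy"): 0,
}
print(bayes_action(probs, utility))  # "treat": EU -0.8 beats EU -4.0
```

Here the classifier and the decision maker disagree precisely because the classifier's implicit 0-1 utilities are not the decision maker's.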
We will explore the extension of posterior belief assessment, a method for deriving subjectivist beliefs a posteriori from multiple Bayesian analyses when the prior/model is too complex to believe, to algorithms in AI. This will extend work begun during a preliminary master's project that reviewed Ethical AI and examined subjectivist interpretations of certain "Bayesian" neural network algorithms.
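The flavour of the idea can be sketched in a toy setting (this is our own minimal illustration under assumed, invented covariance judgements, not the project's method): run several tractable Bayesian analyses under candidate priors, none of which is fully believed, then treat the resulting posterior expectations as data in a Bayes linear adjustment of the quantity of interest.

```python
# Toy sketch of posterior belief assessment: several conjugate normal-normal
# analyses under different candidate priors yield posterior means D, which
# are then used as data in a Bayes linear adjustment of theta:
#   E_D(theta) = E(theta) + Cov(theta, D) Var(D)^{-1} (D - E(D))
# All numbers, including the covariance judgements, are illustrative.
import numpy as np

def posterior_mean_normal(prior_mean, prior_var, data, obs_var):
    """Posterior mean for an unknown normal mean theta (known obs_var)."""
    precision = 1.0 / prior_var + len(data) / obs_var
    return (prior_mean / prior_var + sum(data) / obs_var) / precision

data = [1.2, 0.8, 1.5, 1.1]   # observed sample (invented)
obs_var = 0.5                 # assumed known observation variance

# Candidate priors (mean, variance), none of which is fully believed.
candidate_priors = [(0.0, 1.0), (0.5, 2.0), (2.0, 0.5)]
D = np.array([posterior_mean_normal(m, v, data, obs_var)
              for m, v in candidate_priors])

E_theta = 1.0                  # genuine (partial) prior expectation of theta
E_D = np.full(3, 1.0)          # prior expectation of each posterior mean
cov_tD = np.full(3, 0.4)       # judged Cov(theta, D_i)
var_D = 0.3 * np.eye(3) + 0.2  # judged Var(D): individual + shared variation

adjusted = E_theta + cov_tD @ np.linalg.solve(var_D, D - E_D)
print(round(float(adjusted), 3))  # 1.202
```

The adjusted expectation pools the three analyses without requiring any one prior to be believed; in the studentship the "analyses" would instead be runs of an AI algorithm whose subjectivist interpretation is under study.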

Publications


Studentship Projects

Project Reference   Relationship   Related To     Start        End          Student Name
EP/V520317/1                                      01/10/2020   31/10/2025
2406050             Studentship    EP/V520317/1   01/10/2020   30/09/2024   Cassandra Bird