An Economics Approach to Machine Learning and Algorithm Bias

Lead Research Organisation: Heriot-Watt University
Department Name: Sch of Social Sciences

Abstract

Machine learning (ML) is quickly becoming synonymous with automated decision-making. As the capabilities of these statistical models have expanded, they are increasingly used in high-stakes areas of life to supplement or even replace expert judgement, such as in finance, law, unemployment benefits, and child welfare. By harnessing correlations between quantifiable features, ML algorithms produce a predicted metric of human behaviour (Corbett-Davies and Goel 2018). However, a growing body of findings shows that ML algorithms can and have produced systematically worse outcomes for certain demographic groups, often reinforcing a pre-existing inequality, bias, or lack of representation (Obermeyer et al. 2019; Buolamwini and Gebru 2018; Fuster et al. 2020; Hullman 2021; Raghavan et al. 2020).
In response, a rapidly growing literature within machine learning attempts to measure the extent to which these systems are fair, or the extent to which they propagate existing patterns of discrimination and inequality (Finocchiaro et al. 2020). Mehrabi et al. (2021) argue that one of the biggest challenges facing the field is synthesising a robust definition of fairness, as existing definitions tend to be limited in their treatment of long-term effects and of individuals' complex preferences and strategic actions (Finocchiaro et al. 2020). Raghavan et al. (2020) argue that current definitions of fairness can lend undue credibility to vague claims of unbiasedness. As the use of ML for automated decisions becomes still more widespread, the demand for fair and accountable algorithms will only increase. The large literature in economics and behavioural science on fairness and discrimination can contribute to the analysis of algorithmic bias.
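The limitations of individual fairness definitions can be made concrete: two of the most common statistical criteria can disagree on the very same predictions. The sketch below (hypothetical toy data, not drawn from this project) contrasts demographic parity with equal opportunity; note that a perfectly accurate classifier violates the former whenever the groups' base rates of positive outcomes differ.

```python
# Two standard statistical fairness definitions, computed on toy data.
# Groups, labels, and predictions here are purely illustrative.

def demographic_parity_gap(pred, group):
    """Difference in positive-prediction rates between groups 0 and 1."""
    rate = lambda g: sum(p for p, gr in zip(pred, group) if gr == g) / group.count(g)
    return abs(rate(0) - rate(1))

def equal_opportunity_gap(pred, group, label):
    """Difference in true-positive rates between groups 0 and 1."""
    def tpr(g):
        pos = [p for p, gr, l in zip(pred, group, label) if gr == g and l == 1]
        return sum(pos) / len(pos)
    return abs(tpr(0) - tpr(1))

# Toy data: group 1 has a higher base rate of positive outcomes.
group = [0, 0, 0, 0, 1, 1, 1, 1]
label = [1, 0, 0, 0, 1, 1, 1, 0]
pred  = [1, 0, 0, 0, 1, 1, 1, 0]  # a perfectly accurate classifier

print(demographic_parity_gap(pred, group))        # 0.5: parity is violated
print(equal_opportunity_gap(pred, group, label))  # 0.0: equal opportunity holds
```

The same predictions thus pass one fairness test and fail another, which is one reason single-metric claims of unbiasedness can mislead.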
Main Research Objectives
1. To link the large economics, behavioural science, and moral philosophy literature on theories of fairness and discrimination to the ML literature on algorithmic bias.
2. To contribute to the nascent literature that builds fair machine learning algorithms by integrating economic notions of fairness as constraints.
3. To embed ML bias within a welfare economics framework, in order to understand dynamic and equilibrium effects and to explore the available regulatory and policy options.
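As an illustration of objective 2 (a minimal sketch under assumed toy data, not the project's actual method), one simple way to impose a demographic-parity constraint is post-processing: choosing group-specific decision thresholds so that each group receives the same share of positive predictions from a model's scores.

```python
# Hypothetical sketch: enforce equal positive-prediction rates across
# groups by picking a separate score threshold for each group.

def parity_thresholds(scores, group, target_rate):
    """Per-group score thresholds giving each group roughly the
    target positive-prediction rate."""
    thresholds = {}
    for g in set(group):
        s = sorted((sc for sc, gr in zip(scores, group) if gr == g), reverse=True)
        k = round(target_rate * len(s))  # positives to allow in this group
        thresholds[g] = s[k - 1] if k > 0 else float("inf")
    return thresholds

# Toy model scores for two groups of four individuals each.
scores = [0.9, 0.7, 0.4, 0.2, 0.8, 0.6, 0.3, 0.1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]

th = parity_thresholds(scores, group, target_rate=0.5)
pred = [int(sc >= th[gr]) for sc, gr in zip(scores, group)]
# Each group now has the same 50% share of positive predictions.
```

More sophisticated approaches build the constraint into training itself, but threshold adjustment already exposes the core trade-off such constraints involve: parity is bought by treating identical scores differently across groups.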

Publications


Studentship Projects

Project Reference  Relationship  Related To    Start       End         Student Name
ES/P000681/1                                   01/10/2017  30/09/2027
2759740            Studentship   ES/P000681/1  12/09/2022  11/12/2025  Joseph Paul