Algorithmic bias: patterns, consequences and alternatives

Lead Research Organisation: University of Sheffield
Department Name: Sheffield Methods Institute

Abstract

Data-driven technologies are transforming society as governments, businesses and other sectors increasingly adopt automated and algorithmic systems in search of greater efficiency in the delivery of their services. Among these actors, government departments, often resource-poor and in need of effective, streamlined, automated systems, are increasingly turning to digital technologies. But data-driven and algorithmic systems are far from straightforward. As a number of researchers have noted, they can discriminate in opaque ways through bias written into the systems, whether intentional or unintentional. Government departments providing support and services to the most vulnerable in society wish to avoid such discrimination, but how to do so requires investigation. Similarly, more knowledge is needed about the expectations of citizens as public service users, and about related questions of ethics and trust. Using a combination of methods, this PhD project involves working closely with one such government department to explore algorithmic bias, its risks and consequences, alternative approaches, communicating about algorithmic processes with service users, and integrating alternative, or 'fairer', processes into existing workflows. The partner on this PhD project is the Department for Work and Pensions (DWP), which is responsible for welfare, pensions and child maintenance policy.
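As a hypothetical illustration (not drawn from the project itself), one common way to quantify the kind of discrimination the abstract describes is demographic parity: comparing the rate of favourable automated decisions across groups of service users. A minimal sketch, assuming binary decisions and a single group attribute:

```python
def selection_rate(decisions, groups, group):
    """Share of favourable (1) decisions received by one group."""
    picked = [d for d, g in zip(decisions, groups) if g == group]
    return sum(picked) / len(picked)

def demographic_parity_difference(decisions, groups):
    """Largest gap in selection rates across groups; 0 means parity."""
    rates = [selection_rate(decisions, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy data: 1 = claim approved by the automated system, 0 = rejected.
decisions = [1, 1, 0, 1, 0, 0, 1, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = demographic_parity_difference(decisions, groups)
print(f"Selection-rate gap: {gap:.2f}")  # group A: 0.75, group B: 0.25 -> 0.50
```

Demographic parity is only one of several competing fairness criteria (others condition on qualification or outcome), which is part of why "fairer" processes are a research question rather than a solved problem.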

Publications


Studentship Projects

Project Reference   Relationship   Related To     Start        End          Student Name
ES/P000401/1                                      01/10/2017   30/09/2024
2120535             Studentship    ES/P000401/1   01/10/2018   05/03/2023   Hadley Beresford