Prosthetic or Supervisor?: AI and the Remaking of the British State

Lead Research Organisation: Birkbeck, University of London
Department Name: Politics

Abstract

Artificial intelligence (AI) systems are effecting a fundamental change in decision-making across public policy in the UK, reorganising even sensitive areas such as children's social care. While the use of AI has been studied in fields such as criminal justice, there has been less scholarly or political attention to its use in social care, despite the inherent vulnerability of those affected and the potential for biases based on socioeconomic status, ethnicity, and gender.
More broadly, AI systems move beyond the traditional roles of statistics in enumerating populations and describing the levels of social phenomena, and beyond the more recent causal evaluation of policies' average outcomes in the British "What Works" agenda. Instead, AI systems allow for the personalisation of risk: decisions are made, and sometimes explained, on the basis of a single person's data rather than bureaucratic rules that apply to all persons who meet fixed criteria, even if the systems "learn" from historical data.
This project therefore examines why public servants choose to adopt AI systems given these differences from prior practice, and what the implications are for the ethical use of AI in public policy, with a focus on children's social care. It addresses ESRC Priority 2 (Digital Society), by ensuring that social science guides the adoption of AI technologies, and Priority 5 (Human Behaviour), by linking psychology and policy.
The rationale is that, despite assurances that AI will improve state decision-making, such claims are not always evidence-based and are rarely publicly debated. For instance, a report I co-authored for the What Works Centre for Children's Social Care (Clayton, Gibbons, Schoenwald, Surkis, and Sanders 2020) found that when a machine learning model identifies a child as at risk, it is wrong six times out of ten; if such models were deployed, a significant number of children and families could therefore suffer the consequences of wrongful intervention. The political stakes of this shift are high and underline the growing need for regulation of AI systems used by the state. Regulation is urgently needed to protect citizens from the negative consequences of misused or inaccurate systems while allowing them to benefit from appropriate uses of AI.
At a disciplinary level, the rationale for the thesis is to improve understanding of how technology shapes, and is shaped by, state practice. In political theory, for instance, technology is typically assumed away, often by analysing a given society with, as Gabriel (2022) puts it, a "specific sociotechnical character (that is, one with a functioning legal system, economic division of labor, capacity for taxation, and so on)" (219).
Finally, the timing of this project is opportune, as AI systems are firmly in policy focus: the UK government is consulting on new AI regulatory principles, the Netherlands has launched a national algorithm register, and the Council of the European Union agreed its position on the proposed Artificial Intelligence Act in December 2022. However, all of these efforts fall far short of the World Bank's (2021) call for a "new social contract for data", not least because new regulations tend to focus on the safe development of AI systems rather than on the purposes for which AI systems might justly be used. Similarly, the focus on social care is apt given the adoption of recommendations from the far-reaching Independent Review of Children's Social Care, led by Josh MacAlister and published in 2022.

Publications


Studentship Projects

Project Reference   Relationship   Related To     Start        End          Student Name
ES/P000592/1                                      01/10/2017   30/09/2027
2862705             Studentship    ES/P000592/1   01/10/2023   30/09/2026   Daniel Gibbons