Embedding ethical uncertainty in algorithmic decision-making

Lead Research Organisation: University of Southampton
Department Name: Southampton Business School

Abstract

With the constant integration of technology into everyday life, an ever-growing range of decision-making processes is now being performed by non-human agents: intelligent software using algorithms to make such decisions, for example Facebook's use of artificial intelligence to remove "bad content" such as hate speech or terror-related posts. The benefits of using such algorithms in decision-making processes are evident in the increases in speed and efficacy they bring. In the legal field, for example, where it would have taken solicitors days to sift through numerous cases, such algorithms can radically speed up and improve the way cases are analysed and outcomes are predicted.

Despite the rise of artificial intelligence and the increasingly common and widespread use of such algorithms, a significant issue remains to be solved: artificial agents are not human, and so do not have all of the faculties which humans use to address ethical uncertainty. A fundamental aspect of human decision-making is that human agents use their moral judgement to help make "good" decisions. In the absence of such moral agency in algorithmic decision-making, there is a risk that the results of such processes may inadvertently lead to outcomes which are undesirable in their ethical consequences.

This research aims to explore how to embed ethical uncertainty into such algorithmic processes to help avoid such consequences.

Publications


Studentship Projects

Project Reference  Relationship  Related To    Start       End         Student Name
ES/P000673/1                                   01/10/2017  30/09/2027
2279911            Studentship   ES/P000673/1  01/10/2019  30/09/2023  Thomas Phipps