How can we create a more just society with A.I.?

Lead Research Organisation: The Open University
Department Name: Faculty of Sci, Tech, Eng & Maths (STEM)

Abstract

Justice can be viewed as "objective" or as mediated through power [Chomsky & Foucault, 1971; Costanza-Chock, 2018]. Finding commonalities across different legal and ethical frameworks [Floridi & Cowls, 2019; Jobin et al., 2019] is an example of the former. In the latter view, justice is a "requirement" for inequitable societies, ensuring protection for those who are most harmed [Cugueró-Escofet & Fortin, 2014]. The difficulty in achieving this type of justice through A.I. is that A.I. is used primarily for classification and prediction [Vinuesa et al., 2020]. Growing evidence indicates that A.I. accelerates and compounds social bias, contributing to unequal distributions of power [O'Neil, 2016, p. 3; Noble, 2018; Benjamin]. "Trade-offs" in providing accurate and fair predictions also impact sub-populations disproportionately [Yu et al., 2020], meaning that people with multiple forms of marginalisation are more likely to be misunderstood by A.I. than those with normative characteristics [Costanza-Chock, 2018]. While legal and ethical frameworks exist that should govern the way we use A.I., minority voices remain under-represented [Buolamwini & Gebru, 2018; Costanza-Chock, 2018; Magalhães & Couldry, 2020] and there are few structures for enforcement and accountability [Mittelstadt, 2019]. We need to rethink how A.I. contributes to justice as a relational concept, one that includes dimensions of power and marginalisation. My proposal draws together the cultural, technical, and socio-technical expertise necessary to extend our current notions of justice in empirical research on A.I. for social good (AI4SG).
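To make the disproportionate-impact point concrete, the brief sketch below (illustrative only; the synthetic scores, group labels and threshold are assumptions made for the example, not materials from the proposal) shows how a single decision threshold chosen for overall accuracy can leave sub-populations, and especially people at the intersection of two minoritised attributes, with far higher false negative rates than the headline figure suggests.

```python
# Illustrative sketch with assumed synthetic data: disaggregating the error
# of a single-threshold classifier by sub-population.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group_a = rng.random(n) < 0.15          # minoritised attribute A
group_b = rng.random(n) < 0.15          # minoritised attribute B
y = rng.random(n) < 0.5                 # true label

# Scores are assumed to be noisier for minoritised groups, a common effect
# of under-representation in training data.
noise = 0.15 + 0.25 * group_a + 0.25 * group_b
score = np.where(y, 0.7, 0.3) + rng.normal(0.0, noise)

pred = score > 0.5                      # threshold tuned for overall accuracy

def false_negative_rate(mask):
    positives = mask & y                # group members with a true positive label
    return float((~pred)[positives].mean())

for name, mask in [
    ("overall", np.ones(n, dtype=bool)),
    ("group A only", group_a & ~group_b),
    ("group B only", group_b & ~group_a),
    ("A and B (intersection)", group_a & group_b),
]:
    print(f"{name:>24}: false negative rate = {false_negative_rate(mask):.2f}")
```

Under these assumptions the overall false negative rate looks modest while the rate for the intersectional sub-population is several times higher, illustrating how acceptable-looking trade-offs fall hardest on those with multiple forms of marginalisation.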

To start with, the core team will develop a conceptual model of A.I. and "justice" that includes a) the different definitions of justice used to frame A.I. tasks and evaluate their efficacy, b) the questions that can be answered under each definition, and c) the trade-offs that are deemed acceptable in the process. The research team will map scholarly literature from AI4SG to the ethical, legal or political frameworks that underpin the research, identifying gaps or conflicts in how justice is operationalised within AI4SG compared with other social justice models. In particular, we will explore two questions: are different positions on justice incompatible with A.I.? Can we identify new pathways for justice to emerge?

To extend our conceptual model, we will conduct 3 case studies in which minority interests are ignored within specific A.I. tasks: 1) non-binary people in gender-based analyses of sexism, 2) discriminatory deplatforming of sex workers or artists through content moderation, and 3) shadow-banning of activists as part of counter-terrorism approaches. The case studies will explore the conflicts between these communities' concepts of justice and the A.I. task, and what alternative solutions exist. They will also contribute to the global problem of tackling online harm and to the use of A.I. techniques to help identify and classify relevant cases.

Finally, to test alternative solutions, a multi-sectoral Advisory Board of A.I. and community experts will be brought together to create a design challenge for A.I. researchers. Issued through 2 workshops at top-level A.I. conferences, the challenge will be to prioritise marginalised perspectives. The outputs of the challenge and their evaluation will inform a set of guidelines for dealing with errors and trade-offs in AI4SG.

Our contribution is to a) expose connections between how A.I. researchers define justice and which justice questions we attend to in AI4SG; b) reflect on which societies benefit from A.I.; and c) influence and inspire researchers to question the assumptions of A.I. research around acceptable trade-offs and errors. This research will bring together social scientists, community experts and A.I. researchers to explore what new lines of inquiry can be opened by focusing on maximising the benefits of A.I. for marginalised groups.

Publications

 
Description Ecology of AI Impact 
Organisation Trilateral Research and Consulting LLP
Country United Kingdom 
Sector Private 
PI Contribution Our team is making networking inroads with different groups approaching the question of AI and its impacts from Queer, Indigenous and Black feminist perspectives. We are seeking new paradigms for considering the impact of AI technology that do not originate in Western European philosophical ideas of ethics, are not tied to nation-state politics, which can be unfair and asymmetrical (as is the case for AI for social good), and cannot be pushed into the realm of cultural subjectivity. In this first collaboration, we have brokered a partnership with the only institution working on critical ecology in a way that is directly relevant to our project, namely a "whole systems" view of ecology that treats marginalisation and oppression as ecological impacts with repercussions for other parts of the ecosystem. It was our team's innovation to apply this way of thinking to the impacts of Artificial Intelligence on the whole system of organisms, populations, communities, the ecosystem and the biosphere, with special attention to the role of injustice, power and privilege in shaping the future impacts of AI as a socio-technical assemblage. Our team has developed a set of workshops to flesh out this approach further. One workshop has been accepted and one is pending.
Collaborator Contribution The School of Computing and Communications at the OU has expertise in Critical Systems Thinking and Decolonial AI. This team has collaborated on the development of the workshop series and will serve on its Program Committee. The Critical Ecology Lab is educating our team in a new approach to ecology that considers the impacts of injustice on the planet. Members of the lab will provide the keynote discussions for the workshop, introducing the concept of "critical" ecology so that we can apply it to our case of AI and its impacts. Trilateral Research has expertise in seductive surveillance and privacy and will serve on the Program Committee for our workshop series.
Impact Our workshop on the Ecology of AI Impacts is expected to take place in June (confirmed) and in August (pending). The collaboration is multi-disciplinary, including researchers from sociology, education, Explainable AI, decolonial theory and critical studies, and ecology.
Start Year 2023
 
Description Seminar (Warwick University ERCs) 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Postgraduate students
Results and Impact Warwick University's Secure Cyber Systems Research Group is holding a series of seminars and training opportunities for its early career researchers who are women from Black, Asian and other ethnic backgrounds viewed as minorities in the United Kingdom. I delivered a seminar on developing and communicating research ideas that are exciting and "sticky". I also described my experience of applying for a UKRI Future Leaders Fellowship and shared parts of my proposal with the participants. The participants reported that they appreciated the concrete advice and tools delivered in this seminar. It also allowed our fellowship project to network with future collaborators.
Year(s) Of Engagement Activity 2023