How can we create a more just society with A.I.?

Lead Research Organisation: The Open University
Department Name: Faculty of Sci, Tech, Eng & Maths (STEM)

Abstract

Justice can be viewed as "objective" or as mediated through power [Chomsky & Foucault, 1971; Costanza-Chock, 2018]. Finding commonalities across different legal and ethical frameworks [Floridi & Cowls, 2019; Jobin et al., 2019] is an example of the former. In the latter view, justice is a "requirement" for non-equitable societies, ensuring protection for those most harmed [Cugueró-Escofet & Fortin, 2014]. The difficulty in achieving this type of justice through A.I. is that A.I. is used primarily for classification and prediction [Vinuesa et al., 2020]. Growing evidence indicates that A.I. accelerates and compounds social bias, contributing to unequal distributions of power [O'Neil, 2016, p. 3; Noble, 2018; Benjamin, 2019]. "Trade-offs" in providing accurate and fair predictions also affect sub-populations disproportionately [Yu et al., 2020], meaning that people facing multiple forms of marginalisation are more likely to be misunderstood by A.I. than those with normative characteristics [Costanza-Chock, 2018]. While legal and ethical frameworks exist that should govern how we use A.I., minority voices remain under-represented [Buolamwini & Gebru, 2018; Costanza-Chock, 2018; Magalhães & Couldry, 2020] and there are few structures for enforcement and accountability [Mittelstadt, 2019]. We need to rethink how A.I. contributes to justice as a relational concept, one that includes dimensions of power and marginalisation. My proposal draws together the cultural, technical, and socio-technical expertise necessary to extend our current notions of justice in empirical research on A.I. for social good (AI4SG).
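To make the trade-off concrete, the following is a minimal, purely illustrative sketch in Python; it is not drawn from the proposal, and all group sizes, score distributions, and the threshold are invented assumptions. It shows how a single decision threshold tuned for overall accuracy can produce a far higher error rate for a smaller, less-normative subgroup:

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical model scores for the true-positive cases of two subgroups.
# The smaller subgroup's positives score lower, e.g. because the training
# data under-represents them (an assumption for illustration only).
majority_positives = rng.normal(0.70, 0.10, 10_000)
minority_positives = rng.normal(0.55, 0.10, 300)

THRESHOLD = 0.60  # one global cut-off, chosen for overall accuracy

def false_negative_rate(scores: np.ndarray, threshold: float) -> float:
    """Fraction of true positives falling below the decision threshold."""
    return float(np.mean(scores < threshold))

print(f"majority FNR: {false_negative_rate(majority_positives, THRESHOLD):.2f}")
print(f"minority FNR: {false_negative_rate(minority_positives, THRESHOLD):.2f}")
# Typical output: roughly 0.16 for the majority group versus 0.69 for the
# minority group; the same "acceptable" error budget falls overwhelmingly
# on the group the model knows least about.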

First, the core team will develop a conceptual model of A.I. and "justice" that includes a) the different definitions of justice used to frame A.I. tasks and evaluate their efficacy, b) the questions that can be answered under each definition, and c) the trade-offs deemed acceptable in the process. The research team will map scholarly literature from AI4SG to the ethical, legal, or political frameworks that underpin the research, identifying gaps or conflicts in how justice is operationalised within AI4SG compared with other social justice models. In particular, we will explore two questions: Are different positions on justice incompatible with A.I.? Can we identify new pathways for justice to emerge?

To extend our conceptual model, we will conduct three case studies in which minority interests are ignored within specific A.I. tasks: 1) the erasure of non-binary people in gender-based analyses of sexism; 2) the discriminatory deplatforming of sex workers and artists through content moderation; and 3) the shadow-banning of activists as part of counter-terrorism approaches. The case studies will explore the conflicts between these communities' concepts of justice and the A.I. task, and what alternative solutions exist. They will also contribute to the global problem of tackling online harm and to using A.I. techniques to help identify and classify relevant cases.

Finally, to test alternative solutions, a multi-sectoral Advisory Board of A.I. and community experts will be brought together to create a design challenge for A.I. researchers. Issued through two workshops at top-level A.I. conferences, the challenge will be to prioritise marginalised perspectives. The outputs of the challenge and their evaluation will inform a set of guidelines for dealing with errors and trade-offs in AI4SG.

Our contributions are to a) expose connections between how A.I. researchers define justice and which justice questions we attend to in AI4SG; b) reflect on which societies benefit from A.I.; and c) influence and inspire researchers to question the assumptions of A.I. research around acceptable trade-offs and errors. This research will bring together social scientists, community experts and A.I. researchers to explore what new lines of inquiry can be opened by focusing on maximising the benefits of A.I. for marginalised groups.

Publications

 
Description Policy Consultation (Policy Connect, UK Government)
Geographic Reach National 
Policy Influence Type Participation in a guidance/advisory committee
Impact These consultations resulted in the white paper "An Ethical AI Future: Guardrails & Catalysts to make Artificial Intelligence a Force for Good" (https://www.policyconnect.org.uk/research/ethical-ai-future-guardrails-catalysts-make-artificial-intelligence-force-good). The report called for international collaboration towards a global AI convention and watchdog; a national AI centre that would convene existing regulators to support agile regulation of AI; and the promotion of accountability for "doing no harm."
URL https://www.policyconnect.org.uk/research/ethical-ai-future-guardrails-catalysts-make-artificial-intelligence-force-good
 
Description "After AI Symposium" 
Organisation Arts Council England
Country United Kingdom 
Sector Public 
PI Contribution The Shifting Power UKRI Future Leaders Fellowship Team is co-organising the event with our former collaborator, R. Justin Hunt, who works as a Senior Relationship Manager with Arts Council England and as a Cultural Advocacy Fellow at the Mile End Institute of Queen Mary University of London. We were responsible for co-designing the event, recruiting speakers and participants, and identifying and recruiting program committee members. With the PC, we are advertising the event and will review abstracts, develop the final program, and co-facilitate the event with R. Justin Hunt.
Collaborator Contribution Our co-organiser R. Justin Hunt helped to co-design the event and recruit speakers and participants, and will review abstracts, develop the program, and co-facilitate the event with our team. Our partners, the Radical Methodologies Research and Enterprise Group at Bristol, helped to co-design the event and recruit speakers and participants, and have provided a member of their group to serve on the program committee.
Impact The "After AI" symposium is a first-of-its-kind, post-disciplinary symposium examining questions of who/what/when is "after AI". The symposium will address questions about the future of this technology and the consequences we expect it to have on people, society, the planet and beyond.
Start Year 2023
 
Description "After AI Symposium" 
Organisation Queen Mary University of London
Country United Kingdom 
Sector Academic/University 
PI Contribution The Shifting Power UKRI Future Leaders Fellowship Team is co-organising the event with our former collaborator, R. Justin Hunt, who works as a Senior Relationship Manager with Arts Council England and as a Cultural Advocacy Fellow at the Mile End Institute of Queen Mary University of London. We were responsible for co-designing the event, recruiting speakers and participants, and identifying and recruiting program committee members. With the PC, we are advertising the event and will review abstracts, develop the final program, and co-facilitate the event with R. Justin Hunt.
Collaborator Contribution Our co-organiser R. Justin Hunt helped to co-design the event and recruit speakers and participants, and will review abstracts, develop the program, and co-facilitate the event with our team. Our partners, the Radical Methodologies Research and Enterprise Group at Bristol, helped to co-design the event and recruit speakers and participants, and have provided a member of their group to serve on the program committee.
Impact The "After AI" symposium is a first-of-its-kind, post-disciplinary symposium examining questions of who/what/when is "after AI". The symposium will address questions about the future of this technology and the consequences we expect it to have on people, society, the planet and beyond.
Start Year 2023
 
Description "After AI Symposium" 
Organisation University of Bristol
Country United Kingdom 
Sector Academic/University 
PI Contribution The Shifting Power UKRI Future Leaders Fellowship Team is co-organising the event with our former collaborator, R. Justin Hunt, who works as a Senior Relationship Manager with Arts Council England and as a Cultural Advocacy Fellow at the Mile End Institute of Queen Mary University of London. We were responsible for co-designing the event, recruiting speakers and participants, and identifying and recruiting program committee members. With the PC, we are advertising the event and will review abstracts, develop the final program, and co-facilitate the event with R. Justin Hunt.
Collaborator Contribution Our co-organiser R. Justin Hunt helped to co-design the event and recruit speakers and participants, and will review abstracts, develop the program, and co-facilitate the event with our team. Our partners, the Radical Methodologies Research and Enterprise Group at Bristol, helped to co-design the event and recruit speakers and participants, and have provided a member of their group to serve on the program committee.
Impact The "After AI" symposium is a first-of-its-kind, post-disciplinary symposium examining questions of who/what/when is "after AI". The symposium will address questions about the future of this technology and the consequences we expect it to have on people, society, the planet and beyond.
Start Year 2023
 
Description Ecology of AI Impact 
Organisation Trilateral Research and Consulting LLP
Country United Kingdom 
Sector Private 
PI Contribution Our team is making networking inroads with different groups approaching the question of AI and its impacts from Queer, Indigenous and Black feminist perspectives. We are seeking new paradigms for considering the impact of AI technology that do not originate in Western European philosophical ideas of ethics, are not tied to nation-state politics, which can be unfair and asymmetrical (as is the case for AI for social good), and cannot be pushed into the realm of cultural subjectivity. In this first collaboration, we have brokered a partnership with the only institution working on critical ecology in a way specifically relevant to our project, namely the "whole systems" view of ecology that treats marginalisation and oppression as ecological impacts with repercussions for other parts of our ecosystem. It was our team's innovation to apply this way of thinking to the impacts of Artificial Intelligence - on the whole system of organisms, populations, communities, the ecosystem and biosphere - with special attention to the role of injustice, power and privilege in shaping the future impacts of AI as a socio-technical assemblage. Our team has developed a set of workshops to flesh out this approach further. Last reporting period, we had one workshop accepted and one pending. This year, we have run one workshop and will run the second annual Ecology of AI workshop in June 2024.
Collaborator Contribution The School of Computing and Communications at the OU has expertise in Critical Systems Thinking and Decolonial AI; this team continues to collaborate on the development of the workshop series and will serve on its Program Committee. The Critical Ecology Lab is educating our team in a new approach to ecology that considers the impacts of injustice on the planet. Members of the lab will provide the keynote discussions for the workshop, introducing the concept of "critical" ecology so that we can apply it to our case of AI and its impacts. Trilateral Research has expertise in seductive surveillance and privacy, and will serve on the program committee for our workshop series.
Impact Our second Ecology of AI Impacts workshop is confirmed to take place in June 2024. The collaboration is multi-disciplinary, including researchers from sociology, education, Explainable AI, decolonial theory and critical studies, and ecology. This partnership also resulted in the "After AI" symposium, a post-disciplinary meeting organised in collaboration with R. Justin Hunt of Arts Council England and the Radical Methodologies Research Group at Bristol.
Start Year 2023
 
Description "How can we create a more just society with AI?" (GenAI Community of Practice, Open University UK) 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact The GenAI Community of Practice (CoP) meets regularly to discuss the role of GenAI in education, industry and politics. The community consists of approximately 100 researchers, practitioners, educators and general-interest groups. At this meeting, I discussed the ethical aspects of generative AI over the medium and long term. After this presentation, I was contacted by two separate groups within the Open University: one working on AI-enabled pedagogy through the EDIA lens, and another creating a framework for learning design that addresses the use of generative AI. I have since brought these two groups together to assist educators at the OU in navigating the pedagogical, ethical and training aspects of working with GenAI.
Year(s) Of Engagement Activity 2023
 
Description Artificial Intelligence and Justice (iTV interview, national news) 
Form Of Engagement Activity A press release, press conference or response to a media enquiry/interview
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact In summer 2023, I was interviewed by ITV News on the dangers of artificial intelligence. ITV.com receives approximately 45 million visits per month, so this was a high-value engagement. The interview raised my profile and resulted in a number of inquiries from within the OU and beyond.
Year(s) Of Engagement Activity 2023
URL https://www.itv.com/news/anglia/2023-06-15/could-the-east-benefit-from-uks-move-to-become-global-lea...
 
Description Seminar (Warwick University ECRs) 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Postgraduate students
Results and Impact Warwick University's Secure Cyber Systems Research Group is holding a series of seminars and training opportunities for its early career researchers who are women from Black, Asian and other ethnic backgrounds viewed as minorities in the United Kingdom. I delivered a seminar on developing and communicating research ideas that are exciting and "sticky". I also shared my experience of applying for a UKRI Future Leaders Fellowship, walking participants through parts of my proposal. Participants reported that they appreciated the concrete advice and tools delivered in this seminar, which also allowed our fellowship project to network with future collaborators.
Year(s) Of Engagement Activity 2023