How can we create a more just society with A.I.?
Lead Research Organisation:
The Open University
Department Name: Faculty of Sci, Tech, Eng & Maths (STEM)
Abstract
Justice can be viewed as "objective" or mediated through power [Chomsky & Foucault, 1971; Costanza-Chock, 2018]. Finding commonalities across different legal and ethical frameworks [Floridi & Cowls, 2019; Jobin et al., 2019] is an example of the former. In the latter, justice is a "requirement" for non-equitable societies, ensuring protection for the most harmed [Cugueró-Escofet & Fortin, 2014]. The difficulty in achieving this type of justice through A.I. is that A.I. is used primarily for classification and prediction [Vinuesa et al., 2020]. Growing evidence indicates that A.I. accelerates and compounds social bias, contributing to unequal distributions of power [O'Neil, 2016, p. 3; Noble, 2018; Benjamin]. "Trade-offs" in providing accurate and fair predictions also impact sub-populations disproportionately [Yu et al., 2020], meaning that people with multiple forms of marginalisation are more likely to be misunderstood by A.I. than those with normative characteristics [Costanza-Chock, 2018]. While there are legal and ethical frameworks that should govern the way we use A.I., minority voices are still under-represented [Buolamwini & Gebru, 2018; Costanza-Chock, 2018; Magalhães & Couldry, 2020] and there are few structures for enforcement and accountability [Mittelstadt, 2019]. We need to rethink how A.I. is contributing to justice as a relational concept, which includes dimensions of power and marginalisation. My proposal draws together the cultural, technical, and socio-technical expertise necessary to extend our current notions of justice in empirical research for A.I. for social good (AI4SG).
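The accuracy/fairness "trade-offs" above can be made concrete with a toy sketch (entirely invented data and labels, not drawn from any cited study): a classifier can look acceptable in aggregate while its errors concentrate in one subgroup.

```python
# Illustrative sketch with invented data: per-group false negative rates
# reveal disparities that a single aggregate accuracy figure hides.

def false_negative_rate(y_true, y_pred):
    """Fraction of true positives (label 1) the model misses."""
    positives = [(t, p) for t, p in zip(y_true, y_pred) if t == 1]
    if not positives:
        return 0.0
    return sum(1 for t, p in positives if p == 0) / len(positives)

# Hypothetical labels and predictions for two equal-sized subgroups.
group_a_true = [1, 1, 1, 1, 0, 0, 0, 0]
group_a_pred = [1, 1, 1, 0, 0, 0, 0, 0]  # misses 1 of 4 positives
group_b_true = [1, 1, 1, 1, 0, 0, 0, 0]
group_b_pred = [1, 0, 0, 0, 0, 0, 0, 0]  # misses 3 of 4 positives

fnr_a = false_negative_rate(group_a_true, group_a_pred)  # 0.25
fnr_b = false_negative_rate(group_b_true, group_b_pred)  # 0.75
# Pooled over both groups the model still classifies 12/16 cases correctly,
# yet group B bears three times the miss rate of group A.
```

This is the pattern the proposal interrogates: which disparities of this kind are treated as "acceptable", and by whom.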
To start with, the core team will develop a conceptual model of A.I. and "justice" that includes a) different definitions of justice used to frame the tasks of A.I. and evaluate their efficacy, b) the questions that can be answered under that definition and c) the trade-offs that are determined to be acceptable in the process. The research team will map scholarly literature from AI4SG to the ethical, legal or political frameworks that underpin the research, identifying gaps or conflicts in how justice is operationalised within AI4SG in comparison with other social justice models. In particular, we will explore the questions: are different positions on justice incompatible with A.I.? Can we identify new pathways for justice to emerge?
To extend our conceptual model, we will conduct 3 case studies in which minority interests are ignored within specific A.I. tasks: 1) non-binary people in gender-based analysis of sexism 2) discriminatory deplatforming of sex workers or artists through content moderation and 3) shadow-banning activists as part of a counter-terrorism approach. The case studies will explore conflicts between these communities' concept of justice and the A.I. task, and which alternative solutions exist. They will also contribute to the global problem of tackling online harm and using A.I. techniques to help identify and classify relevant cases.
Finally, to test alternative solutions, a multi-sectoral Advisory Board of A.I. and community experts will be brought together to create a design challenge for A.I. researchers. Issued through 2 workshops at top-level A.I. conferences, the challenge will be to prioritise marginalised perspectives. The outputs of the challenge and their evaluation will inform a set of guidelines for dealing with errors and trade-offs in AI4SG.
Our contribution is to a) expose connections between how A.I. researchers define justice and which justice questions we attend to in AI4SG; b) reflect on the benefits of A.I. for which societies; and c) influence and inspire researchers to question assumptions of A.I. research around acceptable trade-offs and errors. This research will bring together social scientists, community experts and A.I. researchers to explore what new lines of inquiry can be opened by focusing on maximising the benefits of A.I. for marginalised groups.
Organisations
- The Open University (Lead Research Organisation)
- Queen Mary University of London (Collaboration)
- University of Brighton (Collaboration)
- University of the West Indies (Collaboration)
- Trilateral Research and Consulting LLP (Collaboration)
- Open University (Collaboration)
- Arts Council England (Collaboration)
Publications
Bayer V
(2024)
Co-creating an equality diversity and inclusion learning analytics dashboard for addressing awarding gaps in higher education
in British Journal of Educational Technology
Brown V
(2024)
A Qualitative Study on Cultural Hegemony and the Impacts of AI
in Proceedings of the AAAI/ACM Conference on AI, Ethics, and Society
Retno Larasati
(2023)
AI in Healthcare: Impacts, Risks and Regulation to Mitigate Adverse Impacts
| Description | In our project, we are interested in intersections between AI and matters of social justice. The particular framework we are using is queer theory and practice, which has been heavily influenced by disability justice movements and Crip Theory, intersectional feminisms and decolonial scholarship. To date, we have conducted two qualitative studies into the impacts of AI that relate to social justice. In the first study, we explored mainstream narratives about ethical, fair and responsible AI, which we analysed via in-depth literature review and direct data gathering with 22 participants. Our study found that mainstream narratives originate primarily from elite universities and wealthier, AI-intensive regions. Ethical protocols are typically aligned with White, Western European notions of morality, despite many of the most negative downstream impacts of AI falling on the Global South (such as exploitative arrangements in content moderation or annotation, climate impacts, and disproportionate surveillance). Our research indicates 1) that scholars in the Global South are largely excluded from global discourse on ethical, fair and responsible AI, and 2) that other moral frameworks originating in the Global South create new opportunities for understanding and innovating with AI, including challenging the assumption of individual users and supporting community use of AI, contributing to decolonial concepts of data sovereignty, and developing region-specific moral frameworks (Islamic ethics, afro-feminist liberatory ethics) to govern the use of AI. In our second study, we explored the contributions of a transversal, minoritised population, queer people, to the study of AI. Queer people are an interesting minority community to study in the context of ethics, or thinking about harm more generally, because they are discriminated against on the basis of their person, but also on the basis of their perceived immorality.
Once again, we used a combination of in-depth literature review and qualitative data gathering from queer AI practitioners and enthusiasts. The analysis of this study is still ongoing, but our literature review has highlighted that queerness is brought into AI via 1) the identities of AI researchers (Queer in AI), 2) the study of normativity and queer people as distinctly "unreadable" and therefore likely ignored by AI (Queer and AI), and 3) experimentation with different methods and approaches in AI using queer frameworks or queer data to illustrate challenges of normativity and representation (Queering AI). Our qualitative data showed that the most common uses of AI that queer practitioners would support would be unlikely to emerge in ways that support justice because of the current geopolitical and socio-economic systems that govern our world. Issues such as equal access to innovation (for example in healthcare), or using innovation to support equality objectives like re-designing cities to be more accessible, are believed to be less likely to receive attention and investment because they are not necessarily profitable. In addition, participants from both the queer study and the study with practitioners in the Global South highlighted the need to explore new objectives with AI, to direct it toward goals that promote justice in material terms. In the next stage of our project, we will be exploring what a "queer AI" would mean, what it would do, and what capabilities or functions it would have. In order to do this, we need both to imagine these future "AI objects" AND the context of imaginary worlds in which the social, cultural, political and economic structures of the world are aligned with social justice objectives. We will be using the concept of worldmaking in the construction of diegetic prototypes in the next phase of the project, bringing queer artists and AI researchers together.
Artists will focus on knowledge exchange with AI researchers around how embodiment and somatic practice contribute to worldmaking. AI researchers will focus on knowledge exchange with artists around the capabilities and functions of AI. Together they will imagine a selection of prototypes that represent what queer AI would look like. These prototypes will then inform broader engagements with a wider variety of AI researchers at key conferences and events globally. |
| Exploitation Route | Other researchers into fair, ethical and responsible AI will be better able to see the rest of the iceberg beneath the water of AI ethics, which pertains not only to how AI is developed and used in the UK, but also to the wider ecosystem of AI that creates geopolitical disturbance and challenges of injustice at a more global level. In addition, researchers will be made aware of the myriad of other ethical approaches that are more contextually relevant, and aligned with the different moral frameworks of different regions in the world. This will hopefully shift the conversation toward plurality in ethical alignment, which is something that queer theory and practice can contribute to. In the next stages of our research, we will be exploring how queerness handles plurality in a subcultural set of communities that exist on every continent, in every religion, across every class. |
| Sectors | Communities and Social Services/Policy; Government, Democracy and Justice |
| Description | Our team has contributed to our University's policy development around the use of AI. For example, we have been consulted on and contributed to guidance and policy documents, we are asked to join expert panels at the University to discuss these policies, and we participate in various working groups around how best to bring (specifically generative) AI into teaching and learning at the Open University. Our collaboration with artists has already led to knowledge exchange that is feeding back into artistic practice and feeding forward into how we think about the opportunities and challenges of AI. We are currently devising additional protocols for working with artists in research that go beyond asking artists to respond to academic work, and move forward into thinking about how artistic practice can inform academic work. In addition, our team's focus on world-making and embodiment practice as a worldmaking exercise has been shared with previous networks active in European Youth Work. Six former collaborators have just submitted a joint proposal under the European Erasmus+ Key Action 2 program, for bringing embodiment practice into youth work as a vehicle for knowledge discovery and engagement. |
| First Year Of Impact | 2024 |
| Sector | Communities and Social Services/Policy, Education |
| Impact Types | Societal, Policy & public services |
| Description | Policy Consultation (Policy Connect, UK Government) |
| Geographic Reach | National |
| Policy Influence Type | Participation in a guidance/advisory committee |
| Impact | These consultations resulted in the White paper: "An Ethical AI Future: Guardrails & Catalysts to make Artificial Intelligence a Force for Good" https://www.policyconnect.org.uk/research/ethical-ai-future-guardrails-catalysts-make-artificial-intelligence-force-good. The report called for international collaboration, towards a global AI Convention and Watchdog, a national AI centre that would convene existing regulators to support agile regulation of AI, and promote accountability for "doing no harm." |
| URL | https://www.policyconnect.org.uk/research/ethical-ai-future-guardrails-catalysts-make-artificial-int... |
| Title | AI for Social Good Global Contributions |
| Description | We are creating three datasets and conducting research/comparisons among and between them. The first dataset is the top 1000 papers on AI from high- and upper-middle-income countries. The second dataset is the top 1000 papers on AI from low- and lower-middle-income countries. The third is a comparison dataset of the United Nations Sustainable Development Goals (SDGs). As part of the study, we are manually annotating each of the papers and mapping them to UN SDGs. We are also calculating semantic similarity between both datasets as a whole and the SDG comparison dataset. We have 2 main research questions: 1. Are there differences in the top 1000 papers from each dataset in terms of how many, and how many different, SDGs are addressed in top papers? 2. Is the first or second dataset more semantically aligned (as a whole) with the comparison dataset? We also have plans to build an automated classifier and compare this against our ground-truth datasets. As this project is ongoing, the dataset is not yet available to the public, but we will make it available when annotation is complete. In this way, other researchers will be able to use the dataset to create or support the development of their own classifiers and have a human-annotated ground truth against which to compare them. |
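The comparison step described above can be sketched as follows. This is a minimal bag-of-words illustration with invented, shortened SDG descriptions; the project's actual pipeline may use richer semantic-embedding models, and the names and texts here are hypothetical.

```python
# Minimal sketch (invented data): score a paper abstract against SDG
# descriptions by cosine similarity over word counts, then pick the
# best-matching SDG. A real pipeline would likely use sentence embeddings.
from collections import Counter
import math

def vectorise(text: str) -> Counter:
    """Lowercased bag-of-words term counts."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

# Hypothetical, heavily shortened SDG descriptions.
sdgs = {
    "SDG 3": "good health and well-being for all",
    "SDG 4": "inclusive and equitable quality education",
}
abstract = "we use AI to improve access to quality education"
scores = {k: cosine(vectorise(abstract), vectorise(v)) for k, v in sdgs.items()}
best = max(scores, key=scores.get)  # "SDG 4" for this toy abstract
```

The same per-document scores, averaged over each corpus, would give the dataset-level alignment asked about in research question 2.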
| Type Of Material | Database/Collection of data |
| Year Produced | 2025 |
| Provided To Others? | No |
| Impact | Impacts for our team include the ability to test assumptions around whether the Global North is truly invested in aligning AI research with UN SDGs, whether researchers focus on specific SDGs, and whether other SDGs are not addressed, or are addressed differently, between the Global North and the Global South. For the wider research community, the dataset addresses an ongoing challenge of mapping AI research to SDGs by providing a ground-truth, human-annotated dataset. |
| Description | "After AI Symposium" |
| Organisation | Arts Council England |
| Country | United Kingdom |
| Sector | Public |
| PI Contribution | The Shifting Power UKRI Future Leaders Fellowship Team co-organised the After AI event with our collaborator, R. Justin Hunt, who works as Senior Relationship Manager with Arts Council England and as a Cultural Advocacy Fellow at the Mile End Institute of Queen Mary University of London. We were responsible for co-designing the event, recruiting speakers and participants, and identifying and recruiting program committee members. With the PC, we advertised the event, reviewed abstracts, developed the final program and co-facilitated the event with R. Justin Hunt. The event was a success with more than 150 participants. We have decided to make it an annual event. Justin Hunt has now been contracted to work with us once again to bring the presenters from After AI, along with some additional authors from other events we have done (the Ecology of AI, our AIES conference presentation of our paper on culturally hegemonic positions on AI), to create a book by the same title. This book will be published by an arts academic press as a "chapbook", which will be continuously expanded with subsequent iterations of the symposium. |
| Collaborator Contribution | For the initial event, our co-organiser R. Justin Hunt helped to co-design the event, recruit speakers and participants, review abstracts, develop the program and co-facilitate the event with our team. Our partners, the Radical Methodologies Research and Enterprise Group at Brighton (Ben Sweeting) helped to co-design the event, recruit speakers and participants, and have provided a member of their group to act on the program committee. Ben Sweeting has continued to partner with us on the After AI book publication, and also on the 2025 iteration of the After AI event. |
| Impact | The "After AI" symposium was a first-of-its-kind, post-disciplinary symposium examining questions of who/what/when is "after AI". The symposium addressed questions around the future of this technology, and the consequences we expect it to have on people, society, the planet and beyond. The event was extremely successful: it was run with live BSL translation and had 150 participants from computer science, philosophy, theology, history, economics, political science, law and various artistic practices (performance, visual art, etc.). The event led to various collaborations and adjustments to existing partnerships. After that event, two of the speakers spoke at different events we hosted (The Ecology of AI), one of whom is another Future Leaders Fellow working on the subject of AI. One of the program committee members (Liz Rosenfeld) is now working with us on the development of a protocol for interdisciplinary work with performance artists. Another of the program committee members (Mustafa Ali) is now working with us on theological/faith-based interpretations of AI, which has instigated new lines of thinking around the subject of "prediction". A forthcoming publication from my post-doc Retno Larasati will relate to the psychology of explainable AI and the psychology of astrology belief. This type of interdisciplinary work, which makes novel connections and provides new avenues for research, was facilitated by bringing many different types of perspectives, methodologies and epistemologies to the subject of what will happen with AI. |
| Start Year | 2023 |
| Description | "After AI Symposium" |
| Organisation | Queen Mary University of London |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | The Shifting Power UKRI Future Leaders Fellowship Team co-organised the After AI event with our collaborator, R. Justin Hunt, who works as Senior Relationship Manager with Arts Council England and as a Cultural Advocacy Fellow at the Mile End Institute of Queen Mary University of London. We were responsible for co-designing the event, recruiting speakers and participants, and identifying and recruiting program committee members. With the PC, we advertised the event, reviewed abstracts, developed the final program and co-facilitated the event with R. Justin Hunt. The event was a success with more than 150 participants. We have decided to make it an annual event. Justin Hunt has now been contracted to work with us once again to bring the presenters from After AI, along with some additional authors from other events we have done (the Ecology of AI, our AIES conference presentation of our paper on culturally hegemonic positions on AI), to create a book by the same title. This book will be published by an arts academic press as a "chapbook", which will be continuously expanded with subsequent iterations of the symposium. |
| Collaborator Contribution | For the initial event, our co-organiser R. Justin Hunt helped to co-design the event, recruit speakers and participants, review abstracts, develop the program and co-facilitate the event with our team. Our partners, the Radical Methodologies Research and Enterprise Group at Brighton (Ben Sweeting) helped to co-design the event, recruit speakers and participants, and have provided a member of their group to act on the program committee. Ben Sweeting has continued to partner with us on the After AI book publication, and also on the 2025 iteration of the After AI event. |
| Impact | The "After AI" symposium was a first-of-its-kind, post-disciplinary symposium examining questions of who/what/when is "after AI". The symposium addressed questions around the future of this technology, and the consequences we expect it to have on people, society, the planet and beyond. The event was extremely successful: it was run with live BSL translation and had 150 participants from computer science, philosophy, theology, history, economics, political science, law and various artistic practices (performance, visual art, etc.). The event led to various collaborations and adjustments to existing partnerships. After that event, two of the speakers spoke at different events we hosted (The Ecology of AI), one of whom is another Future Leaders Fellow working on the subject of AI. One of the program committee members (Liz Rosenfeld) is now working with us on the development of a protocol for interdisciplinary work with performance artists. Another of the program committee members (Mustafa Ali) is now working with us on theological/faith-based interpretations of AI, which has instigated new lines of thinking around the subject of "prediction". A forthcoming publication from my post-doc Retno Larasati will relate to the psychology of explainable AI and the psychology of astrology belief. This type of interdisciplinary work, which makes novel connections and provides new avenues for research, was facilitated by bringing many different types of perspectives, methodologies and epistemologies to the subject of what will happen with AI. |
| Start Year | 2023 |
| Description | "After AI Symposium" |
| Organisation | University of Brighton |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | The Shifting Power UKRI Future Leaders Fellowship Team co-organised the After AI event with our collaborator, R. Justin Hunt, who works as Senior Relationship Manager with Arts Council England and as a Cultural Advocacy Fellow at the Mile End Institute of Queen Mary University of London. We were responsible for co-designing the event, recruiting speakers and participants, and identifying and recruiting program committee members. With the PC, we advertised the event, reviewed abstracts, developed the final program and co-facilitated the event with R. Justin Hunt. The event was a success with more than 150 participants. We have decided to make it an annual event. Justin Hunt has now been contracted to work with us once again to bring the presenters from After AI, along with some additional authors from other events we have done (the Ecology of AI, our AIES conference presentation of our paper on culturally hegemonic positions on AI), to create a book by the same title. This book will be published by an arts academic press as a "chapbook", which will be continuously expanded with subsequent iterations of the symposium. |
| Collaborator Contribution | For the initial event, our co-organiser R. Justin Hunt helped to co-design the event, recruit speakers and participants, review abstracts, develop the program and co-facilitate the event with our team. Our partners, the Radical Methodologies Research and Enterprise Group at Brighton (Ben Sweeting) helped to co-design the event, recruit speakers and participants, and have provided a member of their group to act on the program committee. Ben Sweeting has continued to partner with us on the After AI book publication, and also on the 2025 iteration of the After AI event. |
| Impact | The "After AI" symposium was a first-of-its-kind, post-disciplinary symposium examining questions of who/what/when is "after AI". The symposium addressed questions around the future of this technology, and the consequences we expect it to have on people, society, the planet and beyond. The event was extremely successful: it was run with live BSL translation and had 150 participants from computer science, philosophy, theology, history, economics, political science, law and various artistic practices (performance, visual art, etc.). The event led to various collaborations and adjustments to existing partnerships. After that event, two of the speakers spoke at different events we hosted (The Ecology of AI), one of whom is another Future Leaders Fellow working on the subject of AI. One of the program committee members (Liz Rosenfeld) is now working with us on the development of a protocol for interdisciplinary work with performance artists. Another of the program committee members (Mustafa Ali) is now working with us on theological/faith-based interpretations of AI, which has instigated new lines of thinking around the subject of "prediction". A forthcoming publication from my post-doc Retno Larasati will relate to the psychology of explainable AI and the psychology of astrology belief. This type of interdisciplinary work, which makes novel connections and provides new avenues for research, was facilitated by bringing many different types of perspectives, methodologies and epistemologies to the subject of what will happen with AI. |
| Start Year | 2023 |
| Description | Critical AI Literacy |
| Organisation | University of the West Indies |
| Country | Barbados |
| Sector | Academic/University |
| PI Contribution | Dr. Venetia Brown, a post-doc on the project and our education and pedagogy expert, is in the process of developing a joint proposal with Dr. Adrian Als from UWI Barbados. This proposal addresses the development of a "critical AI literacy" protocol, which would include consideration of historical legacies of colonialism and Western imperialism, sustainability and environmental impacts, etc. Dr. Brown will be extending her work on AI in the Global South to thinking about how AI researchers and practitioners in the Global North learn about AI ethics, responsibility and fairness, and ensuring that they learn how these historical legacies impact AI work today. |
| Collaborator Contribution | Dr. Als brings domain expertise in AI development in the island nations of the Commonwealth. |
| Impact | This collaboration is still in its early phases. An initial proposal to support Dr. Brown's work was submitted to the University for the Leverhulme scheme but did not pass internal selection due to competition. |
| Start Year | 2024 |
| Description | Ecology of AI Impact |
| Organisation | Open University |
| Department | School of Computing and Communication |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | Our team is making networking inroads with different groups approaching the question of AI and its impacts from Queer, Indigenous and Black feminist perspectives. We are seeking new paradigms for considering the impact of AI technology that do not originate in Western European philosophical ideas of ethics, are not tied to nation-state politics, which can be unfair and asymmetrical (as is the case for AI for social good), and cannot be pushed into the realm of cultural subjectivity. In this first collaboration, we have brokered a partnership with the only institution working on critical ecology in a specific way that is relevant to our project, namely the "whole systems" view of ecology that includes marginalisation and oppression as an ecological impact that also has repercussions for other parts of our ecosystem. It was the innovation of our team to apply this way of thinking to the impacts of Artificial Intelligence - on the whole system of organisms, populations, communities, the ecosystem and biosphere, with special attention to the role of injustice, power and privilege in creating the future impacts of AI as a socio-technical assemblage. Our team has developed a set of workshops to further flesh out this approach. In 2023, we had one workshop accepted, and one was pending. In 2024, we had executed one workshop and were going to run the second annual Ecology of AI workshop in June 2024. This year we have now completed two workshops and have folded this topic into our "After AI" symposium work, as contributions were often similar and the post-disciplinary symposium was gathering more engagement. |
| Collaborator Contribution | The School of Computing and Communications at the OU has expertise in Critical Systems Thinking and Decolonial AI. This team continues to collaborate on the development of the workshop series and will serve on the Program Committee for this workshop series. The Critical Ecology Lab is educating our team on a new approach to ecology that considers the impacts of injustice on the planet. Members of the lab will be providing our keynote discussions for the workshop, to introduce the concept of "critical" ecology, so that we can apply this to our case of AI and its impacts. Trilateral Research has expertise in seductive surveillance and privacy. They will serve on the program committee for our workshop series. |
| Impact | The collaboration on this workshop was multi-disciplinary, including those from sociology, education, Explainable AI, decolonial theory and critical studies, and ecology. This partnership also resulted in the symposium "After AI", a post-disciplinary meeting organised in collaboration with R. Justin Hunt from the Arts Council, and the Radical Methodologies Research Group at Brighton. The workshop allowed us to collaborate with high-profile authors Dan McQuillan ("Resisting AI") and Theodora Dryer ("Your Artificial Future is Repulsive: On Climate Change, Data Tech, and Artifice"), and crystallise our work on bringing broader geopolitical and environmental concerns into the mainstream discussion on fair, ethical and responsible AI. Theodora Dryer is also a featured author for the upcoming "After AI" book. |
| Start Year | 2023 |
| Description | Ecology of AI Impact |
| Organisation | Trilateral Research and Consulting LLP |
| Country | United Kingdom |
| Sector | Private |
| PI Contribution | Our team is making networking inroads with different groups approaching the question of AI and its impacts from Queer, Indigenous and Black feminist perspectives. We are seeking new paradigms for considering the impact of AI technology that do not originate in Western European philosophical ideas of ethics, are not tied to nation-state politics, which can be unfair and asymmetrical (as is the case for AI for social good), and cannot be pushed into the realm of cultural subjectivity. In this first collaboration, we have brokered a partnership with the only institution working on critical ecology in a specific way that is relevant to our project, namely the "whole systems" view of ecology that includes marginalisation and oppression as an ecological impact that also has repercussions for other parts of our ecosystem. It was the innovation of our team to apply this way of thinking to the impacts of Artificial Intelligence - on the whole system of organisms, populations, communities, the ecosystem and biosphere, with special attention to the role of injustice, power and privilege in creating the future impacts of AI as a socio-technical assemblage. Our team has developed a set of workshops to further flesh out this approach. In 2023, we had one workshop accepted, and one was pending. In 2024, we had executed one workshop and were going to run the second annual Ecology of AI workshop in June 2024. This year we have now completed two workshops and have folded this topic into our "After AI" symposium work, as contributions were often similar and the post-disciplinary symposium was gathering more engagement. |
| Collaborator Contribution | The School of Computing and Communications at the OU has expertise in Critical Systems Thinking and Decolonial AI. This team continues to collaborate on the development of the workshop series and will serve on its Program Committee. The Critical Ecology Lab is educating our team on a new approach to ecology that considers the impacts of injustice on the planet. Members of the lab will provide our keynote discussions for the workshop, introducing the concept of "critical" ecology so that we can apply it to the case of AI and its impacts. Trilateral Research has expertise in seductive surveillance and privacy, and will serve on the Program Committee for our workshop series. |
| Impact | The collaboration on this workshop was multi-disciplinary, including contributors from sociology, education, Explainable AI, decolonial theory and critical studies, and ecology. This partnership also resulted in the symposium "After AI", a post-disciplinary meeting organised in collaboration with R. Justin Hunt from the Arts Council and the Radical Methodologies Research Group at Brighton. The workshop allowed us to collaborate with the high-profile authors Dan McQuillan ("Resisting AI") and Theodora Dryer ("Your Artificial Future is Repulsive: On Climate Change, Data Tech, and Artifice"), and to crystallise our work on bringing broader geopolitical and environmental concerns into the mainstream discussion of fair, ethical and responsible AI. Theodora Dryer is also a featured author for the upcoming "After AI" book. |
| Start Year | 2023 |
| Description | Open Societal Challenge: The Palestine Exception |
| Organisation | Open University |
| Country | United Kingdom |
| Sector | Academic/University |
| PI Contribution | This is part of the University's Open Societal Challenges scheme: https://societal-challenges.open.ac.uk/, and is in collaboration with members of all faculties and professional services staff at the OU. The work of Dr. Tracie Farrell and the rest of the team on her project touches on the impact of colonial legacies, specifically related to AI. The war between Israel and Hamas, which has led to what the majority of the international community and the International Criminal Court define as a genocide, has been made bloodier and more horrific through the use of AI technology to identify supposed targets at a hyper-increased rate, and to support apartheid conditions and disproportionate surveillance. This makes it a topic of relevance for her research. The difficulty in speaking about Israel as a colonial power, and Palestinians as a colonised people, creates a phenomenon called the Palestine Exception, in which even those who consider themselves decolonial or anti-oppression scholars find it difficult to criticise Israel for fear of being called antisemitic. The Open University adopted both the IHRA and the JD definitions of antisemitism to prevent this, but unfortunately there is still a chilling effect on staff and students wishing to speak out in support of Palestinian liberation from illegal settlement and apartheid conditions. In this project, Dr. Farrell brings her research outputs on the impact of AI in the Global South, her experience in conducting qualitative research, and her understanding of the landscape of human research ethics. She also brings her experience of interdisciplinary partnerships with artists and their research methodologies (including autoethnography, somatic practice and world-making). |
| Collaborator Contribution | The collaboration includes a wide range of academic, academic-related and support staff, who bring a variety of skills and domain knowledge including linguistics, political science, theology, sociology, business, computing, global studies and law. In addition to planning a large autoethnographic study on personal experiences of the Palestine Exception, the group plans to submit further funding proposals under the cross-council scheme to explore issues related to world-making and interdisciplinary partnerships. |
| Impact | This is a multi-disciplinary collaboration including the disciplines of linguistics, political science, theology, sociology, business, computing, global studies and law. Together we have produced: 10 webinars related to Israel-Palestine historical relations and current events; 1 internal funding proposal (unfortunately rejected); and policy support to the Open University on academic freedom. |
| Start Year | 2023 |
| Description | "How can we create a more just society with AI?" (GenAI Community of Practice, Open University UK) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | The GenAI Community of Practice (CoP) meets regularly to discuss the role of GenAI in education, industry and politics. The community consists of approximately 100 researchers, practitioners, educators and general interest groups. At this meeting, I discussed the ethical aspects of generative AI, in the medium and long-term. After this presentation, I was contacted by two separate groups within the Open University, one group working on AI-enabled pedagogy through the EDIA lens, and another who is creating a framework for learning design that addresses use of generative AI. I have since brought these two groups together to assist educators at the OU with navigating the pedagogical, ethical and training aspects of working with GenAI. |
| Year(s) Of Engagement Activity | 2023 |
| Description | Artificial Intelligence and Justice (iTV interview, national news) |
| Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Public/other audiences |
| Results and Impact | In summer 2023, I was interviewed by ITV News on the dangers of artificial intelligence. ITV.com receives approximately 45 million visits per month, so this was a high-value engagement. The interview raised my profile and resulted in a number of inquiries from within the OU and beyond. |
| Year(s) Of Engagement Activity | 2023 |
| URL | https://www.itv.com/news/anglia/2023-06-15/could-the-east-benefit-from-uks-move-to-become-global-lea... |
| Description | GCSJ (Centre for Global Challenges and Social Justice) Roundtable Series |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | Approximately 80 participants from across the University and its wider networks across the UK met to discuss the impacts and opportunities of artificial intelligence for research and teaching. Dr. Tracie Farrell was invited as a panel expert to present perspectives emerging from the FLF project. The panel was very diverse, reflecting the myriad positions within the institution that also intersect with various national priorities and perspectives in the UK. Feedback from event participants was very positive overall, and the success of the event will likely lead to follow-up events in the series. |
| Year(s) Of Engagement Activity | 2025 |
| Description | Seminar (Warwick University ERCs) |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | Local |
| Primary Audience | Postgraduate students |
| Results and Impact | Warwick University Secure Cyber Systems Research Group is holding a series of seminars and training opportunities for their early career researchers who are women from Black, Asian and other ethnic backgrounds that are viewed as minorities in the United Kingdom. I delivered a seminar on developing and communicating research ideas that are exciting and "sticky". I also shared my experience in applying for a UKRI Future Leaders Fellowship, sharing parts of my proposal with the participants. The participants reported that they appreciated the concrete advice and tools delivered in this seminar. It also allowed our fellowship project to network with future collaborators. |
| Year(s) Of Engagement Activity | 2023 |
| Description | Talk for the Centre for Protecting Women Online NGO-CSW monthly meeting |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | This talk was given at a monthly NGO CSW meeting (https://ngocsw.org/). These monthly meetings are where the Centre for Protecting Women Online engages with its global constituency in the UN CSW and NGO CSW Forum processes, as well as in global gender equality advocacy in general. Each month, the meeting addresses a different topic related to the CSW. CSW69/Beijing+30 is the time for the Commission on the Status of Women to examine the gains made under the Beijing Platform for Action, and the twelve critical areas identified as essential to women's empowerment and opportunities. Each monthly NGO CSW/NY meeting this year considers two of the critical areas for investigation. The December 2024 meeting focused on the critical areas of media and human rights for women. |
| Year(s) Of Engagement Activity | 2024 |
| Description | Talk to the United Nations Commission on the Status of Women |
| Form Of Engagement Activity | A formal working group, expert panel or dialogue |
| Part Of Official Scheme? | No |
| Geographic Reach | International |
| Primary Audience | Professional Practitioners |
| Results and Impact | This talk was invited by the Open University Centre for Protecting Women Online as a direct result of a previous talk. The talk will take place on 17 March 2025 at the annual meeting of the Commission on the Status of Women. For this talk, Dr. Tracie Farrell will weave together her previous research on the experiences of women online with her current research on the use of AI toward justice. https://www.unwomen.org/en/how-we-work/commission-on-the-status-of-women |
| Year(s) Of Engagement Activity | 2025 |
| Description | Teaching Forum - Generative AI in Learning, Teaching and Assessment |
| Form Of Engagement Activity | A talk or presentation |
| Part Of Official Scheme? | No |
| Geographic Reach | National |
| Primary Audience | Professional Practitioners |
| Results and Impact | This event was hosted by the WELS faculty at the Open University. The purpose of the event was to consider the impact of using AI in teaching and learning. Participation in this event led to my postdoc, Dr. Venetia Brown, collaborating with this group on a pan-University/cross-faculty scholarship project examining 'Teaching and Assessing Students' use of GAI'. For this, Dr. Brown analysed focus group data and participated in writing the final report. Key findings and recommendations were implemented by the AI group in Learning Design and shared with Associate Deans across faculties. Dr. Brown also contributed a social justice lens to the WELS-initiated projects 'An EDIA and AI-enabled pedagogy across the curriculum' and 'Critical AI literacy through critical virtual exchange'. |
| Year(s) Of Engagement Activity | 2024 |
