Governing Democratic Discourse: Social Media, Online Harms, and the Future of Free Speech
Lead Research Organisation:
University College London
Department Name: Political Science
Abstract
Social-media networks have been weaponised. Through these platforms, bad actors have incited terrorism, stoked racial hatred, peddled deadly disinformation on how to cure Covid-19, and persuaded teenagers to commit suicide. Beyond the harms to individuals caused through social media are the harms to democracy itself, and to the norms of mutual understanding and civil discourse that are continually under assault. Governments and businesses largely agree that harmful content on social media needs to be combatted. But they disagree intensely on what harmful content should be combatted, how, and by whom. In recent months, Facebook and Twitter have publicly disagreed on how to deal with micro-targeted political advertising, as well as the proper response to harmful content posted by the US President. Policymakers are racing to enact a wide variety of regulations -- from the Digital Services Act in the EU, to a proposed set of powers for Ofcom in the UK. Yet these proposals differ in a range of ways, and there is little consensus on what the right approach is.
This project will provide essential philosophical guidance on this fraught controversy. By leading a multidisciplinary research team integrating expertise in philosophy, law, computer science, behavioural social science, and education, the project seeks to answer the following research questions:
1. What are the limits of free speech in a world where individual citizens' words can "go viral" and reach millions? Specifically, what categories of harmful speech fall outside the protection of free speech, such that legally restricting them could be acceptable? This project defends the theory that our right to speak is limited by our moral responsibilities not to harm others, and it uses this theory to explain why a significant proportion of harmful speech on social media is not protected by the right to free speech. This analysis will encompass a wide variety of harmful speech that is pervasive on social media networks, including hate speech, terrorist incitement, disinformation and misinformation, cyber-harassment, advocacy of self-harm, microtargeted electioneering, foreign propaganda, and divisively uncivil political speech.
2. What are the moral obligations of social media companies to combat harmful speech on their platforms? This project defends the idea that if social media companies do nothing to combat harmful speech posted on their platforms, they become complicit in that harm. In order to avoid complicity, social media companies must take action against harmful content. The project will spell out the policies that companies ought to enact to do so.
3. How should social-media companies' obligations be enforced through law? The project will examine existing regulations across a range of jurisdictions to assess which, if any, are morally defensible, and what changes to the law would improve them.
4. Since not all harmful speech can or should be suppressed by social-media firms, what are the moral obligations of citizens to argue back against harmful content online? And what are the duties of policymakers to support this "counter-speech", especially through innovations in civic education?
This project will produce a combination of traditional academic outputs and innovative public resources, including an informational project website, an educational podcast, a major policy report, and a pilot civic education curriculum for secondary school students on how to engage in counter-speech. By offering a moral framework to govern a central infrastructure of democratic communication, this project aims to generate the guidance that citizens, companies, and policymakers need to combat harmful speech on social media, while preserving its power to be used for morally valuable ends.
People |
ORCID iD |
Jeffrey Howard (Principal Investigator / Fellow) |
Publications
Fisher S
(2024)
That's not what you said! Semantic constraints on literal speech
in Mind & Language
Fisher S
(2023)
An empirical investigation of intuitions about uptake
in Inquiry
Fisher S
(2024)
Should Politicians Be Exempt from Fact-Checking?
in Journal of Online Trust and Safety
Howard J
(2022)
Democratic Speech in Divided Times. By Maxime Lepoutre. Oxford: Oxford University Press, 2021. 288p. $100.00 cloth.
in Perspectives on Politics
Jeffrey W. Howard
(2024)
Freedom of Speech
Kira B
(2023)
Is iFood Starving the Market? Antitrust Enforcement in the Market for Online Food Delivery in Brazil
in World Competition
Kira B
(2023)
A Primer on the UK Online Safety Act
Kira B
(2023)
The Politics and Economics of Brazilian Competition Law
in Latin American Law Review
Description | Social media platforms offer enormous potential for productive expression, connection, and organisation. Yet they can also be abused to cause enormous harm through hate speech, misinformation, and other forms of harmful content. The aim of the UCL Digital Speech Lab is to offer philosophical guidance on the proper limits of free speech in the digital age -- tracing the concrete obligations of online platforms, governments, and ordinary citizens in the face of this challenge. So far, the project has included a range of foundational research. This work has included a comprehensive review of the scholarly literature on freedom of speech and its appropriate limits -- in particular, mapping that debate and pinpointing the ways it needs updating for the digital age. That initial review exercise is now published as a new entry on "Freedom of Speech" in the open-access Stanford Encyclopedia of Philosophy, widely regarded as the most prestigious and authoritative reference work in philosophy. The project has also developed and defended an organising theory of social media platforms' ethical responsibilities. The theory offers the first systematic philosophical argument for why platforms have a duty to police their platforms for certain categories of seriously harmful content. The key insight is that platforms have a duty of care to their users (and others) to protect them from harm when they can do so at reasonable cost; they also have a duty to avoid complicity with harmful speech by amplifying it. This argument is forthcoming in the open-access Journal of Practical Ethics. Beyond these foundational exercises, the project so far has confronted a wide range of difficult dilemmas about how platforms should balance respect for free speech with the duty to prevent harm. For example, while platforms remove seriously dangerous misinformation, they often label other misinformation as false or misleading, linking to a third-party fact-checker. 
Strikingly, Facebook exempts misinformation posted by politicians from this system, allowing it to spread uncontested. My postdoctoral fellows and I explore possible rationales for exempting politicians, arguing that they are unpersuasive; accordingly, we argue that Meta should update its rules to fact-check politicians, too. This argument has been published in the open-access Journal of Online Trust & Safety. As another example, one postdoctoral fellow and I have investigated what platforms should do in cases where language is ambiguous -- with one interpretation that violates a content rule, and another that doesn't. A useful example is a veiled threat -- where it seems the speaker might be threatening harm, but may not be -- though the problem arises in every category of prohibited speech. We argue that such cases reveal an instability in whether platforms' policies target the intended meaning of speech, or instead its likely consequences -- and we offer platforms a guide to how to simplify their policies in response. This paper is forthcoming in the open-access Journal of Ethics and Social Philosophy. Other work -- e.g., on how platforms should govern AI-generated content -- is still in progress, but has already been cited by decisionmakers, such as the Meta Oversight Board. |
Exploitation Route | Our research has direct relevance to decisions about the public and private governance of online speech. We are already at work communicating the findings of our research to the relevant bodies, including the content teams at social-media companies such as Meta, the Oversight Board, which oversees Meta's content decisions, members of the House of Lords, members of various civil society organisations, and regulators at Ofcom. Our submitted evidence has been cited by both the Meta Oversight Board and the House of Lords Communications & Digital Committee. Our joint policy report on generative AI, produced with the think-tank Demos, will be launched later this spring. |
Sectors | Creative Economy Digital/Communication/Information Technologies (including Software) Government Democracy and Justice |
URL | https://www.digitalspeechlab.com/ |
Description | The findings of this project are already reaching decisionmakers within industry, civil society oversight organisations, and government. Within industry, my Lab is routinely approached by the content policy teams at Meta who decide the rules for Facebook and Instagram, asking for our guidance on different policy controversies under review. Based on feedback from Meta, I believe I have helped shape improvements in both its Violence and Incitement Policy and its new policy on labelling AI-generated content. My Lab is also engaged with the Oversight Board, the independent body that oversees Meta's content policy rules and decisions. In one case, we submitted a public comment to the Board arguing that Meta's policy on manipulated media was incoherent and unjustified; the Board cited this comment three times, largely mirroring our reasoning in its decision. Finally, we are engaged with policymakers. For example, our submission to the House of Lords Communications & Digital Committee was cited twice in its recent report on Large Language Models and Generative AI. As the project continues, we will considerably upscale this engagement work. |
First Year Of Impact | 2024 |
Sector | Digital/Communication/Information Technologies (including Software),Government, Democracy and Justice |
Impact Types | Policy & public services |
Description | Cited by House of Lords Communications & Digital Committee |
Geographic Reach | National |
Policy Influence Type | Citation in other policy documents |
URL | https://committees.parliament.uk/work/7827/large-language-models/publications/ |
Description | Cited by Meta Oversight Board |
Geographic Reach | Multiple continents/international |
Policy Influence Type | Citation in other policy documents |
URL | https://www.oversightboard.com/decision/FB-GW8BY1Y3 |
Description | Feedback from Meta on Impact of Our Guidance |
Geographic Reach | Multiple continents/international |
Policy Influence Type | Participation in a guidance/advisory committee |
Impact | The changes I have urged with regard to the Violence and Incitement Policy remain confidential; I will update when they are publicly announced. As for the new policies on labelling generative AI, these were announced here: https://about.fb.com/news/2024/02/labeling-ai-generated-images-on-facebook-instagram-and-threads/ |
Description | Collaboration with Demos on Generative AI |
Organisation | DEMOS |
Country | United Kingdom |
Sector | Charity/Non Profit |
PI Contribution | Over the past year, my team and I have undertaken a sustained collaboration with Demos, the UK's leading cross-party think-tank. The goal of the collaboration is to assess the threat that AI-generated content poses to democracy. As nearly half the globe heads to the polls to vote in 2024, AI-produced misinformation threatens to undermine the integrity of elections, alongside broader democratic values such as truth, equal respect, and nonviolence--in the UK and beyond. Yet it remains unclear what precisely should be done about it--especially by social media companies, on whose platforms this content is already spreading. The collaboration leverages my team's expertise in the politics, philosophy, law, and social science of social media governance alongside Demos's expertise in tech policy, AI, and stakeholder engagement. We conducted a series of joint meetings on the topic throughout spring, summer, and autumn 2023 to generate policy proposals. Then, in January 2024, Demos convened a major workshop in which we presented and discussed our joint policy proposals to an audience including policymakers, regulators, industry professionals, and civil society organisations. Attendees included personnel from DSIT, Ofcom, Meta, TikTok, Ada Lovelace Institute, Internet Watch Foundation, Reset, People vs. Big Tech Coalition, the Campaign Lab, and the Oversight Board. The intense discussions at this workshop generated a wide range of constructive feedback on how to improve the recommendations. These recommendations will be presented and defended in a major policy report entitled "Synthetic Politics: Preparing Democracy for Generative AI," jointly authored between Demos and the UKRI-funded UCL Digital Speech Lab, to be launched in spring 2024. |
Collaborator Contribution | Demos contributed substantially to this collaborative effort. I met with Demos's Director for its Centre for the Analysis of Social Media (CASM) in March 2022 to scope the project and pinpoint particular lines of inquiry, meeting again in October 2022. Then, beginning in March 2023, we formally launched the collaboration, meeting every 4-6 weeks to analyse related research and news developments, and plan further steps. We then held a daylong joint workshop in August 2023 with our full teams to produce our draft policy recommendations on generative AI, addressed to AI companies, social media companies, and policymakers. During the autumn my team and the Demos team began to draft our analysis and recommendations in a shared document, working iteratively to produce and revise the text. A distinctive purpose of this process was for Demos to impart its expertise on how to translate academic research into accessible and useful prose for a non-academic policy audience -- equipping my team with the skills and insights to write high-impact policy reports ourselves in the future. Demos then led the organisation of our January 2024 workshop -- leveraging its extensive contacts among Westminster policymakers, regulators, civil society, and the technology industry. It planned, hosted, catered, and managed all communications in the run-up to the workshop and on the day itself. Since the workshop, Demos and my team have worked intensively on our draft policy report, with attention to it from someone every workday, developing and refining our arguments and recommendations. Demos staff will lead on professionally formatting and printing the report. They are also taking the lead in organising the launch event, which will occur later in the spring, and amplifying the report's recommendations to its massive network. |
Impact | The central output of this collaboration is a major policy report entitled "Synthetic Politics: Preparing Democracy for Generative AI." This will be launched later this spring, and so will be recorded formally during the next UKRI reporting exercise in a year's time. Through our work with Demos, my team and I have developed substantial expertise on how platforms should govern AI-generated content. Based on some of the ideas that will appear in the joint report with Demos, we submitted a lengthy public comment to Meta's Oversight Board, which oversees content moderation on Facebook and Instagram. The comment was submitted in relation to the Board's case on how Meta should govern AI-manipulated content (with a manipulated video of President Biden as its focus). Our submission argues that Meta's policy was incoherent and unjustified. Citing our submission three times in its ruling, the Board concurred with us, largely adopting our reasoning. Later this spring, Meta will issue its reply in how it plans to update its policy in response to the Board's ruling. |
Start Year | 2023 |
Title | Meta Content Library and Content Library API |
Description | The Meta Content Library and Content Library API provide real-time public content from Facebook and Instagram. I was invited to participate in the beta testing of the product and provide feedback on it. Based on the feedback, Meta took initiatives to update the product and enable new features for better access to data. |
Type Of Technology | Webtool/Application |
Year Produced | 2024 |
Impact | Access to public content from Facebook and Instagram platforms enables researchers to study the impact of these platforms on politics, society, and culture, among others. |
Description | Academic Freedom: A Public Event at University College London |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Undergraduate students |
Results and Impact | I co-organised and co-chaired a substantial event at University College London, in conjunction with the UCL President's office. The point of the event was to curate a discussion on the limits of academic freedom and the ethics of disagreement in universities. One central aim was to educate students in how they can have contentious conversations on difficult topics with civility and respect, both offline and online. Nearly 400 guests subscribed to the live event, and the link with the event's video was further circulated to thousands of UCL students and alumni. We received an enormous amount of positive feedback on the event, with many students reporting that it gave them greater confidence in having difficult discussions. |
Year(s) Of Engagement Activity | 2022 |
URL | https://mediacentral.ucl.ac.uk/Play/85378 |
Description | Challenging Pseudoscience on Social Media: A Workshop with Royal Institute, Demos, and Open Society Foundation |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Professional Practitioners |
Results and Impact | I was invited to participate in a workshop held at the Royal Institution on the theme "Challenging Pseudoscience on Social Media". I delivered a presentation entitled "The Ethics of Restricting Medical Misinformation on Social Media". Nearly 100 audience members attended from academia, Ofcom, the NHS, the civil service, industry, and other areas. I sat on a panel with a prominent NHS physician with a popular YouTube channel challenging medical misinformation, as well as a public policy manager at TikTok. During the event I argued for a more aggressive effort by social-media companies to combat harmful medical misinformation, through a combination of content removal and deamplification. Several members of the audience approached me afterward to say that I'd changed their minds. I developed excellent contacts through the event, especially at TikTok and Ofcom, which I am leveraging for future research impact opportunities. |
Year(s) Of Engagement Activity | 2022 |
Description | Co-host event with Demos on Generative AI's Risks to Democracy |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Industry/Business |
Results and Impact | Over the past year, my team and I have undertaken a sustained collaboration with Demos, the UK's leading cross-party think-tank. The goal of the collaboration is to assess the threat that AI-generated content poses to democracy. As nearly half the globe heads to the polls to vote in 2024, AI-produced misinformation threatens to undermine the integrity of elections, alongside broader democratic values such as truth, equal respect, and nonviolence--in the UK and beyond. Yet it remains unclear what precisely should be done about it--especially by social media companies, on whose platforms this content is already spreading. The collaboration leverages my team's expertise in the politics, philosophy, law, and social science of social media governance alongside Demos's expertise in tech policy, AI, and stakeholder engagement. In this workshop, we presented and discussed our joint policy proposals to an audience including policymakers, regulators, industry professionals, and civil society organisations. Attendees included personnel from DSIT, Ofcom, Meta, TikTok, Ada Lovelace Institute, Internet Watch Foundation, Reset, People vs. Big Tech Coalition, the Campaign Lab, and the Oversight Board. The intense discussions at this workshop generated a wide range of constructive feedback on how to improve the recommendations. These recommendations will be presented and defended in a major policy report entitled "Synthetic Politics: Preparing Democracy for Generative AI," jointly authored between Demos and the UKRI-funded UCL Digital Speech Lab, to be launched in spring 2024. |
Year(s) Of Engagement Activity | 2024 |
Description | Co-organising PhilMod: A regular discussion group between philosophers and tech professionals |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Industry/Business |
Results and Impact | Starting in autumn 2023, I became a co-organiser of PhilMod -- a group dedicated to discussing philosophical issues that arise in social media content moderation. The group's aim is to foster engagement between philosophers and trust-and-safety professionals working on content moderation within tech companies. To that end, the group involves a range of philosophers (faculty and postgraduate students) from around the globe, professionals (e.g., from TikTok and Meta), and journalists (e.g., Tech Policy Press). Speakers have included leading voices in this area such as Tarleton Gillespie (Microsoft) and Kate Klonick (St. John's Law). |
Year(s) Of Engagement Activity | 2023,2024 |
URL | https://www.philmod.org/ |
Description | Counterspeech Impact Workshop |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Other audiences |
Results and Impact | An interdisciplinary group of academic and industry researchers discussed existing philosophical work on counterspeech (i.e., the countering of harmful speech, like hate speech or disinformation, with more speech) and possible avenues for policy impact, which resulted in knowledge sharing between participants. |
Year(s) Of Engagement Activity | 2023 |
Description | Dialogue with Meta Teams on Account Integrity |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Industry/Business |
Results and Impact | Meta invited me to a meeting with members of their policy team to offer my guidance on their policies on Account Integrity. The key question was: if a group on Meta is suspended because of repeated violations (e.g., hate speech on the group chat), what should Meta do when the members of that suspended group proceed to set up a new group? Leveraging insights from the criminal justice context, my recommendation was that such new groups should be put on a form of "probation" -- whereby their activities are monitored and they are put on notice that they will face immediate suspension after the first violation. Meta continues to review this policy area, so I am not yet aware whether it will follow my recommendation. I will revisit this topic with them in the coming months. |
Year(s) Of Engagement Activity | 2023 |
Description | Dialogue with Meta on frontier challenges |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Industry/Business |
Results and Impact | Meta invited me to several one-on-one conversations to offer my overview of what my Lab sees as the central frontier challenges facing content moderation that social media platforms have not yet tackled, with an eye toward informing their policy development plans over the coming year. One point of emphasis was how platforms deal with "borderline speech" -- i.e., speech that does not technically violate a platform's rules, but comes close. Currently Meta "demotes"/"deamplifies" this speech while formally allowing it, but I argue that this is a mistake. We have plans to revisit this issue in future discussions. |
Year(s) Of Engagement Activity | 2023 |
Description | Discussion with Guardian staff to discuss challenges to journalism from generative AI |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | I was invited to a closed one-on-two discussion with two senior staff at The Guardian -- the Head of Editorial Innovation and the Head of Public Policy. One purpose was to seek my research-based advice on the role of generative AI in influencing journalism, and how The Guardian could respond to the production of fabricated Guardian articles by AI chatbots; another purpose was for me to learn from practitioners' own experience and insights. I will revisit this relationship in future as part of a piece of work looking at how social media should treat news content. |
Year(s) Of Engagement Activity | 2023 |
Description | Facilitation of a 'PhilMod' Seminar |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Industry/Business |
Results and Impact | Sarah Fisher led a discussion of the 'PhilMod' group (a community of academic researchers and technology professionals interested in the philosophical implications of content moderation and social media platform policy), offering an industry deep-dive into the Meta Oversight Board's decision in the 'Iran Protest Slogan' case. |
Year(s) Of Engagement Activity | 2023 |
Description | Follow-up dialogue with Meta re: Self-Defence Content |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Industry/Business |
Results and Impact | Meta invited me to multiple one-on-one follow-up dialogues with various members of their team. The aim was to hear my views on their further proposals to amend its Violence and Incitement Policy to make exceptions for speech encouraging/praising legitimate self-defence -- a change I have been advocating in response to Meta's removal of posts by Ukrainians who praised defensive action against Russian forces. |
Year(s) Of Engagement Activity | 2023 |
Description | International discussion workshops on content moderation |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Industry/Business |
Results and Impact | I am part of a group of philosophers and tech industry professionals (mostly based in Silicon Valley) who regularly meet to discuss cutting edge questions about content moderation and online speech governance. This community has provided me with an excellent space to test my research ideas, and develop relationships with relevant decisionmakers, many of whom have reached out to learn more about my work. |
Year(s) Of Engagement Activity | 2022,2023 |
URL | https://www.philmod.org/ |
Description | Interview with BBC about Elon Musk's purchase of Twitter |
Form Of Engagement Activity | A press release, press conference or response to a media enquiry/interview |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Media (as a channel to the public) |
Results and Impact | I was interviewed by the BBC about my views on how Elon Musk's takeover of Twitter will impact content moderation on that site. The interview led to several subsequent media requests. |
Year(s) Of Engagement Activity | 2022 |
URL | https://www.bbc.co.uk/news/business-61226282 |
Description | Interview with Irish Times about new hate speech legislation in Ireland |
Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Public/other audiences |
Results and Impact | I was interviewed by the Irish Times for a story on the Republic of Ireland's plans to push ahead with a new law restricting incitement to hatred. Against the law's critics, I argued that the law poses no fundamental threat to freedom of expression. |
Year(s) Of Engagement Activity | 2023 |
URL | https://www.ucl.ac.uk/news/headlines/2023/dec/pushing-ahead-hate-offences-bill-ireland-correct-strat... |
Description | Major workshop on Moderating Misinformation |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Industry/Business |
Results and Impact | I hosted a major workshop on the topic of Moderating Misinformation, bringing together a range of philosophers, social scientists, and staff at both Meta and the Meta Oversight Board to discuss how best to deal with misinformation on the platforms. It enabled me to present some crucial research from my project, which is currently under review, as well as cultivate relationships with key players in both industry and civil society. One attendee noted that he thought it was the best cross-disciplinary and industry-engaging event he'd ever been to -- bringing together a diverse group of people without any loss of rigour. It has already led to further engagements with both Meta and the Oversight Board. |
Year(s) Of Engagement Activity | 2022 |
Description | Media interview by the fact-checker Logically Facts |
Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Public/other audiences |
Results and Impact | I was interviewed by Logically Facts, a fact-checker and signatory of the International Fact-Checking Network (IFCN), for the piece "2024 Elections | Use of AI in campaigning is intensifying ahead of elections. Should we be worried?". |
Year(s) Of Engagement Activity | 2024 |
Description | Meeting with Meta Content Team on New Violent and Graphic Images Policy |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Industry/Business |
Results and Impact | I was invited to participate in a meeting with the content team at Meta deciding the policies for graphic content on Facebook and Instagram. I offered a philosophical framework, based on my research, for how to distinguish graphic images/videos that ought to be allowed from those that should not be, emphasizing the importance of allowing people to raise awareness of human rights abuses. Meta reported that the input was very helpful and solicited further research from my project as it develops. |
Year(s) Of Engagement Activity | 2022 |
Description | Meeting with Meta Content Teams on Algorithmic Deamplification |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Industry/Business |
Results and Impact | I was invited to present my co-authored research on algorithmic amplification to content teams at Meta. Meta offered a range of useful feedback on the research, indicating it was enormously helpful in clarifying how they should think about the problem. They have since followed up with requests for further information on the research, including invitations to come to their HQ to present more work to more senior colleagues. |
Year(s) Of Engagement Activity | 2022 |
Description | Meeting with Meta Content Teams on Content Standards in the Metaverse |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Industry/Business |
Results and Impact | I was invited to offer my views on how Meta should determine its rules for the speech and conduct to be allowed in the Metaverse. In particular, I offered my philosophical reflections on how to conceptualise the difference between public and private spaces, arguing that policy rules should be more relaxed in private spaces than in public ones. |
Year(s) Of Engagement Activity | 2022 |
Description | Meeting with Meta Content Teams on Exceptions to the Praising Violence Policy for Ukrainian users |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Industry/Business |
Results and Impact | I was invited to a meeting with the team at Meta that determines the speech rules on Facebook and Instagram. (This was with the company itself, not the Oversight Board.) I was asked for my philosophical feedback on whether the policy of allowing Ukrainian users to promote violence against Russian soldiers was acceptable. I argued that it was acceptable, and that Meta should not change its policy as many were pressuring it to do. |
Year(s) Of Engagement Activity | 2022 |
Description | Meta Oversight Board Stakeholder Roundtable on Documenting Human Rights Abuses |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Industry/Business |
Results and Impact | I was invited to participate in a roundtable discussion hosted by the Meta Oversight Board on the importance of protecting speech that documents human rights abuses. During the meeting, I offered a philosophical framework for explaining when sharing images/videos of violence should be allowed on social media, and when it shouldn't be. While the Oversight Board does not credit specific academics with influencing its views, the conclusions it reached were largely in line with the recommendations I advanced during that meeting. |
Year(s) Of Engagement Activity | 2022 |
URL | https://www.oversightboard.com/news/1884013608451154-oversight-board-upholds-meta-s-decision-in-suda... |
Description | Meta Oversight Board Stakeholder Roundtable on Moderating Content featuring Prisoners of War |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Industry/Business |
Results and Impact | I was invited to participate in a roundtable discussion hosted by the Meta Oversight Board on what platforms should do when people post images of prisoners of war on Facebook or Instagram. During the meeting, I offered a philosophical framework for explaining when sharing such images should be allowed on social media, and when it shouldn't be, arguing for the importance of warning screens and blurring of individual faces for such content. The conclusions the Board reached on this issue were largely in line with the recommendations I advanced during that meeting. |
Year(s) Of Engagement Activity | 2023 |
Description | Meta Oversight Board Stakeholder Roundtable on Relaxing Covid Misinformation Policy |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Industry/Business |
Results and Impact | I was invited to and participated in a roundtable discussion hosted by the Oversight Board on whether Meta should relax its policies on Covid misinformation. I offered a philosophical framework for how to think through this decision, arguing that as the seriousness of the pandemic abates, platforms should use deamplification, rather than outright removal, for much of the content currently banned. While the Oversight Board has not yet reported its decision on this issue to Meta, the event was helpful in clarifying my own views on the topic, which will feed into my current article in progress on misinformation. |
Year(s) Of Engagement Activity | 2022 |
Description | Meta Oversight Board Stakeholder Roundtable on Violence and Incitement in Conflict Zones |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Industry/Business |
Results and Impact | I was invited to participate in a Meta Oversight Board Stakeholder Roundtable on Violence and Incitement in Conflict Zones. In the meeting, I explained my perspective on how to think about the ethical dilemma that Meta faces when setting its incitement policy, and whether to create a carveout for speech advocating self-defence. I received extensive positive feedback from other members of the meeting that I had accurately identified the central problem facing Meta. |
Year(s) Of Engagement Activity | 2022 |
Description | Panel discussion on Generative AI at the Oxford Institute for Ethics in AI |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | I was part of a panel discussion on ChatGPT and other large language models that use AI to produce text. I responded to a presentation given by Seth Lazar, Professor of Philosophy at ANU, in which I argued that we need a new philosophical theory of content moderation for chatbots. On the basis of this exchange, I became convinced that my UKRI project needs to expand its scope to encompass online speech produced by AI systems, which pose enormous threats in terms of spreading misinformation. I have sketched plans for a new research paper on this topic. |
Year(s) Of Engagement Activity | 2023 |
URL | https://www.oxford-aiethics.ox.ac.uk/event/ethics-ai-colloquium-generative-ethics-and-new-bing-hybri... |
Description | Panellist at Democracy and Fundamental Rights in the Digital Age event |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Policymakers/politicians |
Results and Impact | Presentation at the panel "Regulation of networks and platforms: the Brazilian debate" at an international event co-hosted by the Brazilian Internet Steering Committee (CGI.br). Other members of the panel included a Brazilian MP working on a Brazilian platform regulation bill, and representatives of civil society organisations. |
Year(s) Of Engagement Activity | 2023 |
Description | Paper presentation at 2023 Annual Conference Society for the Advancement of Socio-Economics (SASE) |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Other audiences |
Results and Impact | Presentation of a paper on digital competition at academic conference, which sparked debates and discussions. |
Year(s) Of Engagement Activity | 2023 |
Description | Participation in roundtable Making 'Digital Streets' Safe? Progress on the Online Safety Bill |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Policymakers/politicians |
Results and Impact | Participation in a closed-door roundtable hosted at the House of Lords, attended by around 15 people from Parliament, academia and civil society organisations to debate the Online Safety Bill. The discussion sparked questions and debate about potential amendments to the Bill as it progressed through the final legislative stages. |
Year(s) Of Engagement Activity | 2023 |
Description | Participation in public event Brazil: The first 100 days of the Lula administration at the Blavatnik School of Government, University of Oxford |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Postgraduate students |
Results and Impact | Beatriz Kira presented on the digital policy agenda of Lula's government, sparking questions and discussions about the Brazilian digital platform regulation bill. |
Year(s) Of Engagement Activity | 2023 |
URL | https://www.bsg.ox.ac.uk/events/brazil-first-100-days-lula-administration |
Description | Platform Punishments: Suspensions Policy on Social Media |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Industry/Business |
Results and Impact | The UCL Digital Speech Lab organised an event with staff at Meta and the Oversight Board to showcase leading research on how platforms should punish violators. Bringing together researchers from philosophy, social science, computer science, and law, we discussed normative and empirical insights on how platforms should sanction users who violate their speech rules. Staff from Meta and the Oversight Board also offered presentations, highlighting the questions on which further research is needed. Attendees reported that the presentations substantially changed their views on this question, especially in relation to my proposal that insights from criminal justice (on rehabilitation and incapacitation) form the basis of platforms' policies. |
Year(s) Of Engagement Activity | 2023 |
Description | Podcast episode on 'Acts of Speech and How People Receive Them' |
Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Public/other audiences |
Results and Impact | Sarah Fisher discussed her collaborative research on speech act theory on University College London's 'Uncovering Politics' podcast. |
Year(s) Of Engagement Activity | 2023 |
URL | https://ucl-uncovering-politics.simplecast.com/episodes/acts-of-speech-and-how-people-recieve-them |
Description | Podcast episode on 'Death threats and online content moderation' |
Form Of Engagement Activity | A broadcast e.g. TV/radio/film/podcast (other than news/press) |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Public/other audiences |
Results and Impact | Sarah Fisher and Jeffrey Howard discussed their co-authored paper, 'Ambiguous threats: "Death to" statements and the moderation of online speech acts' on University College London's 'Uncovering Politics' podcast. |
Year(s) Of Engagement Activity | 2024 |
Description | Political Philosophy of Online Speech - Workshop |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Postgraduate students |
Results and Impact | On 23 March 2024, the Digital Speech Lab co-organised a daylong workshop featuring cutting-edge philosophical research about the limits of speech on social media. It was co-organised with the Law School and the School of Philosophy at the University of Southern California. The event featured many of the leading philosophers working on this theme, including myself, Regina Rini (York University, Canada), Etienne Brown (San Jose State University), Chloe Bakalar (Temple University / Meta), and Erin Miller (USC). The workshop was attended by faculty, undergraduate and postgraduate students, and members of the public. One of the speakers, Chloe Bakalar, is the Head of AI Ethics for Meta, ensuring that the event also engaged with the industry perspective. The event was highly constructive; I received feedback that my presentation changed several colleagues' minds about the use of AI in content moderation. Similarly, my own views on the ethics of algorithmic amplification were substantially influenced by the exchanges on that day. |
Year(s) Of Engagement Activity | 2023 |
Description | Presentation at 2023 Computers, Privacy and Data Protection (CPDP) Conference LatAm |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | Regional |
Primary Audience | Professional Practitioners |
Results and Impact | Beatriz Kira was a panellist at the panel "Data risks - compliance challenges with overlapping risk-based regulations" which was attended by over 20 people in person and live streamed online. The debate sparked discussions and questions about the interplay between different forms of platform regulation (eg online safety, competition, data protection). |
Year(s) Of Engagement Activity | 2023 |
URL | https://cpdp.lat/ |
Description | Presentation at Bentham House Conference 2023 - Competition Law and Policy in a Data-Driven Economy |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | Presented paper on the relationship between online safety regulation and pro-competition regulation at conference organised by UCL's Faculty of Laws, which sparked debates and led to a chapter in an edited volume to be published in 2024. |
Year(s) Of Engagement Activity | 2023 |
URL | https://www.ucl.ac.uk/laws/news/2023/may/ucl-laws-hosts-bentham-house-conference-2023-competition-la... |
Description | Presentation at Dynamic Competition Applied conference, European University Institute |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | Presentation of paper at panel "Regulating dynamic innovation in digital ecosystems" which sparked questions and discussion afterwards on the fitness of existing regulatory frameworks in adequately regulating dynamic sectors. |
Year(s) Of Engagement Activity | 2023 |
Description | Presentation at The Joint Session of the Aristotelian Society and Mind Association |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Other audiences |
Results and Impact | Sarah Fisher presented a paper co-authored with Jeffrey Howard at The Joint Session of the Aristotelian Society and Mind Association, which sparked questions and discussion afterwards (especially about social media content moderation, harmful speech, free speech, and threats). |
Year(s) Of Engagement Activity | 2023 |
Description | Presentation at Trade and Public Policy Network (TaPP) 1st Annual Conference |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Other audiences |
Results and Impact | Panellist at panel "Innovation and New Technologies: Managing Change in UK Trade Policy?" discussing the effects of trade provisions on domestic regulation of technologies and digital platforms, which sparked questions and discussions afterwards. |
Year(s) Of Engagement Activity | 2023 |
URL | https://tappnetwork.org/ |
Description | Presentation at a Book Symposium on Matthias Mahlmann's 'Mind and Rights' |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Other audiences |
Results and Impact | Sarah Fisher presented a commentary at a Book Symposium on Matthias Mahlmann's 'Mind and Rights' organised by University College London's Laws Department, which sparked questions and discussion afterwards (especially about moral cognition). |
Year(s) Of Engagement Activity | 2023 |
Description | Presentation at the Legal Technologies and the Bodies conference |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Postgraduate students |
Results and Impact | Beatriz Kira presented paper on non-consensual intimate deepfakes at academic conference hosted at Sciences Po, Paris, which sparked questions and discussions about online harms and the effectiveness of the UK Online Safety Act. |
Year(s) Of Engagement Activity | 2024 |
URL | https://www.sciencespo.fr/ecole-droit/en/events/legal-technologies-and-the-bodies |
Description | Presentation at the Society for Applied Philosophy Annual Conference |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Other audiences |
Results and Impact | Sarah Fisher presented a paper co-authored with Jeffrey Howard at the Society for Applied Philosophy's 2023 Conference, which sparked questions and discussion afterwards (especially about social media content moderation, harmful speech, free speech, and threats). |
Year(s) Of Engagement Activity | 2023 |
Description | Presentation of Digital Speech Lab research agenda to Ofcom |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Policymakers/politicians |
Results and Impact | Beatriz Kira and I met with Ofcom to present an overview of the research activities undertaken by our group. It provoked a range of questions from Ofcom on the harms of various forms of online speech, and about how best to regulate social media companies under the Online Safety Act. We have remained in touch with Ofcom; for example, representatives from Ofcom attended the January 2024 workshop with Demos. We have joined the Online Safety Act Network (led by Professor Lorna Woods and Maeve Walsh), through which academic and civil society organisations collaborate in offering recommendations on how Ofcom should enforce the Online Safety Act. We are also preparing to offer our own independent submissions in response to future calls for evidence. |
Year(s) Of Engagement Activity | 2023 |
Description | Presented paper at American Political Science Association Conference (APSA) |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | I presented two papers and a poster at the Annual Meeting of the American Political Science Association (APSA) in Los Angeles, USA: the first paper on fact-checker labels on Facebook at the Political Communication pre-conference, the second on the effectiveness of fact-checker labels in limiting the spread of misinformation on Facebook, and a poster on public attitudes towards and engagement with misinformation. I received appreciation for conducting and presenting cost-effective ways of studying misinformation in Global South contexts, and also received constructive feedback. |
Year(s) Of Engagement Activity | 2023 |
Description | Presented paper at Midwest Political Science Association Conference (MPSA) |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | I presented two papers: 1) "Going Negative in Microtargeted Ads on Social Media: Comparing 2021 & 2022 in Georgia", and 2) "Political Advertising in Uttar Pradesh 2022: Microtargeting on Facebook India" at the Midwest Political Science Association conference. Both studies were well received by scholars and attracted constructive feedback. |
Year(s) Of Engagement Activity | 2023 |
Description | Presented paper at World Association for Public Opinion Research Conference (WAPOR) |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | I presented a research paper, "Partisans Issues and Digital Media in the UK 2017 and 2019 General Election Campaigns" at the 76th Annual World Association for Public Opinion Research Conference, held in Salzburg, Austria. |
Year(s) Of Engagement Activity | 2023 |
Description | Presented paper at World Association for Public Opinion Research Conference Asia (WAPOR Asia) |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | I presented a paper, "Political Advertising on Facebook India," at the World Association for Public Opinion Research Asia Chapter (WAPOR Asia), held in Canberra, Australia, in December 2023. |
Year(s) Of Engagement Activity | 2023 |
Description | Public event on Online Safety Bill |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Public/other audiences |
Results and Impact | I co-organised and chaired an event discussing progress with the Online Safety Bill in the UK. It involved discussion with leading experts, including a member of the House of Lords who opposes the Bill, members of civil society organisations who support the Bill, and regulators from Ofcom who will be tasked with enforcing the Bill. It was a hugely stimulating discussion that also served to educate the broader university community and general public about the legislation. Many audience members approached me afterward to say that the event had changed their minds about what to think about the legislation. |
Year(s) Of Engagement Activity | 2022 |
URL | https://www.ucl.ac.uk/political-science/events/2022/dec/policy-practice-governing-online-speech-onli... |
Description | Regulating Online Defamation: A Workshop at the Centre for Ethics, Law, and Public Affairs, University of Warwick |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Policymakers/politicians |
Results and Impact | I was invited to participate in a daylong discussion workshop at the University of Warwick dedicated to the question of how social-media platforms and other powerful institutions should deal with defamatory speech. It was attended by a group of outstanding philosophers from across the UK and Europe. The workshop productively led to several key insights about platform intermediary liability, which I am currently developing in a draft research paper. |
Year(s) Of Engagement Activity | 2022 |
Description | Research Presentation to Meta Oversight Board staff |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Industry/Business |
Results and Impact | I was invited to present my co-authored research on algorithmic amplification and deamplification to the staff of the Meta Oversight Board, the organisation created to oversee the content/speech decisions taken by Meta for the governance of Facebook and Instagram. It was attended by a range of staff in-person at the London office, with several colleagues attending online from Silicon Valley. I received excellent feedback remarking that it was the first time they had seen an academic explore the abstract philosophical principles of a topic and then also trace those principles' concrete implications for what the companies should actually do. I also received feedback that the presentation inspired staff to pay greater attention to Meta's decisions on algorithmic amplification and deamplification. |
Year(s) Of Engagement Activity | 2022 |
URL | https://www.oversightboard.com/ |
Description | Research presentation at 28th International Seminar on Competition Policy of the Brazilian Institute of Competition Law and Regulation (IBRAC) |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Policymakers/politicians |
Results and Impact | My postdoctoral fellow Beatriz Kira presented her work on digital platform regulation at the 28th International Seminar on Competition Policy of the Brazilian Institute of Competition Law and Regulation (IBRAC). This helped her improve her arguments, as she received feedback that refined her views on the topic. The relevant paper is under review. |
Year(s) Of Engagement Activity | 2022 |
Description | Research presentation at American Philosophical Association (Pacific Division) - Freedom of Speech on Social Media |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | I was invited to present my early research as part of a colloquium of leading scholars on free speech at the American Philosophical Association (Pacific Division) meeting in Vancouver, Canada. This is the premier international conference for scholars in philosophy. Nearly 60 professional philosophers and philosophy postgraduate students attended the event. After presenting my research, a scholar from Stanford University offered an extended response to my presentation, and subsequently I received extensive feedback from audience members during the Q&A. Several audience members approached me to say it was the first time they had heard a rigorous philosophical examination of the moral duties of social-media companies, and several remarked that the arguments I was advancing had persuaded them. |
Year(s) Of Engagement Activity | 2022 |
URL | https://www.apaonline.org/page/2022P_Program |
Description | Research presentation at American Political Science Association - Political Extremism and Algorithmic Injustice |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | I presented a paper "Political Extremism and Algorithmic Injustice" at the American Political Science Association annual conference, as part of a panel exploring questions of free speech on social media. I received outstanding feedback on the paper, which has improved the draft substantially. I also received feedback that I had persuaded audience members to change their views on the topic in question. |
Year(s) Of Engagement Activity | 2022 |
URL | https://connect.apsanet.org/apsa2022/apsa-2022-digital-program/ |
Description | Research presentation at International Journal of Press/Politics Conference, Loughborough |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | My postdoctoral fellow Kiran Arabaghatta Basavaraj presented a co-authored paper on political advertising on Facebook in India at the International Journal of Press/Politics Conference in Loughborough, which provided extremely helpful input into the paper, helping him and his co-authors improve their arguments. The paper is currently under review. |
Year(s) Of Engagement Activity | 2022 |
Description | Research presentation at Jean Monnet Conference on Competition Law and Beyond |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | My postdoctoral fellow Beatriz Kira presented her paper "Efficient Digital Markets Through Privacy, Taxation, and Consumer Protection". It provided extremely helpful input into the paper, helping her improve her arguments. The paper is currently under review. |
Year(s) Of Engagement Activity | 2022 |
Description | Research presentation at Law and Philosophy Workshop at Pompeu Fabra University. |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | Regional |
Primary Audience | Professional Practitioners |
Results and Impact | I was invited to present my research at Pompeu Fabra University's Center for Law & Philosophy. I delivered a presentation to roughly 60 professional academics and research students working in law, philosophy, media studies, and political science, followed by an extended discussion. The level of discussion was extremely high; I took several meetings with colleagues after the session to receive further feedback. The research was strengthened by that feedback, I was inspired to produce a new paper on the basis of the discussion, and several audience members told me that I had persuaded them to change their views on questions of free speech on social media. |
Year(s) Of Engagement Activity | 2022 |
URL | https://www.upf.edu/web/lphi |
Description | Research presentation at London Philosophy Workshop |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | My postdoctoral fellow Sarah Fisher presented her paper "Social Affinity and the Uses of Language" at a workshop of philosophers in London, bringing together a range of scholars from the U.S. and U.K. The feedback she received has changed her mind on how to approach the argument in key places, enabling her to improve the argument before submitting for publication. |
Year(s) Of Engagement Activity | 2022 |
Description | Research presentation at Manchester Workshops in Political Theory - Counter-Speech |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | I delivered a paper as part of a workshop on "Misinformation, Hate Speech, and Counter-Speech" at the annual Manchester Workshops in Political Theory, the largest gathering of political theorists in the UK (and one of the largest in the world) with attendees from around the globe. These intensive workshops enable scholars working on similar topics to share related work in progress. My paper concerned the question of whether and why restrictions on speech are impermissible in light of the possibility of counter-speech. I received excellent feedback on the paper, as well as comments that I'd persuaded audience members to change their minds. |
Year(s) Of Engagement Activity | 2022 |
URL | https://sites.manchester.ac.uk/mancept/mancept-workshops/mancept-workshops-2022-programme/ |
Description | Research presentation at Manchester Workshops in Political Theory - Social Media |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | I delivered a paper as part of a workshop on social media at the annual Manchester Workshops in Political Theory, the largest gathering of political theorists in the UK (and one of the largest in the world) with attendees from around the globe. These intensive workshops enable scholars working on similar topics to share related work in progress. My paper concerned the question of whether and why social media companies have duties to combat harmful speech. I received excellent feedback on the paper, as well as comments that I'd persuaded audience members to change their minds. |
Year(s) Of Engagement Activity | 2022 |
URL | https://sites.manchester.ac.uk/mancept/mancept-workshops/mancept-workshops-2022-programme/ |
Description | Research presentation at NYU-Abu Dhabi Center for Cyber Security |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Postgraduate students |
Results and Impact | I was invited to present my research on the use of AI in social media content moderation to the NYU-Abu Dhabi Center for Cyber Security, one of the leading research centers in the Middle East for issues at the intersection of tech policy, corporate security, and national security. This talk enabled me to engage with leading computer scientists and security experts, testing the technical assumptions upon which my philosophical arguments were based and thereby strengthening them through the constructive feedback I received. This engagement was an example of my project's commitment to bringing philosophy to wider audiences throughout the research community, to enable fruitful cross-disciplinary fertilisation of ideas. |
Year(s) Of Engagement Activity | 2024 |
Description | Research presentation at Oxford Institute for Ethics in AI |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | I was invited to present my research on the role of AI in online content moderation to a large in-person and online audience at the Oxford Institute for Ethics in AI at Oxford University. I received excellent feedback on my research, helping me to plan for future papers concerning the ethics of AI (e.g., a chapter I am currently writing on using AI to catch hate speech as part of the Oxford Handbook on Hate Speech). I also received feedback from several audience members that I had persuaded them to change their views. |
Year(s) Of Engagement Activity | 2022 |
URL | https://www.oxford-aiethics.ox.ac.uk/event/ethics-ai-lunchtime-research-seminars-hybrid-oxford-remot... |
Description | Research presentation at Princeton University |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | Local |
Primary Audience | Postgraduate students |
Results and Impact | I was invited to discuss my work in progress with an advanced group of PhD students at Princeton University. I received excellent feedback that has helped me to improve my views; this has fed directly into work which is currently under review. |
Year(s) Of Engagement Activity | 2022 |
Description | Research presentation at World Association for Public Opinion Research Conference |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | My postdoctoral fellow Kiran Arabaghatta Basavaraj presented his paper "Misinformation and Electoral Integrity Perceptions in Uttar Pradesh 2022: A Survey Experiment". The discussion provided extremely helpful input, helping him improve his arguments. The paper is currently under review. |
Year(s) Of Engagement Activity | 2022 |
Description | Research presentation at the Centre for Ethics, Law and Public Affairs at University of Warwick - Platform Justice |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | Regional |
Primary Audience | Professional Practitioners |
Results and Impact | I was invited to present my research to the Centre for Ethics, Law and Public Affairs at the University of Warwick, which is among the most prestigious and rigorous venues in the UK for the discussion of research in political, moral, and legal philosophy. I pre-circulated a paper, which everyone studied in advance, and the entire two-hour session then consisted of rigorous questions and discussion of the paper. Based on the feedback I received, I believe I have substantially strengthened the arguments; the outputs arising from this research will accordingly be much stronger and better placed. The event was attended by the broader community of scholars in political, moral, and legal philosophy at the University of Warwick. |
Year(s) Of Engagement Activity | 2022 |
URL | https://warwick.ac.uk/fac/soc/pais/research/celpa/ |
Description | Research presentation at the Centre for PPE at the University of Groningen - The Ethics of Content Moderation |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Professional Practitioners |
Results and Impact | I was invited to deliver a research presentation at the Centre for PPE (Philosophy, Politics & Economics) at the University of Groningen. I received excellent feedback on my paper "The Ethics of Content Moderation on Social Media." Several audience members indicated that it persuaded them to change their views. |
Year(s) Of Engagement Activity | 2023 |
URL | https://www.rug.nl/filosofie/organization/ppe/events |
Description | Speak at Generative AI Workshop at Australian National University |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Other audiences |
Results and Impact | This event brought together philosophers, computer scientists, and industry professionals (from companies like Eleuther AI and Google) to discuss new research about generative artificial intelligence. I gave a paper on how social media platforms should govern harmful AI-generated content, arguing that the solution is not to create new rules for this content but instead to enforce existing rules more effectively. Harmful misinformation should be removed regardless of whether it was posted by machine or human, and regardless of the specific technology used to produce it. I received enormously helpful feedback on this argument, especially from computer scientists and industry professionals, who helped me identify certain mistaken assumptions I had about what is and is not technically feasible with current AI-detection systems. In turn, I received feedback that I had changed several other participants' minds about the topic. |
Year(s) Of Engagement Activity | 2024 |
Description | Submission of evidence to UK Government's Pornography Review |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Policymakers/politicians |
Results and Impact | Submission of evidence to UK Government's Pornography Review, discussing the inadequacy of the Online Safety Act in protecting victims of non-consensual intimate deepfakes and the need for legislative change. The submission will be considered in the formal review conducted by the government and a report is expected by the end of 2024. |
Year(s) Of Engagement Activity | 2023 |
URL | https://www.onlinesafetyact.net/uploads/Pornography-Review---Written-evidence-by-Dr-Beatriz-Kira.pdf |
Description | Submission to House of Lords Communications & Digital Committee |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Policymakers/politicians |
Results and Impact | The House of Lords Communications & Digital Committee solicited evidence for its inquiry into Large Language Models; in response my Lab submitted a response. The response argued that unconstrained LLMs may generate harmful speech, including misinformation and hate speech, which can endanger individuals, erode trust in democratic institutions, and undermine the overall health of the digital public sphere. Consequently, firms designing and releasing LLMs have ethical responsibilities to mitigate these risks. The submission unpacked these ideas and explained their implications. The Lords' final report cited our submission twice -- in relation to the risks of mis- and dis-information (especially in the form of synthetic "deepfakes") and in relation to harms caused by biases in AI training data. |
Year(s) Of Engagement Activity | 2023 |
URL | https://committees.parliament.uk/writtenevidence/124269/pdf/ |
Description | Submission to Meta Oversight Board - Electoral Misinformation Case |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Industry/Business |
Results and Impact | The Oversight Board reviews content policy rules and decisions by Meta Platforms (Facebook, Instagram, and Threads); it has the power to reverse Meta's decisions with regard to particular pieces of content, order responses from Meta to its policy advisories, and solicit information to increase transparency. The Board solicited public comments on Meta's policy on electoral misinformation (focused on particular misinformation that went viral during a recent Australian referendum). We argued in our public comment that Meta is correct to have a policy against electoral misinformation, and that it was correct to enforce the policy against the content at issue in the case -- though we also recommended various improvements to Meta's electoral misinformation policy. The Board has not yet ruled on this case; we will report in the next round whether our submission was cited as influencing its decision. |
Year(s) Of Engagement Activity | 2024 |
Description | Submission to Meta Oversight Board - Manipulated Media Case |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Industry/Business |
Results and Impact | The Oversight Board reviews content policy rules and decisions by Meta Platforms (Facebook, Instagram, and Threads); it has the power to reverse Meta's decisions with regard to particular pieces of content, order responses from Meta to its policy advisories, and solicit information to increase transparency. The Board solicited public comments on Meta's policy on "Manipulated Media" (such as AI-generated deepfake videos), in relation to a decision regarding a doctored clip of President Biden. My Lab submitted a public comment arguing in detail that Meta's policy in this area was incoherent and unjustified. In response, the Board cited our comment three times, and delivered a ruling that broadly mirrored our own reasoning. |
Year(s) Of Engagement Activity | 2023 |
URL | https://osbcontent.s3.eu-west-1.amazonaws.com/PC-18036.pdf |
Description | UCL Workshop on Free Speech and Social Media |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | I convened and hosted this small invitation-only workshop on foundational philosophical questions about free speech and social media, bringing together leading scholars from across the U.S. and U.K. It provided an excellent opportunity to share and receive feedback on early research from my UKRI project, helping to set the stage for further research. |
Year(s) Of Engagement Activity | 2022 |
Description | Understanding Hate Speech in Ordinary, Legal and Online Language Seminar |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Other audiences |
Results and Impact | Sarah Fisher and Jeffrey Howard presented a co-authored working paper at the University of Reading's Centre for Cognition Research's seminar series on Understanding Hate Speech in Ordinary, Legal and Online Language, which sparked questions and discussion afterwards (especially about social media content moderation, harmful speech, free speech, and threats). |
Year(s) Of Engagement Activity | 2023 |
Description | University of Reading Invited Speaker Presentation |
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | National |
Primary Audience | Other audiences |
Results and Impact | Sarah Fisher was invited to present a paper at the University of Reading's Department of Philosophy's visiting speaker series, which sparked questions and discussion afterwards (especially about large language models and their impact on public discourse). |
Year(s) Of Engagement Activity | 2024 |
Description | Workshop at American Political Science Association - "Democratic Speech in Divided Times" |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | I organised a session at the annual conference of the American Political Science Association exploring the latest philosophical work to address questions of democratic discourse: Maxime Lepoutre's "Democratic Speech in Divided Times". As I criticise Lepoutre's views in my manuscript in progress, this provided an excellent opportunity to test-drive my arguments. |
Year(s) Of Engagement Activity | 2022 |
URL | https://connect.apsanet.org/apsa2022/apsa-2022-digital-program/ |
Description | Workshop on Meta's Dangerous Organization and Individuals (DOI) Borderline Policy |
Form Of Engagement Activity | A formal working group, expert panel or dialogue |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Industry/Business |
Results and Impact | Sarah Fisher participated in a workshop organised jointly by Meta and Search For Common Ground to advise on the development of Meta's Dangerous Organization and Individuals (DOI) Borderline Policy. |
Year(s) Of Engagement Activity | 2023 |
Description | Workshop on my book manuscript in progress at Association for Political Thought Conference, Oxford University |
Form Of Engagement Activity | Participation in an activity, workshop or similar |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Professional Practitioners |
Results and Impact | I participated in a workshop dedicated to an in-depth discussion of the book monograph I am currently writing on freedom of speech in the digital age. I received stellar comments on my book from a leading group of philosophers, many of which are motivating fundamental improvements to the argument. I also received several encouraging comments from audience members who said they believed the book would be a major contribution to debates on freedom of speech. |
Year(s) Of Engagement Activity | 2023 |
URL | https://www.associationforpoliticalthought.ac.uk/conference/2022-conference-2/ |