Prebunking: Psychological "vaccines" against online misinformation, extremist recruitment and vaccine hesitancy.

Lead Research Organisation: University of Cambridge
Department Name: Psychology

Abstract

Online misinformation is recognised by many as one of the most important challenges of the 21st century. In a novel attempt to address this problem, my research has focused on developing psychological "vaccines" against misinformation, by building on an existing framework from social psychology called "inoculation theory", which focuses on how to build resistance against unwanted persuasion. It posits that it is possible to confer psychological resistance against manipulation attempts by pre-emptively exposing people to a weakened version of a deceptive argument, much like a real vaccine confers resistance against a pathogen after being injected with a severely weakened version of it.
In my research, I proposed that inoculation theory could be used to build general resistance against the strategies commonly used in the production of misinformation, as opposed to the specific resistance against individual arguments that is standard in inoculation research. To achieve this, I combined active experiential learning with perspective-taking (an approach called "active inoculation" or "pre-bunking"). Gamification proved an excellent fit for this purpose.

After a promising pilot study with a "fake news card game" that I built, I developed Bad News, a free online social impact game rooted in insights from social psychology and media studies, in which players become a fake news "tycoon". The game has since been translated into 15 languages and has so far been played by more than a million people. To test the effectiveness of Bad News as an anti-misinformation "vaccine", I created a "fake news detection scale" and developed a system for collecting survey data within the game to test whether people's ability to spot misinformation improved after playing. The results were highly robust and replicated in randomised controlled trials as well as in 4 other language versions of the game. This work has so far resulted in 6 peer-reviewed publications.

During this fellowship, I will finalise a number of papers that explore important unanswered questions about inoculation theory in the context of misinformation: how long do the inoculation effects last, how strong are the effects when people are exposed to various types of misinformation, and can we detect so-called "post-inoculation" effects? I will also co-develop a psychometrically validated "fake news detection" scale, to be used in misinformation research.

The second goal of this fellowship is to explore whether the above approach of using "active" inoculation interventions can be applied in other domains in which online misinformation is a threat. Concretely, I will investigate this question in three key issue domains: online extremist recruitment, vaccine hesitancy, and misinformation surrounding the recent coronavirus pandemic. This fellowship will thus allow me to investigate whether inoculation theory can be leveraged as a scalable solution to the pervasive problem of misinformation across issue domains: can we create "broad-spectrum vaccines" against misinformation, and how can we scale these "vaccines" so as to maximise their uptake?

This fellowship thus presents an excellent opportunity to expand on my current work, both academically and in practice. I anticipate that by the end of this fellowship, I will be in an excellent position to become a leading researcher in the domain of online misinformation and persuasion research. After this fellowship, I hope to find a position as a university lecturer. In addition, I will collaborate and engage with external partners (e.g. social media companies such as WhatsApp) to ensure that the insights developed during this fellowship are applied in practice, much like we did previously with the Bad News game. After all, a vaccine only works if many people make use of it.