Radicalisation and Polarisation in New Media: The Case of YouTube

Lead Research Organisation: University of Bristol
Department Name: Economics

Abstract

Having more users than any other social networking site (Auxier and Anderson, 2021), YouTube is one of the most influential forms of new media in modern society. As a platform, it is both a playground for entertainment and an agora in which ideas are exchanged and new opinions develop. This second role, however, has attracted considerable controversy: it has been suggested that the platform contributes to the radicalisation and polarisation of opinions, and it has been blamed for being a hotbed of fake news. Yet there is little empirical evidence to substantiate these claims, and the interaction between misinformation and radicalisation or polarisation on the platform has not been thoroughly explored.

This project uses novel text and visual data, together with recent advances in machine learning (ML) and natural language processing (NLP), to formally assess these claims.

Using the categories assigned to YouTube channels in Ribeiro et al. (2019) as labels, I will train ML algorithms to predict radicalisation measures for each video, augmenting thumbnail images to increase predictive power. The use of video content as a variable is novel in this area: previous studies have focused mainly on comments, so incorporating video content improves upon the methods previously used to study radicalisation.
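
As an illustration only, the sketch below shows one way the thumbnail step could look: augmented thumbnail images are used to fine-tune a pretrained image classifier on the channel categories taken as labels. The directory layout, model choice, and hyperparameters are assumptions made for this example rather than the project's actual pipeline.

```python
# Minimal sketch: fine-tune a pretrained classifier on augmented thumbnails.
# Assumes thumbnails are stored as thumbnails/train/<category>/<video_id>.jpg,
# where <category> is the channel label from Ribeiro et al. (2019). All paths
# and hyperparameters here are illustrative.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import models, transforms
from torchvision.datasets import ImageFolder

# Data augmentation: random crops, flips and colour jitter enlarge the
# effective training set and should improve predictive power.
train_tf = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.7, 1.0)),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

train_data = ImageFolder("thumbnails/train", transform=train_tf)
loader = DataLoader(train_data, batch_size=64, shuffle=True)

# Transfer learning: replace the final layer of a pretrained ResNet with a
# classifier over the channel categories used as labels.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(train_data.classes))

optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # a single pass shown for brevity
    optimiser.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimiser.step()
```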

The first part of the project concerns radicalisation. YouTube's recommendation system is a key part of how users explore videos on the platform, and anecdotal evidence suggests that the algorithm tends to lead viewers towards more extreme content (Tufekci, 2018). Despite this, few studies have investigated the algorithm's effect on users' exposure to such content.

This research will begin by identifying seed videos across a number of topics (e.g., vaccination, global warming, elections) and designing an algorithm to navigate the recommendation network generated by the YouTube algorithm, collecting thumbnail images and comments from each video. From these text and visual data, measures of radicalisation will be constructed and used to quantify formally how radicalisation changes as we move through the network.
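
To make the crawling step concrete, a minimal sketch is given below. It performs a breadth-first walk over the recommendation network starting from a set of seed videos; get_recommended_video_ids and get_video_metadata are hypothetical placeholders for whatever data-collection layer is ultimately used, since this abstract does not commit to a specific API or scraper.

```python
# Breadth-first crawl of the recommendation network from a set of seed videos.
# The two helper functions below are hypothetical placeholders, not real
# YouTube API calls.
from collections import deque


def get_recommended_video_ids(video_id: str) -> list[str]:
    raise NotImplementedError  # placeholder: IDs recommended from this video


def get_video_metadata(video_id: str) -> dict:
    raise NotImplementedError  # placeholder: thumbnail URL, comments, transcript


def crawl(seed_ids: list[str], max_depth: int = 3, per_node: int = 5) -> dict:
    """Explore recommendations up to max_depth hops from the seed set."""
    visited: dict[str, dict] = {}
    queue = deque((vid, 0) for vid in seed_ids)
    while queue:
        video_id, depth = queue.popleft()
        if video_id in visited:
            continue
        meta = get_video_metadata(video_id)
        meta["depth"] = depth  # distance from the seed set
        visited[video_id] = meta
        if depth < max_depth:
            for rec in get_recommended_video_ids(video_id)[:per_node]:
                queue.append((rec, depth + 1))
    return visited


# Usage (video IDs made up): network = crawl(["abc123", "def456"], max_depth=2)
```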

The second part of the project concerns polarisation. Using text data, we seek to construct measures of polarisation on YouTube, allowing us to identify how polarisation has evolved over time, both within existing channels and through the creation of new channels. Analysing changes across both the intensive and extensive margins will let us paint a fuller picture of this evolution.
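
One conventional way to formalise this split, shown purely as an illustration (the notation is ours, not taken from the project), writes aggregate polarisation in period t as a weighted average over active channels and separates its change into a continuing-channel (intensive) term and entry and exit (extensive) terms:

```latex
% P_{c,t}: polarisation of channel c in period t; w_{c,t}: its weight (e.g. view share);
% C_t: the set of channels active in period t. Aggregate polarisation:
P_t = \sum_{c \in C_t} w_{c,t} \, P_{c,t}

% Identity decomposing the change between periods t-1 and t:
\Delta P_t =
  \underbrace{\sum_{c \in C_{t-1} \cap C_t} \bigl( w_{c,t} P_{c,t} - w_{c,t-1} P_{c,t-1} \bigr)}_{\text{intensive margin (continuing channels)}}
  + \underbrace{\sum_{c \in C_t \setminus C_{t-1}} w_{c,t} P_{c,t}}_{\text{entry}}
  - \underbrace{\sum_{c \in C_{t-1} \setminus C_t} w_{c,t-1} P_{c,t-1}}_{\text{exit}}
```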

To measure polarisation we follow a method similar to that of Gentzkow et al. (2019), who used US congressional speech to measure political polarisation. We introduce a choice model to capture the content choices made by video creators, with transcript text data used to measure video content. Polarisation measures are then constructed from the choice probabilities estimated from this model; they can be interpreted as the ease with which an observer can guess the ideological origin of a video after observing a single phrase chosen by its creator. We generalise Gentzkow et al.'s measure to incorporate multidimensional ideological origins, which are better suited to YouTube.
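
For concreteness, the two-group version of the measure underlying this approach can be written as below; the notation is adapted from Gentzkow et al. (2019), and the multidimensional generalisation described above is not shown.

```latex
% q_{Rj}, q_{Lj}: probability that a creator of ideological type R or L chooses phrase j.
% Posterior belief of an observer with a neutral (1/2) prior after hearing phrase j:
\rho_j = \frac{q_{Rj}}{q_{Rj} + q_{Lj}}

% Polarisation: the expected probability of correctly guessing the creator's type
% from a single phrase; \pi = 1/2 means no polarisation, \pi = 1 complete separation.
\pi = \frac{1}{2} \sum_j \bigl( q_{Rj}\,\rho_j + q_{Lj}\,(1 - \rho_j) \bigr)
```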

Naive estimates of such polarisation measures have been shown to be biased (Gentzkow et al., 2019), because the set of possible phrases is large relative to the amount of speech observed for any one speaker. To address this, we will use strategies such as leave-out and regularised estimators.
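
A minimal sketch of the leave-out idea in the two-group case is given below, assuming a creator-by-phrase count matrix. The function name, data layout, and treatment of phrases unseen in the leave-out sample are illustrative assumptions, not the estimator we will ultimately implement.

```python
# Leave-out plug-in estimate of the two-group polarisation measure: for each
# creator, the phrase-level posteriors are estimated from all *other* creators,
# which removes the correlation between a creator's own sampling noise and the
# posterior that drives the finite-sample bias.
import numpy as np


def leave_out_polarisation(counts: np.ndarray, group: np.ndarray) -> float:
    """counts: (n_creators, n_phrases) phrase counts; group: 1 = 'right', 0 = 'left'."""
    n = counts.shape[0]
    q_i = counts / counts.sum(axis=1, keepdims=True)  # each creator's phrase shares
    guesses = {0: [], 1: []}
    for i in range(n):
        keep = np.arange(n) != i  # leave creator i out
        q_r = counts[keep & (group == 1)].sum(axis=0)
        q_l = counts[keep & (group == 0)].sum(axis=0)
        q_r = q_r / q_r.sum()
        q_l = q_l / q_l.sum()
        rho = q_r / (q_r + q_l)  # posterior on 'right' for each phrase
        rho = np.nan_to_num(rho, nan=0.5)  # phrases unseen by others: uninformative
        correct = rho if group[i] == 1 else 1.0 - rho
        guesses[int(group[i])].append(float(q_i[i] @ correct))
    # Average within each group first, as in Gentzkow et al. (2019).
    return 0.5 * (np.mean(guesses[1]) + np.mean(guesses[0]))


# Usage with toy data: two 'right' and two 'left' creators, three phrases.
# counts = np.array([[5, 1, 0], [4, 2, 0], [0, 1, 5], [1, 0, 4]])
# print(leave_out_polarisation(counts, np.array([1, 1, 0, 0])))
```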

Publications


Studentship Projects

Project Reference: ES/P000630/1
Start: 30/09/2017   End: 29/09/2028

Project Reference: 2722475 (Studentship, related to ES/P000630/1)
Start: 30/09/2022   End: 29/09/2025
Student Name: Oska Sheldon-Fentem