Centre for Cyberhate Research & Policy: Real-Time Scalable Methods & Infrastructure for Modelling the Spread of Cyberhate on Social Media

Lead Research Organisation: Cardiff University
Department Name: Sch of Social Sciences

Abstract

The UK Government's Hate Crime Action Plan (Home Office 2016) stresses the need to tackle hate speech on social media by bringing together policymakers with academics to improve the analysis and understanding of the patterns and drivers of cyberhate and how these can be addressed. Furthermore, the recent Home Affairs Select Committee Inquiry (2016) 'Hate Crime and its Violent Consequences' highlighted the role of social media in the propagation of hate speech (on which the proposers were invited to provide evidence). This proposal acknowledges that the migration of hate to social media is non-trivial, and that empirically we know very little about the utility of web-based forms of data for measuring online hate speech and counter-hate speech at scale and in real-time. This became particularly apparent following the referendum on the UK's future in the European Union, where an inability to classify and monitor hate speech and counter-speech on social media in near-real-time and at scale hindered the use of these new forms of data in policy decision making in the area of hate crime. It was months later that small-scale grey literature emerged providing a 'snap-shot' of the problem (Awan & Zempi 2016, Miller et al. 2016). In partnership with the UK Head of the Cross-Government Hate Crime Programme at the Department for Communities and Local Government (DCLG), and the London Mayor's Office for Policing and Crime's (MOPAC) new Online Hate Crime Hub, the proposed project will co-produce evidence on how social media data, harnessed by new Social Data Science methods and scalable infrastructure, can inform policy decision making. We will achieve this by taking the social media reaction to the referendum on the UK's future in the European Union as a demonstration study, and will co-develop with the Policy CI transformational New Forms of Data Capability contributions, including: (i) semi-automated methods that monitor the production and spread of cyberhate around the case study and beyond; (ii) complementary methods to study and test the effectiveness of counter-speech in reducing the propagation of cyberhate; and (iii) a technical system that can support real-time analysis of hate and counter-speech on social media at scale following 'trigger events', integrated into existing policy evidence-based decision-making processes. By estimating the propagation of cyberhate interactions within social media using machine learning techniques and statistical models, the system will assist policymakers in identifying areas that require policy attention and better-targeted interventions in the field of online hate and antagonistic content.

Planned Impact

In line with the drive behind the call, this project will co-produce a strong evidence base on the utility of social media data to inform policy development, intervention and decision making. The project will provide a case study that will demonstrate how these data, when effectively and efficiently collected, transformed and repurposed using Social Data Science tools and methods, can have a transformative impact on how governments work to address contemporary pressing social problems. We have selected cyberhate in the aftermath of the referendum on the UK's future in the EU as a case study for understanding the relationship between social media data and policymaking.

We will work closely with the Policy CI, the UK Head of the Cross-Government Hate Crime Programme at the Department for Communities and Local Government, and the London Mayor's Office for Policing and Crime's (MOPAC) Online Hate Crime Hub, to co-produce an evidence base on the utility of social media data for policy and decision making. We will achieve this by:

--Involving the UK Head of the Cross-Government Hate Crime Programme and the MOPAC Online Hate Crime Hub in the design, testing, analysis and implementation phases of the project, to ensure maximum buy-in at a policy level

--Running requirements gathering workshops with policymakers for tool and system development

--Testing the system developed in WP6 in a policy environment and writing a lessons-learned report

--Conducting post-hoc interviews with policymakers to inform an ESRC Policy Evidence Briefing and an Ethics Guide for Policymakers

--Providing free access to new Lab social media hate and counter speech classification tools for not-for-profit use
 
Description HateLab set out to demonstrate how new and emerging forms of data related to online hate speech could be effectively marshalled to benefit policy and operational decision making. All five objectives of the original HateLab grant were met, and eight of the ten objectives of the costed grant extensions were met (see detail below); the remaining two were pushed back by successive delays to Brexit.

Objective 1: Develop and integrate real-time scalable machine classification tools for social media data to detect cyberhate and counter-speech in the aftermath of the referendum on the UK's future in the EU.

Status: Complete - Updated machine learning classifiers for race, religion, sexual orientation, disability, national identity (Polish) and counter-speech were developed, tested and published in 4* journals and conferences. The development of the classifier for anti-Polish hate speech stemmed from our partnership with the start-up Samurai Labs, based in Poland and California. Our hate speech classifiers won first place in the Hate Speech measurement task at the International Workshop on Semantic Evaluation, sponsored by SIGLEX and Microsoft.
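For illustration of the kind of supervised text-classification pipeline this objective refers to, a minimal sketch is given below. It assumes scikit-learn and a hypothetical labelled file tweets_labelled.csv with 'text' and 'label' columns; it is not the published HateLab classifier, which uses richer features and models.

```python
# Minimal supervised hate-speech classification sketch (illustrative only).
# Assumes a hypothetical labelled file tweets_labelled.csv with columns
# 'text' and 'label'; the project's published classifiers are more advanced.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

df = pd.read_csv("tweets_labelled.csv")  # hypothetical training data
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42, stratify=df["label"])

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=2)),   # word and bigram features
    ("clf", LogisticRegression(max_iter=1000, class_weight="balanced")),
])
pipeline.fit(X_train, y_train)
print(classification_report(y_test, pipeline.predict(X_test)))
```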

Objective 2: Design and implement online experiments to test the effectiveness of counter hate speech to inform model building and policy decision making.

Status: Complete - Stage one models showed that counter-speech was effective at curbing the length of hate speech threads. Stage two models, using data from Stop Hate UK counter-speech volunteers (funded by the MHCLG), are being developed to confirm these results.
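As an illustration only, the sketch below shows one way a counter-speech effect on thread length could be modelled: a count regression of thread length on a counter-speech indicator. The file threads.csv, its columns and the model form are hypothetical assumptions, not the project's stage one or stage two specification.

```python
# Illustrative thread-level model (sketch only, not the project's specification).
# Assumes a hypothetical threads.csv with one row per thread:
#   length         - number of hateful posts in the thread
#   counter_speech - 1 if counter-speech appeared in the thread, else 0
#   followers_log  - log follower count of the initiating account (control)
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

threads = pd.read_csv("threads.csv")  # hypothetical data
model = smf.glm("length ~ counter_speech + followers_log",
                data=threads, family=sm.families.NegativeBinomial()).fit()
print(model.summary())  # a negative counter_speech coefficient would indicate shorter hate threads
```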

Objective 3: Build statistical models of the propagation of post-EU referendum hate speech that identify the enablers and inhibitors of hate.

Status: Complete - Brexit hate speech models showed that national newspaper headlines (e.g. the Telegraph Brexit 'Mutineers' headline) were significant predictors of online hate speech. Results formed part of the ITV Exposure documentary 'Brexit Online Uncovered', viewed by over 3.4 million.
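The sketch below illustrates, in simplified form, the kind of time-series count model that could link press 'trigger events' to daily hate-tweet volumes. The file daily_counts.csv and its columns are hypothetical, and the published propagation models are considerably richer than this.

```python
# Illustrative time-series count regression (sketch only).
# Assumes a hypothetical daily_counts.csv with columns:
#   date, hate_count, headline_event (1 on days a relevant front page ran, else 0)
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

daily = pd.read_csv("daily_counts.csv", parse_dates=["date"]).sort_values("date")
daily["hate_lag1"] = daily["hate_count"].shift(1)  # crude control for temporal dependence
daily = daily.dropna()

model = smf.glm("hate_count ~ headline_event + hate_lag1",
                data=daily, family=sm.families.NegativeBinomial()).fit()
print(model.summary())  # a positive headline_event term mirrors the reported association
```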

Objective 4: Integrate the statistical models into a real-time scalable technical system for policymakers that refreshes results with 'live' 'big' data and provides visualisations of hate and counter speech outputs.

Status: Complete - Phase 1 of the HateLab Dashboard was developed in Q2 2018.
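A minimal sketch of the general real-time pattern this objective describes (consume a stream of posts, classify each one, and maintain rolling counts that a front-end can visualise) is shown below. The stream source, classifier and storage here are hypothetical placeholders, not the HateLab Dashboard architecture.

```python
# Sketch of a streaming classify-and-aggregate loop (illustrative only).
from collections import Counter
from datetime import datetime

def incoming_posts():
    """Placeholder for a live social media stream (e.g. a message-queue consumer)."""
    yield {"text": "example post", "created_at": datetime.utcnow()}

def classify(text):
    """Placeholder for a trained hate/counter-speech classifier."""
    return "hate" if "example-slur" in text.lower() else "neutral"

minute_counts = Counter()  # (minute, label) -> count, polled by the visualisation layer

for post in incoming_posts():
    label = classify(post["text"])
    bucket = post["created_at"].strftime("%Y-%m-%d %H:%M")
    minute_counts[(bucket, label)] += 1
```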

Objective 5: Expand and refine existing ethical guidance on the use of new forms of data for policy and decision making in the area of online hate speech.

Status: Complete - New guidelines developed and published ('Towards an ethical framework for publishing Twitter data in social research: taking into account users' views, online context and algorithmic estimation', Sociology, 51(6), pp. 1149-1168; 'Users' views of ethics in social media research: informed consent, anonymity and harm', in Woodfield, K. ed., The Ethics of Online Research, Vol. 2, Advances in Research Ethics and Integrity, Emerald Publishing, pp. 27-51; 'Linking survey and Twitter data: informed consent, disclosure, security and archiving', Journal of Empirical Research on Human Research Ethics; 'Linking Twitter and survey data: the impact of survey mode and demographics on consent rates across three UK studies', Social Science Computer Review)

HateLab Costed Extensions:

Objective 1: Gather detailed technical, practical and legal requirements prior to Dashboard installation.

Status: Complete - Meetings with the National Cyber Hate Crime Hub were held, and the requirements gathered have informed the development of the Dashboard.

Objective 2: Purchase the Twitter Firehose for the evaluation period to ensure a census of hate tweets is captured by the Dashboard.

Status: Complete - Firehose purchased for a 12-month period.

Objective 3: Develop a cloud backend for the dashboard, allowing for enhanced data storage, faster data processing and visualisation required by the additional demands of the Twitter Firehose.

Status: Complete - Software company appointed to develop cloud backend.

Objective 4: Re-implement the Dashboard front-end (required for compatibility with the new cloud backend), which will provide an opportunity to add further functionality (requested by the policy stakeholder in a September 2018 meeting).

Status: Ongoing - Software development company appointed to develop the Dashboard front-end. The Dashboard went live in Q3 2019; evaluation is scheduled for Q3/Q4 2020.

Objective 5: Install equipment and Dashboard.

Status: Complete - Quad-screen HateLab Dashboard installed and running within the Hub at Greater Manchester Police.

Objective 6: Face-to-face training of Hub staff in use of Dashboard.

Status: Complete - Multiple training sessions held at GMP.

Objective 7: Monitor online hate speech in Operational Case Study 1 (the UK's exit from the EU), provide extended on-site and remote support in the aftermath of Brexit, and conduct a technical evaluation of the Dash following the Case Study via workshops with staff.

Status: Ongoing - Several smaller case studies (e.g. the abuse of MPs in the lead-up to Brexit) have been conducted by the Hub with support from HateLab.

Objective 8: First round of technical updates to Dashboard.

Status: Ongoing - Technical updates in progress.

Objective 9: Test the updated Dash in Operational Case Study 2 (to be determined at the time, but possibly related to Syria, Gaza or a terrorist threat), and conduct a technical evaluation of the Dash following Case Study 2 via workshops with staff.

Status: Ongoing throughout 2020.

Objective 10: Write evaluation and lessons learnt report on use of Dash in operational setting and make recommendations on continued use of Dash in Hub.

Status: Ongoing.
Exploitation Route During development of the HateLab Dashboard in 2018/19, multiple organisations came forward requesting the technology for use in case studies ranging from monitoring community tensions in Wales to detecting far- and extreme-right-wing content on UK social media. As our existing funding only allows for the provision of the Dashboard to the National Cyber Hate Crime Hub up to Dec 2020, we have been unable to take up these impact opportunities. We are seeking an ESRC Data Service grant that will allow the HateLab to continue its work with the National Cyber Hate Crime Hub, and to expand the provision of the Dashboard to 3 new policy and operational areas, greatly increasing the reach of our impact and creating an opportunity for a self-sustaining spin-out or social enterprise. The proposal is to tailor, install and evaluate the HateLab Dashboard in new settings that have a remit for monitoring online hate speech, including:

• Welsh Government (public sector) - monitoring of community tensions across Wales' local authority regions
• National Counter Terrorism Policing/Wales Extremism and Counter Terrorism Unit (security services sector) - monitoring far- and extreme-right-wing content across the UK
• Galop, the LGBT+ anti-violence charity (civil society sector) - monitoring anti-LGBT+ hate speech internationally

Evaluating the use of the HateLab Dashboard across various sectors and regions (local, national and international) will allow for a fuller understanding of its capabilities and benefits for policy, security and civil society. We have engaged in discussions with each organisation regarding the provision of the Dashboard for a pilot and evaluation. Our vision ties in with the ESRC Data Infrastructure and Delivery Plan 2019 in several ways: i) by advancing the frontiers of social science through applied world-leading interdisciplinary research in NEFD that develops skills and creates new data resources (in line with Archives of the Future); ii) by providing next-generation public services via the provision of cutting-edge AI to government, addressing the AI and Data Grand Challenge in the Industrial Strategy and building confidence in the use of NEFD for the public good; and iii) by contributing to productivity, prosperity and growth via a possible HateLab spin-out and related jobs growth. HateLab featured as a case study in the UKRI Infrastructure Landscape analysis and, through this Data Service grant, will continue to provide world-leading data infrastructure within and outside of the academy by providing insight into a pressing global challenge that, if left unchecked, may result in increasing polarisation, the continuing rise of populism across the globe, and acts of mass violence against minority groups.
Sectors Communities and Social Services/Policy; Digital/Communication/Information Technologies (including Software); Government, Democracy and Justice; Security and Diplomacy

URL http://HateLab.net
 
Description In England & Wales, police-recorded hate crimes are at their highest levels since records began. The migration of hate to the Internet requires the police to address the problem on two fronts. The ESRC-funded HateLab (the public name for the Cyberhate project) is the first to address the problem both offline and online, generating vital evidence on prevalence, impact and prevention. Lab technologies have been embedded within HMG's NPCC National Online Hate Crime Hub, allowing policymakers and police to prevent hate crime and speech. The HateLab has: i) Innovated by combining social science and computer science research techniques to examine online forms of data and develop an evidence base on online hate speech. Findings revealed that anti-Muslim hate speech spiked in the first 24 hours following terror attacks in 2013 and 2017, and rapidly de-escalated, indicating a 'half-life' of cyberhate. In the aftermath of these events, social media information flows from police were the second longest lasting within the first 36 hours, indicating that law enforcement online communications might be an effective channel to inform the public, solicit information, and counter rumour, speculation and hate speech; ii) Analysed survey and new forms of data to provide evidence showing hate crime and online hate speech spiked in the final weeks of the Vote Leave and Leave.EU campaigns, following the Brexit vote and at subsequent moments in the Brexit process. These results underpinned the BBC One Panorama documentary 'Hate on the Streets' in 2018; iii) Provided evidence that Brexit-related information on Twitter linked to the Russian Internet Research Agency was between 20% and 40% more likely to be retweeted, compared to UK media, government and public figure/celebrity accounts; iv) Generated evidence that the online abuse of MPs supposedly working against Brexit (so-called "mutineers") was organised by a clandestine right-wing group based in London, the results of which appeared in the ITV documentary 'Brexit Online Uncovered' in early 2019; v) Created an online dashboard that monitors the spread of online hate speech. Using an innovative blend of machine learning (a form of Artificial Intelligence) and social science statistical modelling techniques, the dashboard automatically classifies hateful content in real-time and at a scale hitherto unrealisable. The Online Hate Speech Dashboard was integrated into HMG's National Online Hate Crime Hub (2019). The Hub is the point of contact for all victims of online hate crime, and produces intelligence reports (using the dashboard) for police, senior civil servants and MPs. HateLab results on the spread of online hate speech around events allowed the Hub to better understand the dynamics of propagation, leading to improved response times, better support for victims and more effective allocation of resources. The Director of the Hub states our research has resulted in economic savings of ~£500,000 via the provision of the Dashboard, cloud and data services and an implementation evaluation. HateLab was invited to give evidence to the Home Affairs Select Committee's inquiry on Hate Crime and its Violent Consequences in November 2016, set up in response to the murder of Jo Cox MP and the rising levels of hate speech and crimes against the general public and MPs. HateLab evidence was cited in the committee's summary report, showing that online hate speech could be detected at scale and in real-time with AI developed at Cardiff.
As a result, the inquiry criticised social media companies for not using such technology to counter the spread of hate. HateLab and the Silver Circle law firm Mishcon de Reya established a partnership in 2018 to publish high-profile reports on the topic of online hate speech, containing legal advice for victims, solicitors and police. The first report was published in early 2019. A co-branded online hate speech 'tracker', available to the public, launched in mid-2019.
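As a purely illustrative aside on the 'half-life' finding mentioned above, the sketch below fits an exponential decay to simulated hourly post-event counts and derives a half-life as ln(2)/lambda. The data are simulated for demonstration only; the published analyses are more elaborate.

```python
# Illustrative half-life estimation from a post-event spike (simulated data).
# Fits count(t) ~ a * exp(-lam * t); half-life = ln(2) / lam.
import numpy as np
from scipy.optimize import curve_fit

hours = np.arange(36)                                             # first 36 hours after an event
counts = 500 * np.exp(-0.12 * hours) + np.random.poisson(5, 36)   # simulated counts, for illustration

def decay(t, a, lam):
    return a * np.exp(-lam * t)

(a_hat, lam_hat), _ = curve_fit(decay, hours, counts, p0=(counts[0], 0.1))
print(f"Estimated half-life: {np.log(2) / lam_hat:.1f} hours")
```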
First Year Of Impact 2017
Sector Digital/Communication/Information Technologies (including Software); Government, Democracy and Justice; Security and Diplomacy
Impact Types Societal; Economic; Policy & public services

 
Description Evidence cited in Home Affairs Select Committee's inquiry on Hate Crime and its Violent Consequences
Geographic Reach National 
Policy Influence Type Gave evidence to a government review
Impact HateLab project staff were invited to give evidence to the Home Affairs Select Committee's inquiry on Hate Crime and its Violent Consequences, set up in response to the murder of Jo Cox MP and the rising levels of hate speech and crimes against the general public and MPs. HateLab evidence was cited in the committee's summary report, showing that online hate speech could be detected at scale and in real-time with AI developed with ESRC funding. As a result, the inquiry criticised social media companies for not using such technology to counter the spread of hate.
URL https://publications.parliament.uk/pa/cm201617/cmselect/cmhaff/609/609.pdf
 
Description Operational impact within National Cyber Hate Crime Hub
Geographic Reach National 
Policy Influence Type Influenced training of practitioners or researchers
Impact Impact Letter RE: HateLab Impact on Policing Online Hate in England and Wales

Dear Professor Williams,

I write this letter to detail the significant impact your HateLab has had on tackling online hate in England and Wales. As a previous senior operational Police Officer and current National Police Chiefs' Council's National Policing Advisor for Hate Crime, I am responsible for coordinating policy and operational activity across all Government departments and criminal justice agencies in their response to hate crime. I also act as the UK Government's 'National Point of Contact' for hate crime to the Organisation for Security and Cooperation in Europe and manage the National Cyber Hate Crime Hub. The Hub was established by the Home Secretary in 2017 to tackle online forms of hate crime, which increased dramatically in the aftermath of the 2016 referendum vote on the future of the UK in the EU. It acts as the point of contact for all victims of online hate crime, and produces intelligence reports for police, senior civil servants and MPs. I sit on the Police and Cross-Government Hate Crime Programme, consisting of all relevant ministers, which coordinates responses to and oversees activity around Government hate crime action plans.

While I have seen the benefits from the maintenance of strong relationships between the police and government departments in tackling hate crime, I recognise the need to also engage with academia to tap into the latest evidence and scientific advances in tackling hate. The most recent hate crime action plan, 'Action Against Hate: The UK Government's plan for tackling hate crime' (2016), highlights the need for academics and policymakers to come together to tackle the growing problem of hate crime post Brexit. I first became aware of your novel research on online hate at Cardiff University's Hate Crime Symposium in 2012. Since that time we have established a ground-breaking collaboration, involving, at the national level, the police, government departments and Cardiff University, to tackle the intractable and growing problem of online hate crime. I have had the pleasure of being a Co-Investigator on several of your ESRC-funded research projects, and have experienced the benefit of working closely with your team in co-creating new tools, based on cutting-edge Artificial Intelligence, that generate novel forms of evidence on the nature of the online hate problem.

These new technologies, data and insights have transformed the way my team of police officers and civilian staff at the National Cyber Hate Crime Hub monitor and tackle online hate crime. Specifically, the Hub has benefitted from your research on online hate, which uniquely blends computer science with social science methods, and from your provision of the Dashboard for monitoring online hate speech in real-time. I have used your research findings on the enablers and inhibitors of online hate speech, the network dynamics of online hate and the effect of counter-narratives on stemming the spread of hate at national and international government and policing events. The analysis you conducted for me on the production and spread of online anti-Muslim hate speech around 'Punish a Muslim Day' transformed my understanding of the problem, and has fed into operational decision making during live police operations. The provision of the HateLab Dashboard, co-created with the Hub, has fundamentally changed the way we monitor the spread of hate speech during national events.

Prior to the Dashboard, Hub staff relied on the Twitter platform interface to gather evidence on the ebb and flow of hate speech around events, such as the referendum vote on the future of the UK in the EU. This proved to be an inadequate method of generating the required insights to track and respond to the problem. During live operations, we were quickly inundated with irrelevant information and failed to capture hate speech in a systematic and reliable way. Through our close collaboration with HateLab, we have co-created technical solutions to overcome these problems. The Dashboard employs sophisticated Machine Learning algorithms to automatically classify hate speech across recognised characteristics at scale and speed, and displays results via a range of visualisation tools (frequency chart, top hashtags, topic clusters, geo-location, networks, etc.). This ensures the Hub can monitor the production and spread of hate speech around events in a robust and reliable way. To date, we have used the Dashboard to monitor hate speech around key moments of the Brexit process, including the abuse of MPs, and around the terror attack on London Bridge in November 2019. During these events the Dashboard has allowed the staff in the Hub to better understand the dynamics of hate speech propagation, leading to improved response times, better support for victims and more effective allocation of resources. I estimate the Dashboard has saved the police/national government ~£500,000 that would have been spent on a similar solution had our collaboration not materialised.

I understand the provision of the HateLab Dashboard is being extended to the public sector (Welsh Government), the security sector (UK Counter Terrorism Network) and the civil society sector (Galop, the LGBT+ anti-violence charity) via a significant UKRI Data Service grant. I am certain that each organisation will benefit significantly from their collaboration with the HateLab, and that in the short to long term, your research and technology will have a positive impact not only on policy and practice, but also on the lives of victims and their families.

Yours sincerely,
Paul Giannasi, OBE
National Policing Advisor for Hate Crime
National Police Chiefs' Council
 
Description Centre for Cyberhate Research & Policy: Real-Time Scalable Methods & Infrastructure for Modelling the Spread of Cyberhate on Social Media
Amount £383,983 (GBP)
Funding ID ES/P010695/1 
Organisation Economic and Social Research Council 
Sector Public
Country United Kingdom
Start 04/2017 
End 12/2019
 
Title Online Hate Speech Dashboard 
Description The Dashboard
This tool allows users to access all open social media feeds (including the Twitter firehose) using a keyword search to identify variation in hate-orientated text contained within posts. The Dashboard currently allows for the classification of posts containing text that is antagonistic or hateful based on race (anti-black), religion (anti-Muslim), sexual orientation (anti-gay male and female), disability (anti-physical disability), and Jewish identity. Once posts are classified they can be visualised via a suite of tools:
a. Real-time and historic modes, allowing the end-user to monitor hate speech as it unfolds, and to search back over periods of user data collection for post-hoc analysis
b. An interactive hate line chart displaying the frequency of tweets, with customisable scale (raw, percentage, log, etc.)
c. An interactive tool for network analysis of hate tweets (where nodes can be selected for further inspection and the production of sub-networks, such as Twitter @mentions, retweets, followers, etc.)
d. Red/Amber/Green real-time alert system for anomalous spikes in online hate speech above a baseline (defined by the user or inferred from the average number of hate posts in a given time-frame)
e. Tool to identify top N hate hashtags
f. Tool to identify top N hate influencers (e.g. top N accounts responsible for N% of hate speech)
g. Tool to identify when a top hate user's account is deleted/suspended
h. Tool to identify top victim targets (e.g. top N accounts targeted with hate using @mentions)
i. Tool to identify bot accounts, with functionality to remove all suspected bots from the analysis and visualisation
j. Tool to identify links between social media platforms in posts (e.g. frequency of links to far-right open Facebook pages in tweets, far-right posts on Reddit, etc.)
k. Topic clustering tool, displaying topics detected in posted text and the proportion of topics over the whole corpus
l. Tool to display simple wordclouds of hate tweets (in addition to topic detection)
m. Export tool (sections of the Dashboard can be exported to PDF, image file, or a bespoke format for the end-user)
n. Demographic estimation of users at an aggregate level (e.g. gender, age)
o. Aggregate (e.g. town, city, PFA) geo-location inference plotted on a scalable map (using Lat/Long, user-specified location, location name specified in bio, etc. - the user can specify which are displayed, with all being selectable at once)
Individual visualisation tools can be resized and 'toggled' in and out of the view, allowing the user to select the preferred Dashboard set-up for the monitoring task. The suite of tools can also be split over multiple screens to provide the most complete Dashboard set-up.
Type Of Material Improvements to research infrastructure 
Year Produced 2019 
Provided To Others? Yes  
Impact The Dashboard will be used by HMG's National Online Hate Crime Hub to collect posts from all open social media feeds at set times around predicted and scheduled landmark events, such as the UK's planned exit from the European Union on 29th March 2019.
The Purpose
The purpose of the Dashboard, along with the results and products it produces, is to assist in the identification of 'anomalous' increases in online hate speech in time and space (where geographical information is available) across multiple open social media sources. Results from the Dashboard will be triangulated with other data and intelligence available to the Hub to determine if any increases in online hate speech may be indicative of a rise in community tensions within offline communities or groups. Where offline community tensions can be verified by multiple data sources (including those beyond the Dashboard), the relevant local authorities will be notified.
The Data Collection
The Dashboard does not permit the identification of individual offending or offenders. Information produced by the Dashboard can only be used for analytical purposes. The outputs of this analysis will be used to inform policy, strategy and decision making with the overall aim of promoting community cohesion. The Dashboard cannot be used to collect evidence for the purpose of criminal proceedings, and its use will not be disclosed or used as evidence.
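To illustrate the kind of baseline alerting the Red/Amber/Green feature listed in the Description above refers to, the sketch below flags hours whose hate-post count rises well above a rolling baseline. The window length and thresholds are illustrative assumptions, not the Dashboard's actual configuration.

```python
# Rolling-baseline spike alert sketch (illustrative thresholds, not the Dashboard's).
import pandas as pd

def rag_status(hourly_counts, window=24, amber=2.0, red=3.0):
    """hourly_counts: pandas Series of hate-post counts indexed by hour."""
    baseline = hourly_counts.rolling(window).mean().shift(1)  # baseline excludes the current hour
    ratio = hourly_counts / baseline
    return ratio.apply(lambda r: "RED" if r >= red else ("AMBER" if r >= amber else "GREEN"))

# Example: rag_status(pd.Series(counts, index=hourly_index)).tail()
```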
 
Title Online Hate Speech Dashboard 
Description The Dashboard
This tool allows users to access all open social media feeds (including the Twitter firehose) using a keyword search to identify variation in hate-orientated text contained within posts. The Dashboard currently allows for the classification of posts containing text that is antagonistic or hateful based on race (anti-black), religion (anti-Muslim), sexual orientation (anti-gay male and female), disability (anti-physical disability), and Jewish identity. Once posts are classified they can be visualised via a suite of tools:
a. Real-time and historic modes, allowing the end-user to monitor hate speech as it unfolds, and to search back over periods of user data collection for post-hoc analysis
b. An interactive hate line chart displaying the frequency of tweets, with customisable scale (raw, percentage, log, etc.)
c. An interactive tool for network analysis of hate tweets (where nodes can be selected for further inspection and the production of sub-networks, such as Twitter @mentions, retweets, followers, etc.)
d. Red/Amber/Green real-time alert system for anomalous spikes in online hate speech above a baseline (defined by the user or inferred from the average number of hate posts in a given time-frame)
e. Tool to identify top N hate hashtags
f. Tool to identify top N hate influencers (e.g. top N accounts responsible for N% of hate speech)
g. Tool to identify when a top hate user's account is deleted/suspended
h. Tool to identify top victim targets (e.g. top N accounts targeted with hate using @mentions)
i. Tool to identify bot accounts, with functionality to remove all suspected bots from the analysis and visualisation
j. Tool to identify links between social media platforms in posts (e.g. frequency of links to far-right open Facebook pages in tweets, far-right posts on Reddit, etc.)
k. Topic clustering tool, displaying topics detected in posted text and the proportion of topics over the whole corpus
l. Tool to display simple wordclouds of hate tweets (in addition to topic detection)
m. Export tool (sections of the Dashboard can be exported to PDF, image file, or a bespoke format for the end-user)
n. Demographic estimation of users at an aggregate level (e.g. gender, age)
o. Aggregate (e.g. town, city, PFA) geo-location inference plotted on a scalable map (using Lat/Long, user-specified location, location name specified in bio, etc. - the user can specify which are displayed, with all being selectable at once)
Individual visualisation tools can be resized and 'toggled' in and out of the view, allowing the user to select the preferred Dashboard set-up for the monitoring task. The suite of tools can also be split over multiple screens to provide the most complete Dashboard set-up.
Type Of Technology Webtool/Application 
Year Produced 2019 
Impact The Dashboard will be used by HMG's National Online Hate Crime Hub to collect posts from all open social media feeds at set times around predicted and scheduled landmark events, such as the UK's planned exit from the European Union on 29th March 2019.
The Purpose
The purpose of the Dashboard, along with the results and products it produces, is to assist in the identification of 'anomalous' increases in online hate speech in time and space (where geographical information is available) across multiple open social media sources. Results from the Dashboard will be triangulated with other data and intelligence available to the Hub to determine if any increases in online hate speech may be indicative of a rise in community tensions within offline communities or groups. Where offline community tensions can be verified by multiple data sources (including those beyond the Dashboard), the relevant local authorities will be notified.
The Data Collection
The Dashboard does not permit the identification of individual offending or offenders. Information produced by the Dashboard can only be used for analytical purposes. The outputs of this analysis will be used to inform policy, strategy and decision making with the overall aim of promoting community cohesion. The Dashboard cannot be used to collect evidence for the purpose of criminal proceedings, and its use will not be disclosed or used as evidence.
 
Description BBC One Panorama 'Hate on the Streets' 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Media (as a channel to the public)
Results and Impact We participated in BBC One's Panorama 'Hate on the Streets'. The project supplied key evidence on the trends in offline hate crimes following the Brexit vote. The documentary was watched by over 3.4 million.
Year(s) Of Engagement Activity 2018
URL https://www.youtube.com/watch?v=w-jOhDbrQjQ&feature=youtu.be
 
Description ITV Exposure 'Brexit Online Uncovered' 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Media (as a channel to the public)
Results and Impact We provided key evidence to ITV's Exposure documentary 'Brexit Online Uncovered', which showed the links between Twitter users who were abusing MPs online, and how press headlines were statistically associated with increases in general online hate speech related to Brexit. The documentary had its premiere in the Houses of Parliament, hosted by the Rt Hon Antoinette Sandbach, MP for Eddisbury. It was viewed by over 4.1 million.
Year(s) Of Engagement Activity 2019
URL https://www.youtube.com/watch?v=BcMYxP9zfVU&feature=youtu.be
 
Description ITV NEWS Special Report on the rise of online hate speech 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact We provided key evidence to an ITV NEWS special report on the rise of online hate speech. It was broadcast nationally on the lunchtime and evening ITV NEWS shows in early March 2020. Estimated audience over both shows ~6 million.
Year(s) Of Engagement Activity 2020
URL https://www.youtube.com/watch?v=sBHclogub6M&feature=youtu.be
 
Description Paper Presented at The Web Conference 2019, San Francisco, CA, USA 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact Liu, H. et al. 2019. Fuzzy multi-task learning for hate speech type identification. Presented at: The Web Conference 2019, San Francisco, CA, USA, 13-17 May 2019. In: Proceedings of the 2019 World Wide Web Conference. ACM. (doi: 10.1145/3308558.3313546)
Year(s) Of Engagement Activity 2019
 
Description Paper presented at 1st International Conference on Cyber Deviance Detection (CyberDD) in conjunction with 10th ACM International Conference on Web Search and Data Mining (WSDM 2017), Cambridge, UK 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Burnap, P. and Williams, M. L. 2017. Classifying and modeling cyber hate speech: research and opportunities for practical intervention. Presented at: 1st International Conference on Cyber Deviance Detection (CyberDD) in conjunction with 10th ACM International Conference on Web Search and Data Mining (WSDM 2017), Cambridge, UK, 10 Feb 2017.
Year(s) Of Engagement Activity 2017
 
Description Paper presented at Cambridge Institute of Criminology Seminar Series, University of Cambridge 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact Williams, M. L. and Burnap, P. 2017. Social data science & criminology: machine classification and modelling of cyberhate in online social networks. Presented at: Cambridge Institute of Criminology Seminar Series, University of Cambridge, UK, 9 February 2017.
Year(s) Of Engagement Activity 2017
 
Description Paper presented at Data Science and Government Conference, Oxford, UK 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Burnap, P. and Williams, M. L. 2016. Computational human and cyber security analytics for government and policy. Presented at: Data Science and Government Conference, Oxford, UK, 22 June 2016.
Year(s) Of Engagement Activity 2016
 
Description Paper presented at Home Office Crime and Policing Analysis Unit Seminar Series, Westminster, London, UK 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Policymakers/politicians
Results and Impact Williams, M. L. and Burnap, P. 2017. Detecting crime events using social media. Presented at: Home Office Crime and Policing Analysis Unit Seminar Series, Westminster, London, UK, July, 2017.
Year(s) Of Engagement Activity 2017
 
Description Paper presented at International Conference on Machine Learning and Cybernetics, Chengdu, China 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact Alorainy, W. et al. 2018. Suspended accounts: A source of Tweets with disgust and anger emotions for augmenting hate speech data sample. Presented at: International Conference on Machine Learning and Cybernetics, Chengdu, China, 15-18 July 2018.
Year(s) Of Engagement Activity 2018
 
Description Paper presented at Internet Leadership Academy, Oxford Internet Institute, University of Oxford 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Williams, M. and Burnap, P. 2017. Online extremism and hate speech: definition, measurement & regulation. Presented at: Internet Leadership Academy, Oxford Internet Institute, University of Oxford, UK, 26 September 2017.
Year(s) Of Engagement Activity 2017
 
Description Paper presented at Jensen Lecture Series, Duke University, NC, US 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact Williams, M. L. 2016. Crime sensing with big data: the affordances and limitations of using open source communications to estimate crime patterns. Presented at: Jensen Lecture Series, Duke University, NC, US, 2016.
Year(s) Of Engagement Activity 2016
 
Description Paper presented at UK Government Data Science Community Interest Workshop, ONS Data Science Campus, Newport, Wales, UK 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact Williams, M. L. and Burnap, P. 2017. Data science solutions for detecting and monitoring Brexit related online hate speech. Presented at: UK Government Data Science Community Interest Workshop, ONS Data Science Campus, Newport, Wales, UK, 4 September 2017.
Year(s) Of Engagement Activity 2017
 
Description Paper presented at: SERENE-RISC Workshop, Université de Montréal, Montreal, QC, Canada 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Williams, M. L. 2017. Big Data and criminology: Research from the UK. Presented at: SERENE-RISC Workshop, Université de Montréal, Montreal, QC, Canada, 26 April 2017.
Year(s) Of Engagement Activity 2017