CIMPLE: Countering Creative Information Manipulation with Explainable AI

Lead Research Organisation: The Open University
Department Name: Faculty of Sci, Tech, Eng & Maths (STEM)

Abstract

Explainability is of significant importance in the move towards trusted, responsible and ethical AI, yet it remains in its infancy. Most relevant efforts focus on increasing the transparency of AI model design and training data, and on statistics-based interpretations of the resulting decisions. The understandability of such explanations, and their suitability to particular users and application domains, have so far received very little attention. Hence there is a need for a drastic, interdisciplinary evolution of XAI methods, to design more understandable, reconfigurable and personalisable explanations.

Knowledge Graphs offer significant potential to better structure the core of AI models, and to use semantic representations when producing explanations for their decisions. By capturing the context and application domain at a granular level, such graphs offer a much-needed semantic layer that is currently missing from typical brute-force machine learning approaches.

Human factors are key determinants of the success of relevant AI models. In some contexts, such as misinformation detection, existing technical XAI methods do not suffice, as the complexity of the domain and the variety of relevant social and psychological factors can heavily influence users' trust in the derived explanations. Past research has shown that presenting users with true/false credibility decisions is inadequate and ineffective, particularly when a black-box algorithm is used. To this end, CIMPLE aims to experiment with innovative social and knowledge-driven AI explanations, and to use computational creativity techniques to generate powerful, engaging, and quickly and easily understandable explanations of complex AI decisions and behaviour. These explanations will be tested in the domain of detection and tracking of manipulated information, taking into account social, psychological and technical explainability needs and requirements.
 
Description We produced evidence that the publication of fact-checks influences the spread of the false claims they debunk. We also showed, through a real-world experiment, that correcting people on social media can have varying impacts depending on the popularity of the individuals involved on those platforms, as well as on the writing style of the corrections.
Exploitation Route Influence future research in the field. Provide evidence for the value of fact-checks. Show social media platforms and others how best to correct people online to maximise the chance of generating a positive reaction.
Sectors Digital/Communication/Information Technologies (including Software)

 
Description The work from this grant led to multiple media engagements that reported on various findings from the project, especially regarding the processing of fact-checking data and its impact on the spread of misinformation online. The dataset produced by the project, in collaboration with international partners, is freely available and constitutes one of the largest and most up-to-date datasets of misinformation claims and their corresponding fact-checks, covering multiple languages and countries.
First Year Of Impact 2024
Sector Digital/Communication/Information Technologies (including Software)
Impact Types Societal

 
Description Interviewed by the UK Government's Open Innovation Team
Geographic Reach National 
Policy Influence Type Contribution to a national consultation/review
 
Title CimpleKG 
Description Knowledge base of misinformation and corresponding fact-checks 
Type Of Material Database/Collection of data 
Year Produced 2024 
Provided To Others? Yes  
Impact Used in several applications 
URL https://github.com/CIMPLE-project/knowledge-base
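CimpleKG links misinformation claims to their corresponding fact-checks. As an illustration only — the records, names and helper function below are invented for this sketch, not actual CimpleKG data or its API — entries of this kind can be modelled with schema.org's ClaimReview vocabulary and filtered by language:

```python
# Invented example records in the spirit of a misinformation knowledge base,
# expressed as schema.org ClaimReview-style JSON-LD dictionaries.
claim_reviews = [
    {
        "@type": "ClaimReview",
        "claimReviewed": "Drinking hot water cures COVID-19",
        "reviewRating": {"@type": "Rating", "alternateName": "False"},
        "author": {"@type": "Organization", "name": "ExampleFact"},
        "inLanguage": "en",
    },
    {
        "@type": "ClaimReview",
        "claimReviewed": "Le vaccin modifie l'ADN",
        "reviewRating": {"@type": "Rating", "alternateName": "Faux"},
        "author": {"@type": "Organization", "name": "ExempleVerif"},
        "inLanguage": "fr",
    },
]

def reviews_in_language(reviews, lang):
    """Return all fact-check records written in the given language."""
    return [r for r in reviews if r.get("inLanguage") == lang]

print(len(reviews_in_language(claim_reviews, "fr")))  # prints 1
```

Using a shared vocabulary such as ClaimReview is what allows fact-checks from different organisations, languages and countries to be aggregated into one queryable knowledge base.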
 
Title The Fact-Checking Observatory Misinformation and Fact-checks Dataset 
Description The Fact-checking Observatory (FCO) is a website that automatically generates human-readable weekly reports about the spread of misinformation and fact-checks on Twitter/X. It was created to help public organisations, journalists and authorities understand how useful fact-checking articles are for fighting COVID-related misinformation. The COVID-19-FCO dataset contains the COVID-related data used by the FCO for generating its weekly COVID-19 reports. It tracks the co-spread of misinformation and fact-check URLs over 3 years on Twitter/X, and contains metadata about the fact-checkers, the topics, and the demographics of the accounts that spread the tracked URLs. Contrary to existing Twitter/X datasets, our dataset only references known fact-checked misinforming content, and includes the corresponding spread of the fact-checked content as well as additional metadata. This specificity allows for advanced analyses of the impact of fact-checking on the spread of misinformation on social media, and helps identify the types of misinformation that can be effectively countered with fact-checks.
Type Of Material Database/Collection of data 
Year Produced 2024 
Provided To Others? Yes  
Impact Not yet observed, as the dataset was only released this month
URL https://github.com/evhart/fco-covid19-data
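As a minimal sketch of the kind of co-spread analysis the dataset enables — the events and function below are hypothetical illustrations, not the dataset's actual schema — misinformation and fact-check shares can be bucketed into ISO weeks and compared:

```python
from datetime import date

# Hypothetical share events: (date, kind), where kind is "misinfo" or
# "factcheck". Real COVID-19-FCO records carry richer metadata
# (fact-checker, topic, account demographics).
shares = [
    (date(2021, 3, 1), "misinfo"),
    (date(2021, 3, 2), "misinfo"),
    (date(2021, 3, 3), "factcheck"),
    (date(2021, 3, 9), "factcheck"),
]

def weekly_counts(events):
    """Bucket share events by ISO week, counting misinformation vs fact-checks."""
    buckets = {}
    for day, kind in events:
        week = day.isocalendar()[:2]  # (ISO year, ISO week number)
        buckets.setdefault(week, {"misinfo": 0, "factcheck": 0})
        buckets[week][kind] += 1
    return buckets

counts = weekly_counts(shares)
print(counts[(2021, 9)])  # prints {'misinfo': 2, 'factcheck': 1}
```

Comparing the two weekly series is one simple way to study whether fact-check circulation tracks, lags, or dampens the spread of the misinformation it debunks.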
 
Description EURECOM 
Organisation EURECOM Institute
Country France 
Sector Academic/University 
PI Contribution Collaboration on the creation of a knowledge base of misinformation, which has grown into one of the largest and most detailed of its kind in existence today.
Collaborator Contribution EURECOM designed the knowledge base and populated it with our data. They also produced a web-based search page for this data.
Impact https://github.com/CIMPLE-project/knowledge-base. A paper is under development, to be submitted to the International Semantic Web Conference 2024.
Start Year 2023
 
Title MisinfoMe 
Description Misinformation is a persistent problem that threatens societies at multiple levels. In spite of the intensified attention given to this problem by scientists, governments, and the media, the lack of awareness of how someone has interacted with, or is being exposed to, misinformation remains a challenge. MisinfoMe is a web application that collects ClaimReview annotations and source-level validations from numerous sources, and provides an assessment of a given Twitter account with regard to how much it interacts with reliable or unreliable information. The first version of MisinfoMe was released in 2019, and it was updated and extended in the CIMPLE project.
Type Of Technology Webtool/Application 
Year Produced 2021 
Impact Data is now used by https://iffy.news/index/ as part of their reliability index 
URL https://misinfo.me/frontend-v2/home
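A minimal sketch of the source-level assessment idea behind a tool of this kind — the ratings, domains and function below are invented for illustration and are not MisinfoMe's actual data or API:

```python
# Hypothetical source-level reliability map; in practice such ratings are
# aggregated from ClaimReview annotations and source validations.
SOURCE_RATINGS = {
    "reliable-news.example": "reliable",
    "junk-news.example": "unreliable",
}

def unreliable_fraction(shared_domains, ratings):
    """Fraction of an account's shared links pointing to unreliable sources;
    domains with no known rating are ignored."""
    rated = [ratings[d] for d in shared_domains if d in ratings]
    if not rated:
        return 0.0
    return rated.count("unreliable") / len(rated)

# An account that shared three rated links (two unreliable) and one unrated one.
shared = ["junk-news.example", "reliable-news.example",
          "junk-news.example", "unknown.example"]
score = unreliable_fraction(shared, SOURCE_RATINGS)  # 2/3
```

Aggregating over an account's sharing history in this way yields an account-level signal, rather than a true/false verdict on any single post.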
 
Description BBC Ideas film 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Media (as a channel to the public)
Results and Impact Prof Alani was featured in the BBC Ideas film titled "Can you spot digital lies?". The film had been viewed 94,000 times at the time of writing. In the film, Prof Alani talked about the risks of misinformation, how to spot it, and the current and future challenges in detecting it.
Year(s) Of Engagement Activity 2021
URL https://www.bbc.co.uk/ideas/videos/can-you-spot-digital-lies/p09hbzz6
 
Description Dagstuhl Seminar on Challenges and Opportunities of Democracy in the Digital Society 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Abstract of my talk at this seminar:
As long as there has been information, there has been misinformation. During the last few years, a lot of attention has been paid to developing tools that can detect which information is reliable and which is likely to be fake or misinforming. However, we are still learning how, when, and where such advanced technologies or the work of fact-checkers around the world can help in stopping misinformation from spreading. My goal in this talk is to demonstrate that we also hold false or unreliable beliefs and argue that we need technologies that can assess the information we and others share over time. Additionally, I will discuss the benefits, challenges, and risks of using automated methods for correcting people when they share misinformation.
Year(s) Of Engagement Activity 2022
URL https://www.dagstuhl.de/en/seminars/seminar-calendar/seminar-details/22361
 
Description European Language Technology conference 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact The event announced the release of the European Language Grid and raised awareness of the platform and its capabilities. We made contact with one of the technical teams working on the platform and investigated its potential for supporting our processing of misinformation and fact-checks in various languages.
Year(s) Of Engagement Activity 2022
URL https://lr-coordination.eu/node/468
 
Description International Focus Group with IFCN signatory Fact-checkers 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Other audiences
Results and Impact In this activity, we investigated the challenging aspects of fact-checking and how fact-checkers and journalists explain their fact-checking process to their stakeholders. In particular, we were interested in how fact-checkers and journalists would use automated approaches to identifying or assessing misinformation. Two challenges we had already identified were:

1. Dealing with "misleading" misinformation vs. fabricated content
2. Dealing with disinformation, and how we understand the difference between disinformation and misinformation

We decided that a heterogeneous focus group with different types of fact-checking organizations and journalists would provide the rich, qualitative data needed to explore this complex topic. Our focus group had the following objectives:

- Explore with fact-checkers and journalists how misleading and disinformation-related news items are best fact-checked and explained
- Discuss unique contextual considerations for engaging with the public about misleading news and disinformation
- Identify common challenges and pitfalls, as well as best practices
- Use practitioner experiences to distill requirements for technology to support challenging explanations

We contacted all of the signatories to the International Fact-Checking Network (97 organizations) to tell them about the CIMPLE project, direct them to our website, and ask whether they would like to contribute to research on explaining fact-checks to users. We received 10 responses, and 7 people took part in the focus group, 5 of whom represented fact-checking organizations working in different regions of the world. We met on Microsoft Teams for approximately 2 hours.

The participants were able to exchange their different experiences of fact-checking, which varied with the level of media freedom and the availability of verifiable information in their regions. These factors had not yet played a role in our computational approaches to understanding the spread of misinformation and fact-checks. This outcome suggests a pathway for future research in the project.

Another worrying insight concerned how polarisation and political conflict over vaccines bleed over into other vaccine-related issues, leading, for example, to the return of polio as a result of anti-vaccination sentiment. Fact-checkers cited distrust of government as an important factor in creating vulnerability to misinformation about health and security. Connecting topics of misinformation to some of the mediating features of public trust in government represents another contextual feature (along with freedom of the press) that should be utilized in understanding the proliferation of misinformation or corrective information.

In addition, fact-checkers exchanged views on how religion and religious figures play a role in the information environment, in comparison to scientific thought and scientists. The fact that knowledge evolves and changes over time is sometimes difficult for audiences to grasp. Science is also sometimes complicated and involves processes that lay people find difficult to summarize. Misinformation around numbers and statistics is a general challenge that fact-checkers and journalists experience. Other general challenges included how to present many "nested" facts, where the facts together tell a coherent story, and how to deal with the perceived short attention span of those engaging with their content. These challenges help us to prioritize the ways in which a technological approach could assist fact-checkers.

The variable experiences we identified around the media environment and the public's relationship to government prompted us to conduct further studies with fact-checkers outside of Western Europe and North America. We are currently recruiting for those follow-up activities. These focus groups will form the basis of an analysis that we will present to a wider audience at relevant conferences, for example the Truth and Trust Online (TTO) conference in the UK. TTO has become an important conference for those involved in academia, the third sector, industry and policy-making.
Year(s) Of Engagement Activity 2022
 
Description Keynote at ACM Hypertext Conference, 2023 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Other audiences
Results and Impact The keynote was delivered to over 100 conference participants and led to a discussion about the impact of long- vs short-term exposure to misinformation, and about the value of real-world experiments for assessing misinformation interventions. Slides of the talk are here: https://www.slideshare.net/halani/misinformation-vs-factchecks-the-ongoing-battle
Year(s) Of Engagement Activity 2023
URL https://ht.acm.org/ht2023/programme/keynotes/
 
Description Media interview 
Form Of Engagement Activity A press release, press conference or response to a media enquiry/interview
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Media (as a channel to the public)
Results and Impact Interviewed by a journalist for joe.co.uk who was writing about misinformation during the Russia-Ukraine war. I described our research into misinformation in general, and our findings with regard to misinformation related to this conflict.
Year(s) Of Engagement Activity 2022
URL https://www.joe.co.uk/news/i-messaged-100-pro-russia-commenters-to-see-if-they-were-bots-heres-what-...
 
Description Panel on ADDRESSING MISINFORMATION AND DISINFORMATION IN CRISIS MANAGEMENT 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Harith Alani participated in a panel discussion on misinformation and disinformation in crisis management, concerning the utilisation of social media and crowdsourcing during disaster situations. The panel was organised by the LINKS project, funded by the EU to strengthen technologies and society for European disaster resilience; LINKS set out to understand and assess the effects of social media and crowdsourcing (SMCS) on European disaster resilience. The project held its final event on 16-17 October 2023, which hosted the panel.

The event was hosted by the Save the Children organisation, which launched its "Feel Safe" initiative during the event, aiming to bring a "Child-Centred approach to disaster and risk reduction".
Year(s) Of Engagement Activity 2023
URL https://links-project.eu/final-conference-media/
 
Description magazine article written by a journalist following an interview 
Form Of Engagement Activity A magazine, newsletter or online publication
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Policymakers/politicians
Results and Impact The title of this piece is "AI & Misinformation", published by Government Business, issue 31.1, Jan 2024, pp. 98-100. https://issuu.com/psi-media/docs/government_business_31.1
The purpose of the article is to shed more light on the role of AI in countering misinformation, covering its positive as well as negative roles. The article was produced during the AI Safety Summit in the UK, Nov 2023.
Year(s) Of Engagement Activity 2024
URL https://issuu.com/psi-media/docs/government_business_31.1
 
Description media coverage 
Form Of Engagement Activity A press release, press conference or response to a media enquiry/interview
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact The Telegraph published an article titled "Musk scraps Twitter's Covid misinformation policy", covering the removal of safeguards that Twitter had in place to curb the spread of misinformation. The article included a quote from Prof Alani on the likelihood that such actions would encourage some accounts to increase their sharing of misinformation.
Year(s) Of Engagement Activity 2022
URL https://www.telegraph.co.uk/business/2022/11/29/musk-scraps-twitters-covid-misinformation-policy/