CIMPLE: Countering Creative Information Manipulation with Explainable AI

Lead Research Organisation: The Open University
Department Name: Faculty of Sci, Tech, Eng & Maths (STEM)

Abstract

Explainability is of significant importance in the move towards trusted, responsible and ethical AI, yet remains in its infancy. Most relevant efforts focus on increasing the transparency of AI model design and training data, and on statistics-based interpretations of the resulting decisions. The understandability of such explanations, and their suitability to particular users and application domains, has so far received very little attention. Hence there is a need for a drastic, interdisciplinary evolution of XAI methods, to design more understandable, reconfigurable and personalisable explanations.

Knowledge Graphs offer significant potential to better structure the core of AI models, and to use semantic representations when producing explanations for their decisions. By capturing the context and application domain in a granular manner, such graphs offer a much-needed semantic layer that is currently missing from typical brute-force machine learning approaches.

Human factors are key determinants of the success of relevant AI models. In some contexts, such as misinformation detection, existing technical XAI methods do not suffice, as the complexity of the domain and the variety of relevant social and psychological factors can heavily influence users' trust in the derived explanations. Past research has shown that presenting users with true/false credibility decisions is inadequate and ineffective, particularly when a black-box algorithm is used. To this end, CIMPLE aims to experiment with innovative social and knowledge-driven AI explanations, and to use computational creativity techniques to generate powerful, engaging explanations of rather complex AI decisions and behaviour that can be understood quickly and easily. These explanations will be tested in the domain of detection and tracking of manipulated information, taking into account social, psychological and technical explainability needs and requirements.
 
Title MisinfoMe 
Description Misinformation is a persistent problem that threatens societies at multiple levels. In spite of the intensified attention given to this problem by scientists, governments, and the media, the lack of awareness of how someone has interacted with, or is being exposed to, misinformation remains a challenge. MisinfoMe is a web application that collects ClaimReview annotations and source-level validations from numerous sources, and provides an assessment of a given Twitter account with regard to how much it interacts with reliable or unreliable information (see the illustrative sketch after this entry). The first version of MisinfoMe was released in 2019, and was updated and extended in the CIMPLE project.
Type Of Technology Webtool/Application 
Year Produced 2021 
Impact MisinfoMe data is now used by https://iffy.news/index/ as part of their reliability index
URL https://misinfo.me/frontend-v2/home
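
To make the description above concrete, the following is a minimal, hypothetical Python sketch of the kind of aggregation MisinfoMe performs: matching schema.org ClaimReview annotations against the URLs an account has shared and averaging the fact-checkers' verdicts. The field names follow the public schema.org/ClaimReview vocabulary; the label-to-score mapping and the scoring rule are illustrative assumptions, not MisinfoMe's actual algorithm.

    # Hypothetical illustration only: the scoring rule below is an assumed
    # stand-in for MisinfoMe's actual aggregation logic.
    from dataclasses import dataclass

    @dataclass
    class ClaimReview:
        claim_reviewed: str   # schema.org claimReviewed: the claim being checked
        item_url: str         # URL of the reviewed item (e.g. a shared article)
        rating_label: str     # schema.org reviewRating.alternateName, e.g. "False"

    # Assumed mapping from common fact-checker verdict labels to scores in [-1, 1].
    LABEL_SCORES = {
        "true": 1.0,
        "mostly true": 0.5,
        "mixture": 0.0,
        "mostly false": -0.5,
        "false": -1.0,
    }

    def account_credibility(shared_urls, reviews):
        """Average the verdicts of fact-checks matching URLs the account shared.

        Returns None when none of the shared URLs has been fact-checked."""
        scores = [LABEL_SCORES.get(r.rating_label.lower(), 0.0)
                  for r in reviews if r.item_url in shared_urls]
        return sum(scores) / len(scores) if scores else None

    # Example: an account that shared one fact-checked falsehood scores -1.0.
    reviews = [ClaimReview("Claim X", "https://example.org/post1", "False")]
    print(account_credibility({"https://example.org/post1"}, reviews))

In practice the assessment would also weight source-level validations and recency, but the sketch shows the basic idea of linking fact-check annotations to an account's sharing history.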
 
Description BBC Ideas film 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Media (as a channel to the public)
Results and Impact Prof Alani was featured in the BBC Ideas film titled "Can you spot digital lies?", which had been viewed 94,000 times at the time of writing. In the film, Prof Alani talked about the risks of misinformation, how to spot it, and the current and future challenges in detecting it.
Year(s) Of Engagement Activity 2021
URL https://www.bbc.co.uk/ideas/videos/can-you-spot-digital-lies/p09hbzz6
 
Description Dagstuhl Seminar on Challenges and Opportunities of Democracy in the Digital Society 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Abstract of my talk at this seminar:
As long as there has been information, there has been misinformation. During the last few years, a lot of attention has been paid to developing tools that can detect which information is reliable and which is likely to be fake or misinforming. However, we are still learning how, when, and where such advanced technologies or the work of fact-checkers around the world can help in stopping misinformation from spreading. My goal in this talk is to demonstrate that we also hold false or unreliable beliefs and argue that we need technologies that can assess the information we and others share over time. Additionally, I will discuss the benefits, challenges, and risks of using automated methods for correcting people when they share misinformation.
Year(s) Of Engagement Activity 2022
URL https://www.dagstuhl.de/en/seminars/seminar-calendar/seminar-details/22361
 
Description European Language Technology conference 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact The event announced the release of the European Language Grid and raised awareness of the platform and its capabilities. We made contact with one of the technical teams working on the platform and investigated its potential for supporting our processing of misinformation and fact-checks in various international languages.
Year(s) Of Engagement Activity 2022
URL https://lr-coordination.eu/node/468
 
Description International Focus Group with IFCN signatory Fact-checkers 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Other audiences
Results and Impact In this activity, we investigated the challenging aspects of fact-checking and how fact-checkers and journalists explain their fact-checking process to their stakeholders. In particular, we were interested in how fact-checkers and journalists would use automated approaches to identifying or assessing misinformation. Two challenges we had already identified were:

1. Dealing with "misleading" misinformation versus fabricated content
2. Dealing with disinformation, and understanding the difference between disinformation and misinformation

We decided that a heterogeneous focus group with different types of fact-checking organisations and journalists would provide the rich, qualitative data needed to explore this complex topic. Our focus group had the following objectives:

- Explore with fact-checkers and journalists how misleading and disinformation-related news items are best fact-checked and explained
- Discuss unique contextual considerations for engaging with the public about misleading news and disinformation
- Identify common challenges and pitfalls, as well as best practices
- Use practitioner experiences to distill requirements for technology to support challenging explanations

We contacted all of the signatories to the International Fact-Checking Network (97 organisations) to tell them about the CIMPLE project, direct them to our website, and ask whether they would like to contribute to research on explaining fact-checks to users. We received 10 responses and had 7 participants in the focus group, 5 of whom represented fact-checking organisations working in different regions of the world. We met on Microsoft Teams for approximately 2 hours.

The participants were able to exchange their different experiences of fact-checking, which varied with the level of media freedom and the availability of verifiable information in their respective countries. These contextual factors had not yet played a role in our computational approaches to understanding the spread of misinformation and fact-checking, which suggests a pathway for future research in the project.

Another worrying insight concerned how polarisation and political conflict over vaccines bleed over into other vaccine-related issues, leading, for example, to the return of polio as a result of anti-vaccination sentiment. Fact-checkers cited the importance of distrust in government in creating vulnerability to misinformation about health and security. Connecting topics of misinformation to such mediating features of public trust in government represents another contextual feature (alongside freedom of the press) that should be used in understanding the proliferation of misinformation or corrective information.

In addition, fact-checkers were able to exchange views on how religion and religious figures shape the information environment, in comparison with scientific thought and scientists. The fact that knowledge evolves and changes over time is sometimes difficult to grasp, and science is often complicated, involving processes that lay people find difficult to summarise. Misinformation around numbers and statistics is a general challenge that fact-checkers and journalists experience. Other general challenges included how to present many "nested" facts so that they tell a coherent story, and how to deal with the perceived short attention span of those engaging with their content. These challenges help us to prioritise some of the ways in which a technological approach could assist fact-checkers.

The variable experiences we identified around the media environment and the public's relationship with government prompted us to conduct further studies with fact-checkers outside Western Europe and North America; we are currently recruiting for those follow-up activities. These focus groups will form the basis of an analysis that we will present to a wider audience at relevant conferences, for example the Truth and Trust Online (TTO) conference in the UK, which has become an important venue for those involved in academia, the third sector, industry and policy-making.
Year(s) Of Engagement Activity 2022
 
Description Media coverage 
Form Of Engagement Activity A press release, press conference or response to a media enquiry/interview
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact The Telegraph published an article titled "Musk scraps Twitter's Covid misinformation policy", covering Twitter's removal of safeguards that were in place to curb the spread of misinformation. The article included a quote from Prof Alani on the likelihood that such actions would encourage some accounts to increase their sharing of misinformation.
Year(s) Of Engagement Activity 2022
URL https://www.telegraph.co.uk/business/2022/11/29/musk-scraps-twitters-covid-misinformation-policy/