Learning from COVID-19: An AI-enabled evidence-driven framework for claim veracity assessment during pandemics

Lead Research Organisation: University of Warwick
Department Name: Computer Science

Abstract

The term 'infodemic', coined by the WHO, refers to misinformation during pandemics that can create panic, fragment the social response, affect rates of transmission and encourage trade in untested treatments that put people's lives in danger. The WHO and government agencies have to divert significant resources to combat infodemics, and their scale makes it essential to employ computational techniques for claim veracity assessment. However, existing approaches largely rely on supervised learning. Present accuracy levels fall short of those required for practical adoption, because training datasets are small and performance tends to degrade significantly on claims/topics unseen during training; current practices are therefore unsuitable for addressing the scale and complexity of the COVID-19 infodemic.

This project will research novel supervised/unsupervised methods for veracity assessment of claims unverified at the time of posting, by integrating information from multiple sources and building a knowledge network that enables cross-verification. Key originating sources/agents will be identified through patterns of misinformation propagation, and results will be presented via a novel visualisation interface for easy interpretation by users.

This high-level aim gives rise to the following objectives:
RO1. Collect COVID-19 related data from social media platforms and authoritative resources.
RO2. Develop automated methods to extract key information on COVID-19 from scientific publications and other relevant sources.
RO3. Develop novel unsupervised/supervised approaches for veracity assessment by incorporating evidence from external sources.
RO4. Analyse the dynamic spreading patterns of rumours in social media; identify the key sources/agents and develop effective containment strategies.
RO5. Validate the methods via a set of new visualisation interfaces.
 
Description Our key findings are summarised below:

- Our project has created new datasets and developed a number of novel approaches for fact checking and claim verification, which have been evaluated on these datasets through a range of experiments. The experiments have provided new knowledge on the architecture of such models and on how different elements of information (such as inference relationships and similarity scores) can be combined for more effective veracity assessment, as well as a deeper understanding of the generalisability of fact verification models, opening promising paths for improvement such as few-shot learning and the updating of embeddings. These findings have advanced our knowledge of claim veracity assessment specifically related to COVID-19.

- We have developed PANACEA, a web-based misinformation detection system for COVID-19 related claims, which has two modules: fact-checking and rumour detection. The fact-checking module, supported by novel natural language inference methods with a self-attention network, outperforms state-of-the-art approaches. It provides an automated veracity assessment together with ranked supporting evidence and its stance towards the claim being checked. In addition, PANACEA adapts the bi-directional graph convolutional network model, which detects rumours from the comment networks of related tweets rather than relying on a knowledge base. This rumour detection module assists by warning users at an early stage, when a knowledge base may not yet be available.
Exploitation Route We have organised interviews with BBC Monitoring and Full Fact to gain a better understanding of the fact-checking processes currently adopted by journalists at the BBC and by the fact-checking organisation. The findings from the interviews guided the development of a fact-checking tool that meets the requirements of journalists and fact checkers.
Sectors Digital/Communication/Information Technologies (including Software), Government, Democracy and Justice, Other

URL https://panacea2020.github.io/index.html
 
Title BERT-based Text and Image multimodal model with Contrastive learning (BTIC) 
Description The BERT-based Text and Image multimodal model with Contrastive learning (BTIC) has been developed for unreliable multimodal news detection. It captures both textual and visual information from unreliable articles utilising a contrastive learning strategy. The contrastive learner interacts with the unreliable news classifier to push similar credible news (or similar unreliable news) closer together, while moving news articles with similar content but opposite credibility labels away from each other in the multimodal embedding space. 
Type Of Material Computer model/algorithm 
Year Produced 2021 
Provided To Others? Yes  
Impact NA 
URL https://github.com/WenjiaZh/BTIC
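To make the contrastive objective above concrete, the following is a minimal PyTorch sketch of a supervised contrastive loss over multimodal article embeddings. The function name, temperature and input shapes are illustrative assumptions; the released BTIC code at the URL above is the authoritative implementation.

    import torch
    import torch.nn.functional as F

    def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
        """Pull articles with the same credibility label together and push
        articles with opposite labels apart in the embedding space.

        embeddings: (batch, dim) multimodal article representations
        labels:     (batch,) credibility labels (e.g. 0 = credible, 1 = unreliable)
        """
        z = F.normalize(embeddings, dim=1)               # cosine-similarity space
        sim = z @ z.T / temperature                      # pairwise similarities
        n = z.size(0)
        self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
        sim = sim.masked_fill(self_mask, float("-inf"))  # ignore self-pairs
        # Positives are other articles sharing the anchor's label
        pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        pos_count = pos_mask.sum(dim=1).clamp(min=1)
        loss = -(log_prob.masked_fill(~pos_mask, 0.0)).sum(dim=1) / pos_count
        return loss.mean()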
 
Title COVID-RV - a novel COVID-19 dataset of false claims and relevant Twitter conversations 
Description To facilitate generalisability evaluation of rumour verification models, we introduce the COVID-RV (COVID-Rumour Verification) dataset. It extends CovidLies (Hossain et al., 2020), a manually curated dataset of claims on COVID-19, by associating claims with social media conversations from Twitter. COVID-RV is carefully curated and manually annotated in two stages for tweet relevance and stance towards the claim. Unlike datasets containing only individual posts, COVID-RV matches claims with relevant tweets that are the sources of conversations, together with the associated conversation threads. This makes it possible to evaluate rumour verification models that make use of conversation threads. 
Type Of Material Database/Collection of data 
Year Produced 2021 
Provided To Others? No  
Impact This dataset will be released publicly and used to evaluate existing models for rumour verification and develop novel ones. 
URL https://panacea2020.github.io/
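As an illustration of how a COVID-RV-style entry might be represented when evaluating rumour verification models over conversation threads, the sketch below uses hypothetical field names; the released dataset's actual schema may differ (see the URL above).

    from dataclasses import dataclass, field

    @dataclass
    class ClaimConversation:
        """Hypothetical record pairing a false claim with a Twitter conversation."""
        claim: str                                   # the (false) claim being verified
        source_tweet: str                            # tweet that starts the conversation
        relevance: str                               # 'relevant' or 'irrelevant' to the claim
        stance: str                                  # 'support', 'deny' or 'discuss'
        replies: list = field(default_factory=list)  # the rest of the conversation thread

    def relevant_conversations(entries):
        """Keep only conversations whose source tweet was judged relevant,
        the subset a rumour verification model would actually be evaluated on."""
        return [e for e in entries if e.relevance == "relevant"]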
 
Title Dynamic Brand-Topic Model (dBTM) 
Description Monitoring online customer reviews is important for business organisations to measure customer satisfaction and better manage their reputations. We propose a novel dynamic Brand-Topic Model (dBTM) which is able to automatically detect and track brand-associated sentiment scores and polarity-bearing topics from product reviews organised in temporally-ordered time intervals. dBTM models the evolution of the latent brand polarity scores and the topic-word distributions over time by Gaussian state space models. It also incorporates a meta learning strategy to control the update of the topic-word distribution in each time interval in order to ensure smooth topic transitions and better brand score predictions. It has been evaluated on a dataset constructed from MakeupAlley reviews and a hotel review dataset. Experimental results show that dBTM outperforms a number of competitive baselines in brand ranking, achieving a good balance of topic coherence and uniqueness, and extracting well-separated polarity-bearing topics across time intervals. 
Type Of Material Computer model/algorithm 
Year Produced 2022 
Provided To Others? Yes  
Impact The model is described in a paper accepted by the Transactions of the Association for Computational Linguistics (TACL). 
URL https://github.com/BLPXSPG/dBTM
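The Gaussian state-space assumption behind dBTM can be illustrated with a short sketch: the latent brand score in each time interval is drawn around its value in the previous interval, which is what keeps scores and topics smooth over time. Variable names and variances below are illustrative only; the released dBTM code at the URL above is the reference implementation.

    import numpy as np

    def evolve_brand_scores(num_brands=5, num_intervals=10, drift_std=0.1, seed=0):
        """Gaussian random-walk evolution of latent brand polarity scores:
        score_t ~ Normal(score_{t-1}, drift_std^2), i.e. each interval's score
        stays close to the previous one, giving smooth transitions over time."""
        rng = np.random.default_rng(seed)
        scores = np.zeros((num_intervals, num_brands))
        scores[0] = rng.normal(0.0, 1.0, size=num_brands)     # initial prior
        for t in range(1, num_intervals):
            scores[t] = rng.normal(scores[t - 1], drift_std)  # state-space transition
        return scores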
 
Title NLI-SAN 
Description NLI-SAN is a veracity assessment approach for automated fact-checking of claims. It uses Natural Language Inference (NLI) and contextualised representations of claims and evidence, combining the inference relation between claims and evidence with attention techniques. 
Type Of Material Computer model/algorithm 
Year Produced 2022 
Provided To Others? No  
Impact The description of the approach, as well as an online platform implementing it, will soon be published (material currently under review). 
URL https://panacea2020.github.io
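The following sketch shows the kind of claim-evidence inference signal NLI-SAN builds on: an off-the-shelf NLI model scores each piece of evidence against the claim, and the resulting contradiction/neutral/entailment probabilities are what the self-attention network then fuses. The use of the publicly available roberta-large-mnli checkpoint here is an illustrative assumption, not the model used in NLI-SAN itself.

    import torch
    from transformers import AutoModelForSequenceClassification, AutoTokenizer

    MODEL_NAME = "roberta-large-mnli"   # generic NLI model, for illustration only
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    nli_model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)

    def claim_evidence_relations(claim, evidence_sentences):
        """Return per-evidence probabilities over the NLI labels
        (contradiction / neutral / entailment) for a given claim."""
        pairs = [(evidence, claim) for evidence in evidence_sentences]
        enc = tokenizer(pairs, padding=True, truncation=True, return_tensors="pt")
        with torch.no_grad():
            probs = nli_model(**enc).logits.softmax(dim=-1)
        labels = [nli_model.config.id2label[i] for i in range(probs.shape[-1])]
        return labels, probs   # probs shape: (num_evidence, num_labels)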
 
Title PANACEA Dataset 
Description The dataset aggregates a heterogeneous set of COVID-19 claims categorised as True or False. Aggregation of heterogeneous sources involved a careful deduplication process to ensure dataset quality. Fact-checking sources are provided for veracity assessment, as well as additional information sources for True claims. Additionally, claims are labelled with sub-types (Multimodal, Social Media, Questions, Numerical, and Named Entities). The LARGE version of the dataset contains 5,143 claims and the SMALL version 1,709 claims. 
Type Of Material Database/Collection of data 
Year Produced 2022 
Provided To Others? No  
Impact The dataset will soon be published (currently under review) so that it can be used by the research community. 
URL https://panacea2020.github.io
 
Title PANACEA dataset - Heterogeneous COVID-19 Claims 
Description The peer-reviewed publication for this dataset was presented at the 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL) and can be accessed here: https://arxiv.org/abs/2205.02596. Please cite this when using the dataset.

This dataset contains a heterogeneous set of True and False COVID-19 claims and online sources of information for each claim. The claims have been obtained from online fact-checking sources, existing datasets and research challenges. It combines data sources with different foci, enabling a comprehensive approach that spans different media (Twitter, Facebook, general websites, academia), information domains (health, scholar, media), information types (news, claims) and applications (information retrieval, veracity evaluation).

The processing of the claims included an extensive de-duplication process eliminating repeated or very similar claims. The dataset is presented in a LARGE and a SMALL version, accounting for different degrees of similarity between the remaining claims (excluding claims with a 90% and a 99% probability of being similar, respectively, as obtained through the MonoT5 model). The similarity of claims was analysed using BM25 (Robertson et al., 1995; Crestani et al., 1998; Robertson and Zaragoza, 2009) with MonoT5 re-ranking (Nogueira et al., 2020), and BERTScore (Zhang et al., 2019). The processing also involved removing claims that only make a direct reference to existing content in other media (audio, video, photos); automatically obtained content not representing claims; and entries with claims or fact-checking sources in languages other than English.

The claims were analysed to identify types of claims that may be of particular interest, either for inclusion or exclusion depending on the type of analysis. The following types were identified: (1) Multimodal; (2) Social media references; (3) Claims including questions; (4) Claims including numerical content; (5) Named entities, including PERSON (people, including fictional), ORGANIZATION (companies, agencies, institutions, etc.), GPE (countries, cities, states) and FACILITY (buildings, highways, etc.). These entities were detected using a RoBERTa base English model (Liu et al., 2019) trained on the OntoNotes Release 5.0 dataset (Weischedel et al., 2013) using spaCy. The original labels for the claims have been reviewed and homogenised from the different criteria used by each original fact-checker into the final True and False labels.

The data sources used are:
- The CoronaVirusFacts/DatosCoronaVirus Alliance Database. https://www.poynter.org/ifcn-covid-19-misinformation/
- CoAID dataset (Cui and Lee, 2020). https://github.com/cuilimeng/CoAID
- MM-COVID (Li et al., 2020). https://github.com/bigheiniu/MM-COVID
- CovidLies (Hossain et al., 2020). https://github.com/ucinlp/covid19-data
- TREC Health Misinformation track. https://trec-health-misinfo.github.io/
- TREC COVID challenge (Voorhees et al., 2021; Roberts et al., 2020). https://ir.nist.gov/covidSubmit/data.html

The LARGE dataset contains 5,143 claims (1,810 False and 3,333 True), and the SMALL version 1,709 claims (477 False and 1,232 True). The entries in the dataset contain the following information:
- Claim. Text of the claim.
- Claim label. The labels are: False and True.
- Claim source. The sources include mostly fact-checking websites, health information websites, health clinics, public institutions sites, and peer-reviewed scientific journals.
- Original information source. Information about which general information source was used to obtain the claim.
- Claim type. The types, explained above, are: Multimodal, Social Media, Questions, Numerical, and Named Entities.

Funding. This work was supported by the UK Engineering and Physical Sciences Research Council (grant no. EP/V048597/1, EP/T017112/1). ML and YH are supported by Turing AI Fellowships funded by UK Research and Innovation (grant no. EP/V030302/1, EP/V020579/1).

References:
- Arana-Catania M., Kochkina E., Zubiaga A., Liakata M., Procter R. and He Y. 2022. Natural Language Inference with Self-Attention for Veracity Assessment of Pandemic Claims. NAACL 2022. https://arxiv.org/abs/2205.02596
- Stephen E Robertson, Steve Walker, Susan Jones, Micheline M Hancock-Beaulieu, Mike Gatford, et al. 1995. Okapi at TREC-3. NIST Special Publication Sp, 109:109.
- Fabio Crestani, Mounia Lalmas, Cornelis J Van Rijsbergen, and Iain Campbell. 1998. "Is this document relevant? ... probably": a survey of probabilistic models in information retrieval. ACM Computing Surveys (CSUR), 30(4):528-552.
- Stephen Robertson and Hugo Zaragoza. 2009. The Probabilistic Relevance Framework: BM25 and Beyond. Now Publishers Inc.
- Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document ranking with a pre-trained sequence-to-sequence model. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: Findings, pages 708-718.
- Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q Weinberger, and Yoav Artzi. 2019. BERTScore: Evaluating text generation with BERT. In International Conference on Learning Representations.
- Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
- Ralph Weischedel, Martha Palmer, Mitchell Marcus, Eduard Hovy, Sameer Pradhan, Lance Ramshaw, Nianwen Xue, Ann Taylor, Jeff Kaufman, Michelle Franchini, et al. 2013. OntoNotes Release 5.0 LDC2013T19. Linguistic Data Consortium, Philadelphia, PA, 23.
- Limeng Cui and Dongwon Lee. 2020. CoAID: COVID-19 healthcare misinformation dataset. arXiv preprint arXiv:2006.00885.
- Yichuan Li, Bohan Jiang, Kai Shu, and Huan Liu. 2020. MM-COVID: A multilingual and multimodal data repository for combating COVID-19 disinformation.
- Tamanna Hossain, Robert L. Logan IV, Arjuna Ugarte, Yoshitomo Matsubara, Sean Young, and Sameer Singh. 2020. COVIDLies: Detecting COVID-19 misinformation on social media. In Proceedings of the 1st Workshop on NLP for COVID-19 (Part 2) at EMNLP 2020, Online. Association for Computational Linguistics.
- Ellen Voorhees, Tasmeer Alam, Steven Bedrick, Dina Demner-Fushman, William R Hersh, Kyle Lo, Kirk Roberts, Ian Soboroff, and Lucy Lu Wang. 2021. TREC-COVID: constructing a pandemic information retrieval test collection. In ACM SIGIR Forum, volume 54, pages 1-12. ACM New York, NY, USA. 
Type Of Material Database/Collection of data 
Year Produced 2022 
Provided To Others? Yes  
Impact The dataset has been downloaded 255 times since its release in July 2022. 
URL https://zenodo.org/record/6493847
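The de-duplication stage described above can be sketched as a two-step pipeline: BM25 shortlists lexically similar claim pairs and BERTScore confirms semantic near-duplicates (the released dataset additionally used MonoT5 re-ranking between the two stages). The helper below is a simplified illustration with assumed thresholds, not the processing code used to build the dataset.

    from rank_bm25 import BM25Okapi
    from bert_score import score as bert_score

    def near_duplicate_pairs(claims, bm25_top_k=10, bertscore_threshold=0.92):
        """Return index pairs of claims that look like near-duplicates."""
        tokenized = [c.lower().split() for c in claims]
        bm25 = BM25Okapi(tokenized)
        pairs = set()
        for i, query in enumerate(tokenized):
            scores = bm25.get_scores(query)
            # Stage 1: shortlist the most lexically similar other claims
            candidates = sorted((j for j in range(len(claims)) if j != i),
                                key=lambda j: scores[j], reverse=True)[:bm25_top_k]
            if not candidates:
                continue
            cands = [claims[j] for j in candidates]
            refs = [claims[i]] * len(cands)
            # Stage 2: confirm semantic similarity with BERTScore F1
            _, _, f1 = bert_score(cands, refs, lang="en")
            for j, f in zip(candidates, f1.tolist()):
                if f >= bertscore_threshold:
                    pairs.add(tuple(sorted((i, j))))
        return pairs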
 
Title Stance-Augmented VAE Disentanglement model (SAVED) 
Description The SAVED model has been proposed for Twitter rumour veracity assessment. It incorporates a Variational Autoencoder (VAE) with adversarial learning to disentangle topics that are informative for stance classification from those that are not. Tweet representations are derived from the word representations learned in the latent stance-dependent topic space, and are then used to train a veracity classifier that labels an input tweet as true, false or unverified. The model achieves state-of-the-art accuracy on the commonly used PHEME dataset for Twitter veracity assessment. 
Type Of Material Computer model/algorithm 
Year Produced 2021 
Provided To Others? Yes  
Impact The developed SAVED model achieves state-of-the-art accuracy on the commonly used PHEME dataset for Twitter veracity assessment. 
URL https://github.com/JohnNLP/SAVED
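A schematic of the disentanglement idea in SAVED is given below: a VAE latent space is split into a stance-informative part, used by the stance/veracity head, and a residual part targeted by an adversary (which is encouraged to fail). The layer sizes, the even split of the latent space and the module names are illustrative assumptions; the released code at the URL above is the reference implementation.

    import torch
    import torch.nn as nn

    class DisentangledTopicVAE(nn.Module):
        """Sketch of a VAE whose latent topics are split into a stance-informative
        half and a residual half monitored by an adversarial classifier."""

        def __init__(self, input_dim=768, topic_dim=50, num_classes=3):
            super().__init__()
            self.mu = nn.Linear(input_dim, topic_dim)
            self.logvar = nn.Linear(input_dim, topic_dim)
            self.decoder = nn.Linear(topic_dim, input_dim)
            self.split = topic_dim // 2
            self.stance_head = nn.Linear(self.split, num_classes)            # informative topics
            self.adversary = nn.Linear(topic_dim - self.split, num_classes)  # should stay uninformative

        def forward(self, x):
            mu, logvar = self.mu(x), self.logvar(x)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterisation trick
            recon = self.decoder(z)
            stance_logits = self.stance_head(z[:, :self.split])
            adversary_logits = self.adversary(z[:, self.split:])  # trained adversarially
            return recon, mu, logvar, stance_logits, adversary_logits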
 
Title Topic-Aware Evidence Reasoning and Stance-Aware Aggregation model (TARSA) 
Description TARSA was proposed for more accurate fact verification, with the following four key properties: 1) checking topical consistency between the claim and evidence; 2) maintaining topical coherence among multiple pieces of evidence; 3) ensuring semantic similarity between the global topic information and the semantic representation of evidence; 4) aggregating evidence based on their implicit stances to the claim. 
Type Of Material Computer model/algorithm 
Year Produced 2021 
Provided To Others? Yes  
Impact Since the paper was published in 2021, it has been cited by authors from Checkstep Research, Sofia University in Bulgaria, University of Copenhagen in Denmark, Qatar Computing Research Institute, Fudan University in China, ByteDance AI Lab and the University of California, Santa Barbara in the US. 
URL https://github.com/jasenchn/TARSA
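Property (4) above, stance-based evidence aggregation, can be illustrated with a small attention-style module: each piece of evidence receives a scalar stance score with respect to the claim, and the scores weight the evidence when it is pooled. The bilinear scorer and dimensions are illustrative assumptions; TARSA's full model additionally enforces the topical-consistency and coherence properties (1)-(3).

    import torch
    import torch.nn as nn

    class StanceWeightedAggregation(nn.Module):
        """Pool evidence representations with weights derived from their
        (implicit) stance towards the claim."""

        def __init__(self, dim=768):
            super().__init__()
            self.stance_scorer = nn.Bilinear(dim, dim, 1)

        def forward(self, claim_vec, evidence_vecs):
            # claim_vec: (dim,), evidence_vecs: (num_evidence, dim)
            claim_rep = claim_vec.unsqueeze(0).repeat(evidence_vecs.size(0), 1)
            scores = self.stance_scorer(claim_rep, evidence_vecs).squeeze(-1)
            weights = scores.softmax(dim=0)                 # attention over evidence
            return (weights.unsqueeze(-1) * evidence_vecs).sum(dim=0)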
 
Title Vaccine Attitude Detection (VAD) model 
Description We propose a novel semi-supervised approach for vaccine attitude detection, called VADET. A variational autoencoding architecture based on language models is employed to learn the topical information of the domain from unlabelled data. The model is then fine-tuned with a few manually annotated examples of user attitudes. We validate the effectiveness of VADET on our annotated data and also on an existing vaccination corpus annotated with opinions on vaccines. Our results show that VADET is able to learn disentangled stance and aspect topics, and outperforms existing aspect-based sentiment analysis models on both stance detection and tweet clustering. 
Type Of Material Computer model/algorithm 
Year Produced 2022 
Provided To Others? Yes  
Impact The model is proposed in a paper published in NAACL 2022. 
URL https://github.com/somethingx1202/VADet
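The two-stage, semi-supervised training schedule described above can be sketched as follows. The model methods vae_loss and attitude_loss are hypothetical placeholders standing in for VADET's unsupervised (reconstruction plus KL) and supervised objectives respectively; see the repository above for the actual training code.

    import torch

    def train_semi_supervised(model, unlabelled_loader, labelled_loader,
                              unsup_epochs=3, sup_epochs=2, lr=2e-5):
        """Stage 1: learn domain topics from unlabelled tweets with the VAE objective.
        Stage 2: fine-tune on a small set of manually annotated attitude labels."""
        optimiser = torch.optim.AdamW(model.parameters(), lr=lr)
        for _ in range(unsup_epochs):
            for batch in unlabelled_loader:
                loss = model.vae_loss(batch)               # hypothetical: reconstruction + KL
                optimiser.zero_grad()
                loss.backward()
                optimiser.step()
        for _ in range(sup_epochs):
            for batch, labels in labelled_loader:
                loss = model.attitude_loss(batch, labels)  # hypothetical: supervised head
                optimiser.zero_grad()
                loss.backward()
                optimiser.step()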
 
Description Collaboration with University of California, Irvine (UCI) 
Organisation University of California, Irvine
Department Donald Bren School of Information and Computer Sciences (ICS)
Country United States 
Sector Academic/University 
PI Contribution With the goal of creating a dataset of Twitter conversations discussing misconceptions about COVID-19, we collected Twitter conversations related to the given claims and provided manual relevance annotations. We then performed experiments on methods for claim-tweet matching, as well as on the generalisability of rumour verification models to the new dataset.
Collaborator Contribution We have been collaborating with researchers from the University of California, Irvine (UCI), Tamanna Hossain, Robert L. Logan IV, Arjuna Ugarte and Sameer Singh, authors of the COVIDLies dataset. They collaborated on studying the stance detection task, posing the question of whether a tweet spreads a known misconception about COVID-19. They provided a manually assessed set of misconceptions and annotated relevant instances in the dataset we created for stance towards the claim, as either supporting, denying or discussing.
Impact A manually annotated dataset of Twitter conversations discussing misconceptions about COVID-19; the relevant publication is under review.
Start Year 2020
 
Title Code for CIKM Short Paper "Supervised Contrastive Learning for Multimodal Unreliable News Detection in COVID-19 Pandemic" 
Description The code for CIKM short paper "Supervised Contrastive Learning for Multimodal Unreliable News Detection in COVID-19 Pandemic". In this work, we propose a BERT-based multimodal unreliable news detection framework, which captures both textual and visual information from unreliable articles utilising the contrastive learning strategy. The contrastive learner interacts with the unreliable news classifier to push similar credible news (or similar unreliable news) closer while moving news articles with similar content but opposite credibility labels away from each other in the multimodal embedding space. 
Type Of Technology Software 
Year Produced 2021 
Open Source License? Yes  
Impact NA 
URL https://zenodo.org/record/6342230
 
Title PANACEA: An Automated Misinformation Detection System on COVID-19 
Description Our web-based misinformation detection system PANACEA for COVID-19 related claims has two modules: fact-checking and rumour detection. The fact-checking module, supported by novel natural language inference methods with a self-attention network, outperforms state-of-the-art approaches. It provides an automated veracity assessment together with ranked supporting evidence and its stance towards the claim being checked. In addition, PANACEA adapts the bi-directional graph convolutional network model, which detects rumours from the comment networks of related tweets rather than relying on a knowledge base. This rumour detection module assists by warning users at an early stage, when a knowledge base may not yet be available. 
Type Of Technology Webtool/Application 
Year Produced 2023 
Open Source License? Yes  
Impact Our developed system has been summarised in a demo paper accepted at EACL 2023, a leading conference in NLP. It has also been selected for presentation at AI UK 2023.
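The rumour detection module's bi-directional graph convolution over a tweet comment network can be sketched as two message-passing directions, top-down along reply edges (propagation) and bottom-up (dispersion), whose pooled representations are concatenated for classification. The dense-adjacency formulation and layer sizes below are illustrative simplifications of the bi-directional GCN model the system adapts, not the deployed implementation.

    import torch
    import torch.nn as nn

    class BiDirectionalGCN(nn.Module):
        """Classify a rumour from its comment network using two GCN passes."""

        def __init__(self, in_dim=768, hidden_dim=64, num_classes=3):
            super().__init__()
            self.top_down = nn.Linear(in_dim, hidden_dim)   # propagation direction
            self.bottom_up = nn.Linear(in_dim, hidden_dim)  # dispersion direction
            self.classifier = nn.Linear(2 * hidden_dim, num_classes)

        @staticmethod
        def aggregate(adj, x):
            """Mean-aggregate neighbour features along the given edge direction."""
            degree = adj.sum(dim=1, keepdim=True).clamp(min=1)
            return (adj @ x) / degree

        def forward(self, tweet_feats, adj):
            # tweet_feats: (num_tweets, in_dim); adj[i, j] = 1 if tweet i replies to tweet j
            h_down = torch.relu(self.top_down(self.aggregate(adj, tweet_feats)))
            h_up = torch.relu(self.bottom_up(self.aggregate(adj.T, tweet_feats)))
            graph_rep = torch.cat([h_down.mean(dim=0), h_up.mean(dim=0)])
            return self.classifier(graph_rep)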
 
Description Featured in Futurum, an online magazine 
Form Of Engagement Activity A magazine, newsletter or online publication
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Schools
Results and Impact Yulan He was featured in Futurum Careers, an online magazine, discussing her work on teaching computers to understand human language and offering guidance to young people interested in AI and NLP. Futurum Careers is a free online resource and magazine aimed at introducing 14-19-year-olds worldwide to the world of work in science, tech, engineering, maths, medicine, social sciences, humanities and the arts for people and the economy.
Year(s) Of Engagement Activity 2022
URL https://futurumcareers.com/teaching-computers-to-understand-our-language
 
Description Interview by New Scientist 
Form Of Engagement Activity A press release, press conference or response to a media enquiry/interview
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Media (as a channel to the public)
Results and Impact I was interviewed by New Scientist to comment on a ChatGPT detector.
Year(s) Of Engagement Activity 2023
URL https://www.newscientist.com/article/2355035-chatgpt-detector-could-help-spot-cheaters-using-ai-to-w...
 
Description Invited talk at AI UK 2022 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact I presented my work on machine reasoning for natural language understanding at AI UK 2022. My talk led to a collaborative project with AQA and joint research proposals with a few UK universities.
Year(s) Of Engagement Activity 2022
URL https://www.turing.ac.uk/node/7396
 
Description Invited talk at the University of Durham 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Postgraduate students
Results and Impact I presented our recent work on (1) multimodal unreliable news detection using supervised contrastive learning; (2) COVID-related claim veracity assessment with a self-attention network built on natural language inference; and (3) vaccine attitude detection in social media through disentangled learning of stance and aspect topics. The talk sparked further discussions on potential collaborations.
Year(s) Of Engagement Activity 2022
URL https://aihs.webspace.durham.ac.uk/seminars/
 
Description Invited talk, AI4Media Workshop on "Human- and Society-centred AI", online event 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact The workshop included presentations from AI4Media partners and two invited speakers. As an invited speaker, I presented a review of existing methods and challenges related to automated rumour verification. The presentation sparked questions and discussion afterwards.
Year(s) Of Engagement Activity 2021
URL https://www.vision4ai.eu/ai4media-workshop-human-society-ai/
 
Description Invited talk, USC Viterbi, Information Sciences Institute, online event 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact I was invited to be a guest speaker at the ISI AI seminar. The title of my talk was "State of the Art and Challenges of Automated Rumour Verification in Social Media Conversations". The seminar had approximately 30-40 participants, and the talk sparked questions and discussion afterwards. Additionally, I had individual meetings with members of their department.
Year(s) Of Engagement Activity 2021
 
Description Mediate workshop 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact We organised a Mediate workshop (https://digitalmediasig.github.io/Mediate2021/) as part of the International AAAI Conference on Web and Social Media (ICWSM) on the topic of misinformation: automation, uptake, and digital governance. The main goal of the workshop was to bring together media practitioners and technologists to discuss new opportunities and obstacles that arise in the modern era of information diffusion, including the challenges and discoveries related to COVID-19 misinformation.
Year(s) Of Engagement Activity 2021,2022
URL https://digitalmediasig.github.io/Mediate2021/
 
Description Mediate workshop "Misinformation: automation, uptake, and digital governance" at the International AAAI Conference on Web and Social Media (ICWSM), 2021 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact I was a co-organiser of the second Mediate workshop, which was held virtually on June 7 as part of the International AAAI Conference on Web and Social Media (ICWSM). The main goal of the workshop was to bring together media practitioners and technologists to discuss new opportunities and obstacles that arise in the modern era of information diffusion. We had six invited keynote speakers who shared their perspectives on the three main themes of the workshop, two contributions on automated methods for tackling misinformation, and three contributions on the uptake of automation, discussing potential solutions that social media platforms could implement to combat the spread of misinformation.
Year(s) Of Engagement Activity 2021
URL https://digitalmediasig.github.io/Mediate2021/
 
Description PANACEA workshop 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact At the workshop we presented the main findings of the EPSRC-funded project PANACEA, in which we developed novel methods for veracity assessment of claims unverified at the time of posting, by integrating information from multiple sources, together with a novel visualisation interface for easy interpretation by users. The PANACEA workshop brought together experts from academia, industry and government agencies to discuss the open challenges in claim veracity assessment and feasible mitigation strategies to combat the propagation of misinformation. The event resulted in further collaboration opportunities.
Year(s) Of Engagement Activity 2022
URL https://panacea2020.github.io/workshop.html
 
Description Talk at QMUL EECS department event 
Form Of Engagement Activity Participation in an open day or visit at my research institution
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Postgraduate students
Results and Impact The EECS department held an event at which everyone gave a short talk about their research to foster collaboration.
Year(s) Of Engagement Activity 2021
 
Description Truth and Trust Online Conference (TTO) 2021 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact I was a publicity chair for the TTO 2021 conference. The annual Conference for Truth and Trust Online is organised as a unique collaboration between practitioners, technologists, academics and platforms, to share, discuss, and collaborate on useful technical innovations and research in the space.
Year(s) Of Engagement Activity 2021
URL https://truthandtrustonline.com/tto-2021/conference-for-truth-and-trust-online-2021/
 
Description Tutorial in the Oxford Machine Learning Summer School 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact I delivered a tutorial on recent developments in sentiment analysis at the Oxford Machine Learning Summer School, targeting postgraduate students and researchers working in AI and machine learning.
Year(s) Of Engagement Activity 2022
URL https://www.oxfordml.school/oxml2022
 
Description Tutorial in the Oxford Machine Learning Summer School 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Yulan He was invited to give a tutorial on recent developments in sentiment analysis at the Oxford Machine Learning Summer School, held in August 2021. The tutorial attracted over 200 participants. As participants highly praised the tutorial, Yulan was invited to give a tutorial again at the Summer School in August 2022.
Year(s) Of Engagement Activity 2021
URL https://www.oxfordml.school/2021