
Twenty20Insight

Lead Research Organisation: University of Warwick
Department Name: Computer Science

Abstract

Abstracts are not currently available in GtR for all funded research. This is normally because the abstract was not required at the time of proposal submission, but may be because it included sensitive information such as personal details.

Publications


Related Projects

Project Reference  Relationship  Related To     Start       End         Award Value
EP/T017112/1       -             -              31/08/2020  29/09/2022  £305,864
EP/T017112/2       Transfer      EP/T017112/1   30/09/2022  30/08/2023  £90,127
 
Description Our key findings are summarised below:

(1) We have proposed a novel singular value transformation function to address the token uniformity problem of the widely used Transformer architecture, in which different tokens come to share a large proportion of similar information after passing through multiple stacked layers in a transformer. We propose using the distribution of singular values of each transformer layer's outputs to characterise token uniformity, and empirically illustrate that a less skewed singular value distribution can alleviate the problem. Based on these observations, we define several desirable properties of singular value distributions and propose a novel transformation function for updating the singular values. We show that, apart from alleviating token uniformity, the transformation function should preserve the local neighbourhood structure of the original embedding space.
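The idea can be sketched as follows: take the SVD of a layer's token-embedding matrix and apply a concave transformation that reduces the skew of the singular value spectrum while keeping the top singular value fixed. The power-law transform below is a simple stand-in chosen for illustration, not the actual SoftDecay function defined in our paper.

```python
import numpy as np

def transform_singular_values(layer_output, power=0.6):
    """Reduce the skew of the singular value spectrum of a token-embedding
    matrix. Raising singular values to a power < 1 is a stand-in for the
    SoftDecay transformation described in the paper."""
    u, s, vt = np.linalg.svd(layer_output, full_matrices=False)
    s_new = s ** power           # compress large values more than small ones
    s_new *= s[0] / s_new[0]     # keep the top singular value unchanged
    return u @ np.diag(s_new) @ vt

rng = np.random.default_rng(0)
# Token embeddings with a deliberately skewed spectrum (16 tokens, dim 8)
x = rng.normal(size=(16, 8)) @ np.diag([8.0, 4.0, 2.0, 1.0, 0.5, 0.25, 0.1, 0.05])
y = transform_singular_values(x)
s_before = np.linalg.svd(x, compute_uv=False)
s_after = np.linalg.svd(y, compute_uv=False)
# The transformed spectrum is less skewed: the smallest/largest ratio grows
print(s_after[-1] / s_after[0] > s_before[-1] / s_before[0])
```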

(2) We have developed a new explainable AI (XAI) approach that provides hierarchical interpretations for neural text classifiers. Most existing XAI approaches aim to identify input features, such as words or phrases, that are important for model predictions. However, neural models developed in NLP often compose word semantics in a hierarchical manner, so interpretations based on words or phrases alone cannot faithfully explain model decisions. We have proposed a Hierarchical Interpretable Neural Text classifier, called HINT, which is able to identify the latent semantic factors, and their compositions, that contribute to the model's final decisions. This is often beyond what word-level interpretations can capture.
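To illustrate why topic-level interpretation differs from word-level interpretation, the toy sketch below aggregates word-level attribution scores into scores over latent semantic factors. HINT itself learns label-associated topics jointly rather than aggregating post hoc; all scores and topic assignments here are invented for illustration.

```python
# Hypothetical word-level attribution scores from any word-level explainer
word_scores = {"plot": 0.2, "boring": 0.7, "acting": 0.2, "wooden": 0.6}
# Hypothetical assignment of words to latent semantic factors (topics)
word_topic = {"plot": "story", "boring": "story",
              "acting": "performance", "wooden": "performance"}

# Aggregate word-level scores into topic-level scores
topic_scores = {}
for word, score in word_scores.items():
    topic = word_topic[word]
    topic_scores[topic] = topic_scores.get(topic, 0.0) + score

# A topic-level explanation names the semantic factor driving the
# prediction, rather than isolated words
print(max(topic_scores, key=topic_scores.get))  # "story"
```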

(3) We have developed an explainable recommender system that simultaneously considers both implicit user-item interactions and users' reviews of items. It infers latent semantic factors from user-item reviews, which can be used for both recommendation and explanation generation. We have shown that our model significantly improves the interpretability of existing recommender systems built on variational autoencoders, while achieving recommendation performance comparable to existing content-based recommender systems.
Exploitation Route The transformer architecture is widely used in pre-trained language models such as BERT, ALBERT, RoBERTa, DistilBERT and GPT, and has been extensively employed to tackle various tasks in Natural Language Processing and computer vision. Our proposed singular value transformation function thus has great potential to address the token uniformity problem in models built on the transformer architecture.

Our XAI approach for neural text classification and our interpretable recommender systems can be applied to a wide range of tasks such as sentiment analysis, topic classification, rumour veracity assessment, and product recommendation.
Sectors Digital/Communication/Information Technologies (including Software)

Education

Financial Services and Management Consultancy

Healthcare

Pharmaceuticals and Medical Biotechnology

URL https://sites.google.com/view/yulanhe/trustworthy-ai
 
Description The impacts of our research are evident in the following areas: (1) We have proposed a series of novel approaches addressing the interpretability concerns surrounding neural models in language understanding. These include a hierarchical interpretable text classifier that goes beyond word-level interpretations, uncertainty interpretation of text classifiers built on pre-trained language models, explainable recommender systems that harness information across diverse modalities, and explainable student answer scoring that leverages rationales generated by ChatGPT. Our approaches and findings shed light on potential advancements in interpretable language understanding. (2) Our proposed explainable student answer scoring system is currently under further development, funded by the EPSRC's Impact Acceleration Account, with the aim of deployment by AQA.
First Year Of Impact 2024
Sector Digital/Communication/Information Technologies (including Software), Education, Other
 
Title Addressing token uniformity in transformers using the singular value transformation function (SoftDecay) 
Description Token uniformity is commonly observed in transformer-based models, in which different tokens come to share a large proportion of similar information after passing through multiple stacked self-attention layers in a transformer. We propose using the distribution of singular values of each transformer layer's outputs to characterise token uniformity, and empirically illustrate that a less skewed singular value distribution can alleviate the problem. Based on our observations, we define several desirable properties of singular value distributions and propose a novel transformation function for updating the singular values. We show that, apart from alleviating token uniformity, the transformation function should preserve the local neighbourhood structure of the original embedding space. Our proposed singular value transformation function has been applied to a range of transformer-based language models, including BERT, ALBERT, RoBERTa and DistilBERT, with improved performance observed in semantic textual similarity evaluation and on a range of GLUE tasks. 
Type Of Material Computer model/algorithm 
Year Produced 2022 
Provided To Others? Yes  
Impact The proposed approach is described in a paper published in UAI 2022. 
URL https://github.com/hanqi-qi/tokenUni
 
Title CUE: a text Classifier Uncertainty Explanation model 
Description CUE aims to interpret uncertainties inherent in the predictions of text classifiers built on Pre-trained Language Models (PLMs). In particular, we first map PLM-encoded representations to a latent space via a variational auto-encoder. We then generate text representations by perturbing the latent space which causes fluctuation in predictive uncertainty. By comparing the difference in predictive uncertainty between the perturbed and the original text representations, we are able to identify the latent dimensions responsible for uncertainty and subsequently trace back to the input features that contribute to such uncertainty. 
Type Of Material Computer model/algorithm 
Year Produced 2023 
Provided To Others? Yes  
Impact The model is proposed in a paper published in UAI 2023. 
URL https://github.com/lijiazheng99/CUE
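The perturbation step at the heart of CUE can be illustrated with a toy stand-in for the PLM + VAE pipeline described above: perturb each latent dimension in turn, and attribute uncertainty to the dimensions whose perturbation most changes the predictive entropy. The latent code and linear classifier below are invented purely for illustration.

```python
import numpy as np

def predictive_entropy(logits):
    """Entropy of the softmax distribution, a standard uncertainty score."""
    p = np.exp(logits - logits.max())
    p /= p.sum()
    return -np.sum(p * np.log(p + 1e-12))

def uncertainty_per_dim(z, classifier, eps=0.5):
    """Perturb each latent dimension in turn and record how much the
    predictive entropy changes; large changes mark dimensions that are
    responsible for the classifier's uncertainty."""
    base = predictive_entropy(classifier(z))
    deltas = []
    for i in range(len(z)):
        z_pert = z.copy()
        z_pert[i] += eps
        deltas.append(abs(predictive_entropy(classifier(z_pert)) - base))
    return np.array(deltas)

# Toy linear classifier whose logits depend only on the first two latent
# dimensions, so those dimensions should dominate the attribution
W = np.zeros((3, 6))
W[:, 0] = [2.0, -1.0, -1.0]
W[:, 1] = [-1.0, 2.0, -1.0]
scores = uncertainty_per_dim(np.ones(6), lambda z: W @ z)
print(sorted(scores.argsort()[::-1][:2].tolist()))  # [0, 1]
```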
 
Title DIVA - the Disentangling Interaction of VAriables framework proposed for causal inference from text 
Description Adjusting for latent covariates is crucial for estimating causal effects from observational textual data. Most existing methods only account for confounding covariates that affect both treatment and outcome, potentially leading to biased causal effects. This bias arises from insufficient consideration of non-confounding covariates, which are relevant only to either the treatment or the outcome. Our proposed framework DIVA can mitigate the bias by unveiling interactions between different variables to disentangle the non-confounding covariates when estimating causal effects from text. The disentangling process ensures covariates only contribute to their respective objectives, enabling independence between variables. Additionally, we impose a constraint to balance representations from the treatment group and control group to alleviate selection bias. 
Type Of Material Computer model/algorithm 
Year Produced 2023 
Provided To Others? Yes  
Impact The approach is presented in a paper accepted to the Findings of EMNLP 2023 (https://aclanthology.org/2023.findings-emnlp.709.pdf). 
URL https://github.com/zyxnlp/DIVA
 
Title Hierarchical Interpretable Neural Text classifier (HINT) 
Description Recent years have witnessed increasing interest in developing interpretable models in Natural Language Processing (NLP). Most existing models aim at identifying input features, such as words or phrases, that are important for model predictions. Neural models developed in NLP, however, often compose word semantics in a hierarchical manner. As such, interpretation by words or phrases alone cannot faithfully explain model decisions in text classification. We propose a novel Hierarchical Interpretable Neural Text classifier, called HINT, which can automatically generate explanations of model predictions in the form of label-associated topics in a hierarchical manner. Model interpretation is no longer at the word level, but built on topics as the basic semantic unit. Experimental results on both review and news datasets show that our proposed approach achieves text classification results on par with existing state-of-the-art text classifiers, and generates interpretations that are more faithful to model predictions and better understood by humans than those of other interpretable neural text classifiers. 
Type Of Material Computer model/algorithm 
Year Produced 2022 
Provided To Others? Yes  
Impact The approach is described in a paper published in the Computational Linguistics journal. 
URL https://github.com/hanqi-qi/HINT
 
Title MATTE -- a doMain AdapTive counTerfactual gEneration model 
Description Counterfactual generation lies at the core of various machine learning tasks. Existing disentangled methods crucially rely on oversimplified assumptions, such as assuming independent content and style variables, to identify the latent variables, even though such assumptions may not hold for complex data distributions. This problem is exacerbated when data are sampled from multiple domains, since the dependence between content and style may vary significantly across domains. We proposed the doMain AdapTive counTerfactual gEneration model, called MATTE, which addresses the domain-varying dependence between the content and style variables inherent in the counterfactual generation task. 
Type Of Material Computer model/algorithm 
Year Produced 2023 
Provided To Others? Yes  
Impact The model is presented in a paper published in NeurIPS 2023 (https://openreview.net/pdf?id=cslnCXE9XA). 
URL https://github.com/hanqi-qi/Matte
 
Title MOBO: The MOvie and BOok reviews dataset 
Description The MOBO dataset. The MOvie and BOok reviews dataset is a collection of movie and book reviews paired with their related plots. The reviews come from different publicly available datasets: the Stanford IMDB movie reviews, GoodReads, and the Amazon reviews dataset. With the help of 15 annotators, we further labelled more than 18,000 review sentences (~6,000 per corpus), marking the sentence polarity (Positive, Negative), whether a sentence describes its corresponding movie/book plot (Plot), or none of the above (None). In the dataset folder, we have shared an excerpt of the annotated sentences for each dataset. 
Type Of Material Database/Collection of data 
Year Produced 2022 
Provided To Others? Yes  
Impact As of March 2024, the dataset had received 177 downloads. 
URL https://zenodo.org/record/6348893
 
Title MOBO: The MOvie and BOok reviews dataset 
Description The MOvie and BOok reviews dataset is a collection of movie and book reviews paired with their related plots. The reviews come from different publicly available datasets: the Stanford IMDB movie reviews, GoodReads, and the Amazon reviews dataset. With the help of 15 annotators, we further labelled more than 18,000 review sentences (~6,000 per corpus), marking the sentence polarity (Positive, Negative), whether a sentence describes its corresponding movie/book plot (Plot), or none of the above (None). 
Type Of Material Database/Collection of data 
Year Produced 2021 
Provided To Others? Yes  
Impact Since the dataset was published in 2021, it has been cited by authors from Baidu Research in the US, the Institute for Research in Biomedicine (IRB) in Spain, the Universitat Politècnica de València in Spain, and the University of Sao Paulo in Brazil. 
URL https://zenodo.org/record/6348894#.Yix8pBDP1f0
 
Title A Disentangled Adversarial Neural Topic Model for Separating Opinions from Plots in User Reviews 
Description This is the code of the DIATOM model presented in the NAACL 2021 paper: A Disentangled Adversarial Neural Topic Model for Separating Opinions from Plots in User Reviews, G. Pergola, L. Gui, Y. He, NAACL 2021. Abstract: "The flexibility of the inference process in Variational Autoencoders (VAEs) has recently led to revising traditional probabilistic topic models giving rise to Neural Topic Models (NTM). Although these approaches have achieved significant results, surprisingly very little work has been done on how to disentangle the latent topics. Existing topic models when applied to reviews may extract topics associated with writers' subjective opinions mixed with those related to factual descriptions such as plot summaries in movie and book reviews. It is thus desirable to automatically separate opinion topics from plot/neutral ones enabling a better interpretability. In this paper, we propose a neural topic model combined with adversarial training to disentangle opinion topics from plot and neutral ones. We conduct an extensive experimental assessment introducing a new collection of movie and book reviews paired with their plots, namely MOBO dataset, showing an improved coherence and variety of topics, a consistent disentanglement rate, and sentiment classification performance superior to other supervised topic models." 
Type Of Technology Software 
Year Produced 2022 
Open Source License? Yes  
URL https://zenodo.org/record/6349199
 
Title A Disentangled Adversarial Neural Topic Model for Separating Opinions from Plots in User Reviews 
Description This is the code of the DIATOM model presented in the NAACL 2021 paper: A Disentangled Adversarial Neural Topic Model for Separating Opinions from Plots in User Reviews, G. Pergola, L. Gui, Y. He, NAACL 2021. Abstract: "The flexibility of the inference process in Variational Autoencoders (VAEs) has recently led to revising traditional probabilistic topic models giving rise to Neural Topic Models (NTM). Although these approaches have achieved significant results, surprisingly very little work has been done on how to disentangle the latent topics. Existing topic models when applied to reviews may extract topics associated with writers' subjective opinions mixed with those related to factual descriptions such as plot summaries in movie and book reviews. It is thus desirable to automatically separate opinion topics from plot/neutral ones enabling a better interpretability. In this paper, we propose a neural topic model combined with adversarial training to disentangle opinion topics from plot and neutral ones. We conduct an extensive experimental assessment introducing a new collection of movie and book reviews paired with their plots, namely MOBO dataset, showing an improved coherence and variety of topics, a consistent disentanglement rate, and sentiment classification performance superior to other supervised topic models." 
Type Of Technology Software 
Year Produced 2022 
Open Source License? Yes  
URL https://zenodo.org/record/6349198
 
Title A Neural Generative Model for Joint Learning Topics and Topic-Specific Word Embeddings 
Description topical_wordvec_models. To run the trained model, first create a save folder, download the [saved model](https://topicvecmodels.s3.eu-west-2.amazonaws.com/save/47/model), and place it in ./save/47/. To construct the training set, see https://github.com/somethingx02/topical_wordvec_model. Trained [wordvecs](https://topicvecmodels.s3.eu-west-2.amazonaws.com/save/47/aggrd_all_wordrep.txt) are also available. 
Type Of Technology Software 
Year Produced 2022 
Open Source License? Yes  
URL https://zenodo.org/record/6352450
 
Title A Neural Generative Model for Joint Learning Topics and Topic-Specific Word Embeddings 
Description topical_wordvec_models. To run the trained model, first create a save folder, download the [saved model](https://topicvecmodels.s3.eu-west-2.amazonaws.com/save/47/model), and place it in ./save/47/. To construct the training set, see https://github.com/somethingx02/topical_wordvec_model. Trained [wordvecs](https://topicvecmodels.s3.eu-west-2.amazonaws.com/save/47/aggrd_all_wordrep.txt) are also available. 
Type Of Technology Software 
Year Produced 2022 
Open Source License? Yes  
URL https://zenodo.org/record/6352449
 
Title Code for EMNLP paper "Extracting Event Temporal Relations via Hyperbolic Geometry" 
Description This is the code of EMNLP 2021 main track long paper "Extracting Event Temporal Relations via Hyperbolic Geometry". The paper proposed two hyperbolic-based approaches for the event temporal relation extraction task, which is an Event-centric Natural Language Understanding task. 
Type Of Technology Software 
Year Produced 2022 
Open Source License? Yes  
URL https://zenodo.org/record/6349213
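As background, hyperbolic approaches of this kind typically embed events in the Poincaré ball, whose geodesic distance grows rapidly towards the boundary and therefore suits hierarchical, tree-like structure such as temporal orderings. The sketch below illustrates the underlying metric only; it is not the paper's implementation.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-9):
    """Geodesic distance in the Poincare ball; points must lie strictly
    inside the unit ball."""
    sq_dist = np.sum((u - v) ** 2)
    denom = (1.0 - np.sum(u ** 2)) * (1.0 - np.sum(v ** 2))
    return np.arccosh(1.0 + 2.0 * sq_dist / (denom + eps))

origin = np.zeros(2)
near = np.array([0.1, 0.0])
boundary = np.array([0.95, 0.0])
# Distances blow up near the boundary, giving hyperbolic space the "room"
# to embed tree-like structure with low distortion
print(poincare_distance(origin, near) < poincare_distance(near, boundary))  # True
```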
 
Title Code for EMNLP paper "Extracting Event Temporal Relations via Hyperbolic Geometry" 
Description This is the code of EMNLP 2021 main track long paper "Extracting Event Temporal Relations via Hyperbolic Geometry". The paper proposed two hyperbolic-based approaches for the event temporal relation extraction task, which is an Event-centric Natural Language Understanding task. 
Type Of Technology Software 
Year Produced 2022 
Open Source License? Yes  
URL https://zenodo.org/record/6349212
 
Title Topic-Driven and Knowledge-Aware Transformer for Dialogue Emotion Detection 
Description Transformer encoder-decoder for emotion detection in dialogues 
Type Of Technology Software 
Year Produced 2022 
Open Source License? Yes  
URL https://zenodo.org/record/6352566
 
Title Topic-Driven and Knowledge-Aware Transformer for Dialogue Emotion Detection 
Description Transformer encoder-decoder for emotion detection in dialogues 
Type Of Technology Software 
Year Produced 2022 
Open Source License? Yes  
URL https://zenodo.org/record/6352567
 
Title Understanding patient reviews with minimum supervision 
Description The code for the paper: Understanding patient reviews with minimum supervision. L. Gui, Y. He. Artificial Intelligence in Medicine 120, 102160. 'read.py' extracts the clinical reviews from the Yelp dataset, which can be downloaded at https://www.yelp.com/dataset/download; the keyword list in lines 34-100 of 'read.py' can be modified for your own task. Due to size limitations, we only upload small training and testing samples as 'train' and 'test', so the performance might be slightly lower than what we reported in our paper. BibTeX: @article{gui2021understanding, title={Understanding patient reviews with minimum supervision}, author={Gui, Lin and He, Yulan}, journal={Artificial Intelligence in Medicine}, volume={120}, pages={102160}, year={2021}, publisher={Elsevier}} 
Type Of Technology Software 
Year Produced 2022 
Open Source License? Yes  
Impact Understanding patient opinions expressed towards healthcare services on online platforms could allow healthcare professionals to address patients' concerns in a timely manner. Extracting patient opinions towards various aspects of health services is closely related to aspect-based sentiment analysis (ABSA), in which we need to identify both opinion targets and target-specific opinion expressions. The lack of aspect-level annotations, however, makes it difficult to build such an ABSA system. This paper proposes a joint learning framework for simultaneous unsupervised aspect extraction at the sentence level and supervised sentiment classification at the document level. It achieves 98.2% sentiment classification accuracy when tested on reviews about healthcare services collected from Yelp, outperforming several strong baselines. Moreover, our model can extract coherent aspects and can automatically infer the distribution of aspects under different polarities without requiring aspect-level annotations for model learning. 
URL https://zenodo.org/record/6350564
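The keyword-based filtering that 'read.py' performs on the Yelp dataset can be sketched as follows; the keyword list here is illustrative, not the one used in the paper.

```python
# Illustrative keyword list; the actual list lives in lines 34-100 of 'read.py'
CLINICAL_KEYWORDS = {"doctor", "clinic", "hospital", "dentist", "nurse"}

def is_clinical_review(text):
    """Keep a review if it mentions any clinical keyword."""
    words = set(text.lower().split())
    return bool(words & CLINICAL_KEYWORDS)

reviews = [
    "The doctor was friendly and the clinic was spotless.",
    "Great pizza, will come again!",
]
clinical = [r for r in reviews if is_clinical_review(r)]
print(len(clinical))  # 1
```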
 
Title Understanding patient reviews with minimum supervision 
Description The code for the paper: Understanding patient reviews with minimum supervision. L. Gui, Y. He. Artificial Intelligence in Medicine 120, 102160. 'read.py' extracts the clinical reviews from the Yelp dataset, which can be downloaded at https://www.yelp.com/dataset/download; the keyword list in lines 34-100 of 'read.py' can be modified for your own task. Due to size limitations, we only upload small training and testing samples as 'train' and 'test', so the performance might be slightly lower than what we reported in our paper. BibTeX: @article{gui2021understanding, title={Understanding patient reviews with minimum supervision}, author={Gui, Lin and He, Yulan}, journal={Artificial Intelligence in Medicine}, volume={120}, pages={102160}, year={2021}, publisher={Elsevier}} 
Type Of Technology Software 
Year Produced 2022 
Open Source License? Yes  
URL https://zenodo.org/record/6350563
 
Description Featured in Futurum, an online magazine 
Form Of Engagement Activity A magazine, newsletter or online publication
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Schools
Results and Impact Yulan He was featured in Futurum Careers, an online magazine, discussing her work on teaching computers to understand human language and offering guidance to young people interested in AI and NLP. Futurum Careers is a free online resource and magazine aimed at introducing 14-19-year-olds worldwide to the world of work in science, tech, engineering, maths, medicine, social sciences, humanities and the arts for people and the economy.
Year(s) Of Engagement Activity 2022
URL https://futurumcareers.com/teaching-computers-to-understand-our-language
 
Description Interview by New Scientist 
Form Of Engagement Activity A press release, press conference or response to a media enquiry/interview
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Media (as a channel to the public)
Results and Impact I was interviewed by New Scientist to comment on a ChatGPT detector.
Year(s) Of Engagement Activity 2023
URL https://www.newscientist.com/article/2355035-chatgpt-detector-could-help-spot-cheaters-using-ai-to-w...
 
Description Invited talk at AI UK 2022 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Presented my work on machine reasoning for natural language understanding at AI UK 2022. My talk led to a collaborative project with AQA and joint research proposals with several UK universities.
Year(s) Of Engagement Activity 2022
URL https://www.turing.ac.uk/node/7396
 
Description Invited talk at LSEG 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Invited talk on "Advancing FinTech through NLP Research" at the London Stock Exchange Group in January 2024.
Year(s) Of Engagement Activity 2023
 
Description Invited talk at Zebra Technologies 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact Invited talk on "Interactive Narrative Understanding" at Zebra Technologies in November 2023.
Year(s) Of Engagement Activity 2023
 
Description Invited talk at the University of Cambridge 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Postgraduate students
Results and Impact Yulan He was invited to give a talk on "Hierarchical Interpretation of Neural Text Classification" in the Language Technology Lab (LTL) at the University of Cambridge, headed by Anna Korhonen and Nigel Collier. A follow-up discussion was held in March between Anna and Yulan to explore potential future collaborations.
Year(s) Of Engagement Activity 2022
URL http://131.111.150.181/talk/index/170564
 
Description Keynote at CIKM 2023 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Delivered a keynote presentation on "Interpretable Natural Language Understanding" at the 32nd ACM International Conference on Information and Knowledge Management (CIKM), which was held in Birmingham, UK in October 2023.
Year(s) Of Engagement Activity 2023
URL https://uobevents.eventsair.com/cikm2023/yulan-he
 
Description Keynote at INLG 2024 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact A keynote on "Enhancing LLM Reasoning through Reflection and Refinement" was given at the 17th International Natural Language Generation Conference, held in Tokyo, Japan in September 2024.
Year(s) Of Engagement Activity 2024
URL https://2024.inlgmeeting.org/keynotes.html
 
Description Keynote at MATHMOD 2025 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Gave a plenary talk on "Advances in Interpretable Language Modelling" at the 11th Vienna International Conference on Mathematical Modelling (MATHMOD 2025).
Year(s) Of Engagement Activity 2024
URL https://www.mathmod.at/
 
Description Keynote at NLDB 2023 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Delivered a keynote on "Interpretable Language Understanding" at the 28th International Conference on Natural Language & Information Systems (NLDB), held in Derby, UK in June 2023.
Year(s) Of Engagement Activity 2023
URL https://www.derby.ac.uk/events/latest-events/nldb-2023/
 
Description Tutorial in the Oxford Machine Learning Summer School 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact I delivered a tutorial on recent developments in sentiment analysis at the Oxford Machine Learning Summer School, targeting postgraduate students and researchers working in AI and machine learning.
Year(s) Of Engagement Activity 2022
URL https://www.oxfordml.school/oxml2022