Twenty20Insight

Lead Research Organisation: University of Warwick
Department Name: Computer Science

Abstract

Abstracts are not currently available in GtR for all funded research. This is normally because the abstract was not required at the time of proposal submission, but may be because it included sensitive information such as personal details.

Publications

Yan H. (2022) Addressing Token Uniformity in Transformers via Singular Value Transformation in Proceedings of the 38th Conference on Uncertainty in Artificial Intelligence, UAI 2022

Cavallaro M (2023) Bayesian inference of polymerase dynamics over the exclusion process in Royal Society Open Science

Zhou Y. (2023) Causal Inference from Text: Unveiling Interactions between Variables in Findings of the Association for Computational Linguistics: EMNLP 2023

Li J. (2023) Distilling ChatGPT for Explainable Automated Student Answer Assessment in Findings of the Association for Computational Linguistics: EMNLP 2023

Liang B (2023) Embedding Refinement Framework for Targeted Aspect-Based Sentiment Analysis in IEEE Transactions on Affective Computing

Yan H (2024) Explainable Recommender With Geometric Information Bottleneck in IEEE Transactions on Knowledge and Data Engineering

 
Description Our key findings are summarised below:

(1) We have proposed a novel singular value transformation function to address the token uniformity problem in the widely used Transformer architecture, whereby different tokens come to share a large proportion of similar information after passing through a transformer's multiple stacked layers. We use the distribution of singular values of each transformer layer's outputs to characterise the phenomenon of token uniformity, and empirically illustrate that a less skewed singular value distribution can alleviate the problem. Based on these observations, we define several desirable properties of singular value distributions and propose a novel transformation function for updating the singular values. We also show that, apart from alleviating token uniformity, the transformation function should preserve the local neighbourhood structure of the original embedding space.
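
A minimal sketch of the idea, under simplifying assumptions: the function below is a hypothetical concave map over singular values (not the exact transformation from the paper), applied to one layer's token representations via SVD.

```python
import numpy as np

def transform_singular_values(s, alpha=0.5):
    # Hypothetical concave, increasing map (not the paper's exact function):
    # it compresses large singular values relative to small ones, making
    # the singular value distribution less skewed.
    return (1.0 - np.exp(-alpha * s)) / alpha

def alleviate_token_uniformity(H, alpha=0.5):
    # H: (num_tokens, hidden_dim) outputs of one transformer layer.
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    s_new = transform_singular_values(s, alpha)
    s_new *= s[0] / s_new[0]      # keep the top singular value unchanged
    return U @ np.diag(s_new) @ Vt

# Example: 128 tokens with 768-dimensional hidden states.
H = np.random.randn(128, 768)
H_adjusted = alleviate_token_uniformity(H)
```

Because the map is increasing and concave, it narrows the gap between the largest and smallest singular values without reordering them, in the spirit of reducing token uniformity while keeping the local geometry of the embedding space largely intact.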

(2) We have developed a new explainable AI (XAI) approach that provides hierarchical interpretations for neural text classifiers. Most existing XAI approaches aim at identifying input features, such as words or phrases, that are important for model predictions. Neural models developed in NLP, however, often compose word semantics in a hierarchical manner, so interpretation by words or phrases alone cannot faithfully explain model decisions. We have proposed a Hierarchical Interpretable Neural Text classifier, called HINT, which is able to identify the latent semantic factors, and their compositions, that contribute to the model's final decisions; this is often beyond what word-level interpretations can capture.
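
To make the contrast with word-level attribution concrete, here is a toy illustration (with made-up topics and weights, not HINT itself) in which topics, rather than individual words, serve as the basic unit of interpretation:

```python
import numpy as np

# Toy topic-level interpretation (hypothetical topics and weights):
# the classifier scores documents from topic proportions, so its
# explanation is a ranking of topics rather than of tokens.
topics = {
    "service":  ["staff", "friendly", "rude", "waiter"],
    "food":     ["delicious", "bland", "fresh", "menu"],
    "ambience": ["noisy", "cozy", "decor", "lighting"],
}
doc = "the staff were friendly and the menu was fresh".split()

# Document-topic proportions from simple word-topic matching.
counts = np.array([sum(w in words for w in doc) for words in topics.values()],
                  dtype=float)
theta = counts / counts.sum()

weights = np.array([1.5, 0.8, 0.1])   # assumed per-topic sentiment weights
score = float(theta @ weights)        # document-level prediction

# Explanation: topics ranked by their contribution to the score.
ranking = sorted(zip(topics, theta * weights), key=lambda kv: -kv[1])
print(f"score={score:.2f}; explanation: {ranking}")
```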

(3) We have developed an explainable recommender system that simultaneously considers both implicit user-item interactions and users' reviews of items. The system infers latent semantic factors from user-item reviews, which are used both for recommendation and for explanation generation. We have shown that our model significantly improves the interpretability of existing recommender systems built on variational autoencoders while achieving recommendation performance comparable to existing content-based recommender systems.
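
As a toy sketch of this dual use of latent factors (hypothetical factor names and random numbers, not the actual model, which builds on a variational autoencoder), the same factors that score a user-item pair can also name the evidence behind the recommendation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Pretend these factors were inferred from user-item reviews.
factor_names = ["battery life", "screen quality", "price"]
user_factors = rng.random((4, 3))   # 4 users x 3 latent factors
item_factors = rng.random((5, 3))   # 5 items x 3 latent factors

# Recommendation: rank items by the factorised preference score.
scores = user_factors @ item_factors.T
user = 0
best_item = int(np.argmax(scores[user]))

# Explanation: the latent factor contributing most to this score.
contrib = user_factors[user] * item_factors[best_item]
top_factor = factor_names[int(np.argmax(contrib))]
print(f"Recommend item {best_item} to user {user}; "
      f"main shared interest: {top_factor}")
```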
Exploitation Route The transformer architecture is widely used in pre-trained language models such as BERT, ALBERT, RoBERTa, DistilBERT and GPT, and has been extensively employed to tackle various tasks in natural language processing and computer vision. Our proposed singular value transformation function thus has great potential to address the token uniformity problem in models built on the transformer architecture.

Our XAI approach for neural text classification and our interpretable recommender system can be applied to a wide range of tasks such as sentiment analysis, topic classification, rumour veracity assessment, and product recommendation.
Sectors Digital/Communication/Information Technologies (including Software)

Education

Financial Services, and Management Consultancy

Healthcare

Pharmaceuticals and Medical Biotechnology

URL https://sites.google.com/view/yulanhe/trustworthy-ai
 
Description The impacts of our research are evident in the following areas: (1) We have proposed a series of novel approaches addressing the interpretability concerns surrounding neural models in language understanding. These include a hierarchical interpretable text classifier going beyond word-level interpretations, uncertainty interpretation of text classifiers built on pre-trained language models, explainable recommender systems harnessing information across diverse modalities, and explainable student answer scoring leveraging rationales generated by ChatGPT. Our approaches and findings shed light on potential advancements in interpretable language understanding. (2) Our proposed explainable student answer scoring system is currently under further development funded by the EPSRC's Impact Acceleration Account, with the aim of deployment by AQA.
First Year Of Impact 2024
Sector Digital/Communication/Information Technologies (including Software), Education, Other
 
Title Addressing token uniformity in transformers using the singular value transformation function (SoftDecay) 
Description Token uniformity is commonly observed in transformer-based models, in which different tokens come to share a large proportion of similar information after passing through multiple stacked self-attention layers. We propose to use the distribution of singular values of each transformer layer's outputs to characterise the phenomenon of token uniformity and empirically illustrate that a less skewed singular value distribution can alleviate the problem. Based on our observations, we define several desirable properties of singular value distributions and propose a novel transformation function for updating the singular values. We show that apart from alleviating token uniformity, the transformation function should preserve the local neighbourhood structure of the original embedding space. Our proposed singular value transformation function has been applied to a range of transformer-based language models, such as BERT, ALBERT, RoBERTa and DistilBERT, with improved performance observed in semantic textual similarity evaluation and on a range of GLUE tasks. 
Type Of Material Computer model/algorithm 
Year Produced 2022 
Provided To Others? Yes  
Impact The proposed approach is described in a paper published in UAI 2022. 
URL https://github.com/hanqi-qi/tokenUni
 
Title CUE: a text Classifier Uncertainty Explanation model 
Description CUE aims to interpret uncertainties inherent in the predictions of text classifiers built on Pre-trained Language Models (PLMs). In particular, we first map PLM-encoded representations to a latent space via a variational autoencoder. We then generate text representations by perturbing the latent space, which causes fluctuations in predictive uncertainty. By comparing the difference in predictive uncertainty between the perturbed and the original text representations, we are able to identify the latent dimensions responsible for uncertainty and subsequently trace back to the input features that contribute to it. A toy sketch of this perturb-and-compare procedure follows this record. 
Type Of Material Computer model/algorithm 
Year Produced 2023 
Provided To Others? Yes  
Impact The model is proposed in a paper published in UAI 2023. 
URL https://github.com/lijiazheng99/CUE
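
A toy sketch of CUE's perturb-and-compare procedure, with a random linear classifier over a latent code standing in for the PLM-plus-VAE pipeline (all names and numbers here are illustrative):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predictive_entropy(probs):
    # Uncertainty of a class distribution (higher = more uncertain).
    return -np.sum(probs * np.log(probs + 1e-12))

rng = np.random.default_rng(1)
W = rng.standard_normal((3, 8))   # stand-in classifier: 3 classes, 8 latent dims
z = rng.standard_normal(8)        # latent code of one input text

base = predictive_entropy(softmax(W @ z))

# Perturb each latent dimension and measure the resulting fluctuation in
# predictive uncertainty; dimensions with large shifts are the ones
# "responsible" for the uncertainty.
shifts = []
for i in range(len(z)):
    z_pert = z.copy()
    z_pert[i] += 0.5              # small perturbation along dimension i
    shifts.append(abs(predictive_entropy(softmax(W @ z_pert)) - base))

print("Most uncertainty-relevant latent dimension:", int(np.argmax(shifts)))
```

In the full model, the flagged latent dimensions are then traced back through the variational autoencoder to the input features that drive the uncertainty.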
 
Title DIVA - the Disentangling Interaction of VAriables framework proposed for causal inference from text 
Description Adjusting for latent covariates is crucial for estimating causal effects from observational textual data. Most existing methods only account for confounding covariates that affect both treatment and outcome, potentially leading to biased causal effect estimates. This bias arises from insufficient consideration of non-confounding covariates, which are relevant only to either the treatment or the outcome. Our proposed framework, DIVA, mitigates this bias by unveiling interactions between different variables in order to disentangle the non-confounding covariates when estimating causal effects from text. The disentangling process ensures that covariates only contribute to their respective objectives, enabling independence between variables. Additionally, we impose a constraint that balances representations from the treatment and control groups to alleviate selection bias. A toy numeric illustration of the confounding bias DIVA targets follows this record. 
Type Of Material Computer model/algorithm 
Year Produced 2023 
Provided To Others? Yes  
Impact The approach is presented in a paper accepted to the Findings of EMNLP 2023 (https://aclanthology.org/2023.findings-emnlp.709.pdf). 
URL https://github.com/zyxnlp/DIVA
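
A toy numeric illustration of the bias DIVA targets (synthetic data, not the paper's estimator): when a confounder drives both treatment and outcome, the naive group difference overstates the true effect, while adjusting for the confounder recovers it.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000

c = rng.integers(0, 2, n)                          # binary confounder
t = (rng.random(n) < np.where(c == 1, 0.8, 0.2)).astype(float)  # treatment depends on c
y = 1.0 * t + 2.0 * c + rng.standard_normal(n)     # true treatment effect = 1.0

# Naive contrast is biased upward, because treated units tend to have c = 1.
naive = y[t == 1].mean() - y[t == 0].mean()        # ~2.2

# Adjusting for c (stratify on c, then average) recovers the true effect.
adjusted = np.mean([y[(t == 1) & (c == k)].mean() - y[(t == 0) & (c == k)].mean()
                    for k in (0, 1)])              # ~1.0
print(f"naive: {naive:.2f}, adjusted: {adjusted:.2f}")
```

DIVA additionally separates out covariates that affect only the treatment or only the outcome, which simple confounder adjustment, as above, does not attempt.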
 
Title Hierarchical Interpretable Neural Text classifier (HINT) 
Description Recent years have witnessed increasing interest in developing interpretable models in Natural Language Processing (NLP). Most existing models aim at identifying input features such as words or phrases important for model predictions. Neural models developed in NLP, however, often compose word semantics in a hierarchical manner. As such, interpretation by words or phrases alone cannot faithfully explain model decisions in text classification. We propose a novel Hierarchical Interpretable Neural Text classifier, called HINT, which can automatically generate explanations of model predictions in the form of label-associated topics in a hierarchical manner. Model interpretation is no longer at the word level, but is built on topics as the basic semantic unit. Experimental results on both review and news datasets show that our proposed approach achieves text classification results on par with existing state-of-the-art text classifiers, and generates interpretations that are more faithful to model predictions and better understood by humans than those of other interpretable neural text classifiers. 
Type Of Material Computer model/algorithm 
Year Produced 2022 
Provided To Others? Yes  
Impact The approach is described in a paper published in the Computational Linguistics journal. 
URL https://github.com/hanqi-qi/HINT
 
Title MATTE -- a doMain AdapTive counTerfactual gEneration model 
Description Counterfactual generation lies at the core of various machine learning tasks. Existing disentangled methods crucially rely on oversimplified assumptions, such as assuming independent content and style variables, to identify the latent variables, even though such assumptions may not hold for complex data distributions. This problem is exacerbated when data are sampled from multiple domains, since the dependence between content and style may vary significantly across domains. We proposed the doMain AdapTive counTerfactual gEneration model, called MATTE, which addresses the domain-varying dependence between the content and style variables inherent in the counterfactual generation task. 
Type Of Material Computer model/algorithm 
Year Produced 2023 
Provided To Others? Yes  
Impact The model is presented in a paper published in NeurIPS 2023 (https://openreview.net/pdf?id=cslnCXE9XA). 
URL https://github.com/hanqi-qi/Matte
 
Description Featured in Futurum, an online magazine 
Form Of Engagement Activity A magazine, newsletter or online publication
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Schools
Results and Impact Yulan He was featured in Futurum Careers, an online magazine, discussing her work on teaching computers to understand human language and offering guidance to young people interested in AI and NLP. Futurum Careers is a free online resource and magazine that introduces 14-19-year-olds worldwide to careers in science, tech, engineering, maths, medicine, social sciences, humanities and the arts.
Year(s) Of Engagement Activity 2022
URL https://futurumcareers.com/teaching-computers-to-understand-our-language
 
Description Interview by New Scientist 
Form Of Engagement Activity A press release, press conference or response to a media enquiry/interview
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Media (as a channel to the public)
Results and Impact I was interviewed by New Scientist to comment on a ChatGPT detector.
Year(s) Of Engagement Activity 2023
URL https://www.newscientist.com/article/2355035-chatgpt-detector-could-help-spot-cheaters-using-ai-to-w...
 
Description Invited talk at AI UK 2022 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Presented my work on machine reasoning for natural language understanding at AI UK 2022. My talk led to a collaborative project with AQA and joint research proposals with several UK universities.
Year(s) Of Engagement Activity 2022
URL https://www.turing.ac.uk/node/7396
 
Description Invited talk at LSEG 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Invited talk on "Advancing FinTech through NLP Research" at the London Stock Exchange Group in January 2024.
Year(s) Of Engagement Activity 2023
 
Description Invited talk at Zebra Technologies 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact Invited talk on "Interactive Narrative Understanding" at Zebra Technologies in November 2023.
Year(s) Of Engagement Activity 2023
 
Description Invited talk at the University of Cambridge 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Postgraduate students
Results and Impact Yulan He was invited to give a talk on "Hierarchical Interpretation of Neural Text Classification" in the Language Technology Lab (LTL) at the University of Cambridge, headed by Anna Korhonen and Nigel Collier. A follow-up discussion was held in March between Anna and Yulan to explore potential future collaborations.
Year(s) Of Engagement Activity 2022
URL http://131.111.150.181/talk/index/170564
 
Description Keynote at CIKM 2023 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Delivered a keynote presentation on "Interpretable Natural Language Understanding" at the 32nd ACM International Conference on Information and Knowledge Management (CIKM), which was held in Birmingham, UK in October 2023.
Year(s) Of Engagement Activity 2023
URL https://uobevents.eventsair.com/cikm2023/yulan-he
 
Description Keynote at NLDB 2023 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Delivered a keynote on "Interpretable Language Understanding" at the 28th International Conference on Natural Language & Information Systems (NLDB), held in Derby, UK in June 2023.
Year(s) Of Engagement Activity 2023
URL https://www.derby.ac.uk/events/latest-events/nldb-2023/
 
Description Tutorial in the Oxford Machine Learning Summer School 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact I delivered a tutorial on recent developments in sentiment analysis at the Oxford Machine Learning Summer School, targeting postgraduate students and researchers working in AI and machine learning.
Year(s) Of Engagement Activity 2022
URL https://www.oxfordml.school/oxml2022