
Controllable text generation: toward non-toxic, unbiased and factual language models for sensitive applications

Lead Research Organisation: Imperial College London
Department Name: Computing

Abstract

We will investigate the following methodologies as a starting point:
1. Parameter-efficient paradigms - which allow fine-tuning a pretrained model for a downstream task without modifying all of its parameters - are not only more computationally efficient; they have also been found to outperform full model tuning in many cases [12, 13, 14]. These paradigms include prompt tuning [12], adapters [13] and prefix tuning [14]. It is worth noting that recent papers have investigated all three approaches in relation to the detoxification and debiasing of language models, with encouraging results [15, 16]. We aim to investigate these paradigms and build upon the existing research (see the first sketch after this list).

2. Retrieval-augmented generation has been shown to achieve state-of-the-art performance on a number of benchmarks [17]. This approach augments the knowledge implicitly stored in a model's parameters with a knowledge retriever that attends to an external corpus, such as Wikipedia or a knowledge graph [18, 19]. Since these corpora can encode facts with historical and scientific accuracy as well as societal norms and values [20, 21, 22], it is worth investigating whether retrieval-augmented approaches can result in less biased and less toxic models (see the second sketch below).
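
As a concrete illustration of the first paradigm, below is a minimal sketch of soft prompt tuning [12] in PyTorch with the Hugging Face transformers library: the pretrained model is frozen and only a small matrix of learnable "virtual token" embeddings, prepended to the input, is updated. The model name (gpt2), prompt length and learning rate are illustrative assumptions, not choices made in this project.

```python
# Minimal sketch of soft prompt tuning: freeze the pretrained LM and train only
# a small set of "virtual token" embeddings prepended to every input.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"                      # assumption: any causal LM would do
num_virtual_tokens = 20                  # assumption: typical soft-prompt length

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
for p in model.parameters():             # freeze every pretrained weight
    p.requires_grad = False

embed = model.get_input_embeddings()
soft_prompt = nn.Parameter(              # the only trainable parameters
    torch.randn(num_virtual_tokens, embed.embedding_dim) * 0.02
)

def forward_with_prompt(input_ids, labels):
    tok_embeds = embed(input_ids)                            # (B, T, D)
    prompt = soft_prompt.unsqueeze(0).expand(input_ids.size(0), -1, -1)
    inputs_embeds = torch.cat([prompt, tok_embeds], dim=1)   # (B, P+T, D)
    # Mask out the loss on the virtual-token positions with the ignore index.
    pad = torch.full((input_ids.size(0), num_virtual_tokens), -100,
                     dtype=labels.dtype)
    return model(inputs_embeds=inputs_embeds,
                 labels=torch.cat([pad, labels], dim=1))

optimizer = torch.optim.AdamW([soft_prompt], lr=1e-3)        # assumption: lr
batch = tokenizer(["An example training sentence."], return_tensors="pt")
out = forward_with_prompt(batch["input_ids"], batch["input_ids"].clone())
out.loss.backward()
optimizer.step()
```

Adapters [13] and prefix tuning [14] follow the same pattern of freezing the base model, differing only in where the small set of trainable parameters is inserted.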
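The second sketch illustrates retrieval-augmented generation in its simplest form: a query retrieves supporting passages from an external corpus, and the retrieved text is concatenated with the prompt before generation. The tiny corpus, the TF-IDF retriever from scikit-learn and the gpt2 model are stand-in assumptions; the cited systems [17, 18, 19] use learned dense retrievers over Wikipedia or knowledge graphs.

```python
# Minimal sketch of retrieval-augmented generation: retrieve supporting
# passages for a query, then condition the language model on them.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity
from transformers import AutoModelForCausalLM, AutoTokenizer

corpus = [                                   # assumption: tiny stand-in corpus
    "Marie Curie was awarded Nobel Prizes in Physics and Chemistry.",
    "The Eiffel Tower is located in Paris, France.",
    "Water boils at 100 degrees Celsius at sea level.",
]

vectorizer = TfidfVectorizer().fit(corpus)
corpus_vecs = vectorizer.transform(corpus)

def retrieve(query, k=2):
    """Return the k corpus passages most similar to the query."""
    sims = cosine_similarity(vectorizer.transform([query]), corpus_vecs)[0]
    top = sims.argsort()[::-1][:k]
    return [corpus[i] for i in top]

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # assumption: any causal LM
model = AutoModelForCausalLM.from_pretrained("gpt2")

query = "Which prizes did Marie Curie win?"
context = " ".join(retrieve(query))
prompt = f"Context: {context}\nQuestion: {query}\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=30,
                            pad_token_id=tokenizer.eos_token_id)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

Because the generated text is conditioned on retrieved passages, the factuality and tone of the output can be steered by the contents of the external corpus, which is the property the project proposes to exploit for debiasing and detoxification.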

People

Publications

Studentship Projects

Project Reference  Relationship  Related To    Start       End         Student Name
EP/T51780X/1                                   30/09/2020  29/09/2025
2902174            Studentship   EP/T51780X/1  01/01/2022  29/06/2025