A Lebesgue Integral based Approximation for Language Modelling

Lead Research Organisation: King's College London
Department Name: Computer Science

Abstract

Deep learning (DL) based Natural Language Processing (NLP) technologies have attracted significant interest in recent years. Current state-of-the-art language models, i.e., transformer-based language models, typically assume that the representation of a given word can be captured by interpolating its related context within a convex hull. However, it has recently been shown that in high-dimensional spaces, interpolation almost surely never occurs, regardless of the intrinsic dimension of the underlying data manifold. The representations generated by such transformer-based language models converge into a dense cone-like hyperspace that is often discontinuous, with many non-adjacent clusters. To overcome this limitation of current DL-based NLP models, this project aims to deploy the Lebesgue integral, which can be defined as an ensemble of integrals over partitions (i.e., discontinuous feature clusters), to approximate the posterior distributions of clusters given input word features in finite measurable sets, automatically identifying the boundaries of such discontinuous sets; this in turn could help to generate better interpretations and to quantify uncertainty. Under the proposed Lebesgue integral based approximation, the input text is characterised by two properties: an indicator vector encoding its membership in clusters (i.e., measurable sets), and a continuous feature representation that better captures its semantic meaning for downstream tasks. This not only allows a more faithful approximation of the countable discontinuities commonly observed in distributions of input text in NLP, but also enables learning text representations that are more readily understood by humans.
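The two properties above can be illustrated with a minimal sketch. The cluster centroids, the nearest-centroid membership rule, and the function names below are illustrative assumptions, not details of the project's actual method; the sketch only shows the general shape of a Lebesgue-style estimate, i.e., averaging a quantity within each partition (measurable set) and weighting by that set's empirical measure.

```python
import numpy as np

# Hypothetical setup: 4 feature clusters (measurable sets) in an 8-dim space.
rng = np.random.default_rng(0)
centroids = rng.normal(size=(4, 8))

def characterise(x):
    """Return the two properties for a feature vector x:
    an indicator vector over clusters, and a continuous representation
    (here simply x itself, for illustration)."""
    dists = np.linalg.norm(centroids - x, axis=1)
    indicator = np.zeros(len(centroids))
    indicator[np.argmin(dists)] = 1.0  # membership in the nearest set
    return indicator, x

def lebesgue_style_expectation(xs, values):
    """Lebesgue-style estimate: for each partition A_k, average the
    function values of its members and weight by the partition's
    empirical measure mu(A_k) = |A_k| / N."""
    indicators = np.array([characterise(x)[0] for x in xs])
    total = 0.0
    for k in range(centroids.shape[0]):
        mask = indicators[:, k] == 1.0
        if mask.any():
            total += values[mask].mean() * mask.mean()
    return total

xs = rng.normal(size=(100, 8))
vals = xs.sum(axis=1)
print(lebesgue_style_expectation(xs, vals))
```

Because the partitions here are disjoint and cover all points, this estimate coincides with the plain sample mean; the intended advantage of the partition-wise view is that each term can be modelled separately when the underlying distribution is discontinuous across clusters.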

Publications


Li H. (2023) Distinguishability Calibration to In-Context Learning in EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Findings of EACL 2023

Lu J. (2023) NapSS: Paragraph-level Medical Text Simplification via Narrative Prompting and Sentence-matching Summarization in EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Findings of EACL 2023

Tan X. (2023) Event Temporal Relation Extraction with Bayesian Translational Model in EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference

Wang, X. (2023) Document-Level Multi-Event Extraction with Event Proxy Nodes and Hausdorff Distance Minimization in Proceedings of the Annual Meeting of the Association for Computational Linguistics