eNeMILP: Non-Monotonic Incremental Language Processing

Lead Research Organisation: University of Sheffield
Department Name: Computer Science

Abstract

Research in natural language processing (NLP) is driving advances in many applications such as search engines and personal digital assistants, e.g. Apple's Siri and Amazon's Alexa. In many NLP tasks the output to be predicted is a graph representing the sentence, e.g. a syntax tree in syntactic parsing or a meaning representation in semantic parsing. Furthermore, in other tasks such as natural language generation and machine translation the predicted output is text, i.e. a sequence of words. Both types of NLP tasks have been tackled successfully with incremental modelling approaches in which prediction is decomposed into a sequence of actions constructing the output.
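To make the action-based view concrete, the following minimal sketch (our illustration, not code from any cited system) shows monotonic incremental generation in Python; each action appends a word, the growing prefix conditions the next decision, and the policy callable is a placeholder for a learned model:

def generate_incrementally(policy, context, max_len=50):
    # Build the output word by word; each action extends the current prefix.
    output = []
    for _ in range(max_len):
        action = policy(context, output)  # choose the next word given context and prefix
        if action == "<STOP>":
            break
        output.append(action)  # monotonic: words are only ever added, never revised
    return output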

Despite its success, a fundamental limitation of incremental modelling is that the actions considered typically construct the output monotonically: in natural language generation, for example, each action adds a word to the output but never removes or changes a previously predicted one. Relying exclusively on monotonic actions can thus decrease accuracy, since the effect of incorrect actions cannot be amended. Furthermore, these incorrect actions condition the prediction of subsequent ones, which is likely to result in an error cascade.

We propose an 18-month project to address this limitation and learn non-monotonic incremental language processing models, i.e. incremental models that consider actions that can "undo" the outcome of previously predicted ones. The challenge in incorporating non-monotonic actions is that, unlike their monotonic counterparts, they are not straightforward to infer from the labelled data typically available for training, thus rendering standard supervised learning approaches inapplicable. To overcome this issue we will develop novel algorithms under the imitation learning paradigm that learn non-monotonic incremental models without assuming action-level supervision, relying instead on instance-level loss functions and the model's own predictions in order to learn how to recover from incorrect actions and avoid error cascades.
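To illustrate the flavour of such an approach, the sketch below (a simplification under our own assumptions, not the project's actual algorithm) pairs a non-monotonic action space, where DELETE can undo an earlier APPEND, with a DAgger-style imitation learning loop: the model rolls in with its own predictions, and each visited state is labelled with the action whose outcome minimises an instance-level loss such as 1 - BLEU. The policy object and its methods (candidate_actions, sample, fit) are hypothetical interfaces:

def apply_action(output, action):
    # Non-monotonic action space: ("APPEND", word) adds a word;
    # ("DELETE", None) removes the most recent one, undoing an earlier mistake.
    kind, arg = action
    if kind == "APPEND":
        return output + [arg]
    if kind == "DELETE" and output:
        return output[:-1]
    return output

def imitation_learning(policy, dataset, loss_fn, iterations=5, max_steps=30):
    examples = []
    for _ in range(iterations):
        for context, reference in dataset:
            output = []
            for _ in range(max_steps):
                state = (context, tuple(output))
                # Label the state with the one-step action whose outcome has the
                # lowest instance-level loss; a full system would roll out to
                # completion before scoring each candidate.
                best = min(policy.candidate_actions(state),
                           key=lambda a: loss_fn(apply_action(output, a), reference))
                examples.append((state, best))
                # Crucially, follow the model's own (possibly wrong) prediction,
                # so it learns to recover from the states it actually reaches.
                output = apply_action(output, policy.sample(state))
        policy.fit(examples)  # retrain on the aggregated state-action pairs
    return policy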

To succeed in this goal, this proposal has the following research objectives:

1) To model non-monotonic incremental prediction of structured outputs in a generic way that can be applied to a variety of tasks with natural language text as output.

2) To learn non-monotonic incremental predictors using imitation learning, improving upon the accuracy of monotonic incremental models both in terms of automatic measures such as BLEU and in human evaluation.

3) To extend the proposed approach to structured prediction tasks with graphs as output.

4) To release software implementations of the proposed methods to facilitate reproducibility and wider adoption by the research community.

The research proposed focuses on a fundamental limitation of incremental language processing models, which have been successfully applied to a variety of natural language processing tasks; we therefore anticipate the proposal will have wide academic impact. Furthermore, the tasks we will evaluate it on, namely natural language generation and semantic parsing, are essential components of natural language interfaces and personal digital assistants. Improving these technologies will enhance accessibility to digital information and services. We will demonstrate the benefits of our approach through our collaboration with our project partner Amazon, who are supporting the proposal both by providing cloud computing credits and by hosting the research associate in order to apply the outcomes of the project to industry-scale datasets.

Planned Impact

- Economy

The two applications we will focus on in the project, natural language generation and semantic parsing, are key technologies in a variety of commercial products that require generating and understanding language. In particular, personal digital assistants such as Google Now, Microsoft's Cortana, Amazon's Alexa and Apple's Siri are used by millions of users at home and on their mobile devices, and are of great importance to these companies as gateways to many of the services and products they offer.

- Society

Personal digital assistants and natural language interfaces are used by a large number of people. Improving language generation and semantic parsing through non-monotonic incremental language processing is therefore likely to benefit these end users by improving their experience. We will explore this during the RA's research visit to Amazon, testing our approach in the context of Alexa.

- Knowledge

The project aims to address a fundamental limitation of an approach that has been successfully applied to a variety of natural language processing tasks. We therefore anticipate publishing our results in high-profile natural language processing conferences. Furthermore, we will accompany the publications with open-source implementations of our approach in the project's GitHub repository.

- People

The project will have a positive impact on the careers of both the PI and the RA. It will enable the PI to build on the success and expertise he has developed in incremental language processing using imitation learning, solidifying his position in the field while addressing a fundamental shortcoming of the approach. An EPSRC First Grant would be of great significance to the PI, as it will be the first project he proposes and delivers on his own, providing him with valuable experience and strengthening his profile when applying for further funding. Finally, the named RA has worked on language generation throughout his career, most recently with the PI on applying imitation learning to this task, achieving state-of-the-art results.

Publications


Fisher J. (2019) Merge and label: A novel neural network architecture for nested NER in ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference

Hardy (2019) HighRES: Highlight-based reference-less evaluation of summarization in ACL 2019 - 57th Annual Meeting of the Association for Computational Linguistics, Proceedings of the Conference

Hardy (2018) Guided neural language generation for abstractive summarization using abstract meaning representation in Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, EMNLP 2018

Mabona A. (2019) Neural generative rhetorical structure parsing in EMNLP-IJCNLP 2019 - 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference

 
Description We proposed a way of improving incremental text generation in the context of summarization by directly incorporating words from the document being summarized. This was achieved by modifying the generation process of the sequence-to-sequence model, and resulted in improved summaries according to both automatic evaluation measures and human judgments of fluency.
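As an illustration of one simple way such a modification could work (our own simplified sketch, not necessarily the mechanism used in the project's released code), the decoder's output distribution can be biased towards words appearing in the source document at each generation step; the sketch assumes a PyTorch decoder producing a vector of vocabulary logits per step:

import torch

def bias_towards_source(logits, source_token_ids, bonus=2.0):
    # logits: 1-D tensor over the vocabulary for the current decoding step.
    # source_token_ids: vocabulary indices of words in the document being summarized.
    # Adding a constant bonus before softmax/argmax makes the decoder more
    # likely to reuse source words in the summary; bonus is a tunable assumption.
    biased = logits.clone()
    biased[source_token_ids] += bonus
    return biased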

Note that the grant moved with the PI to the University of Cambridge under code EP/R021643/2, so the final results will be reported there.
Exploitation Route This idea can be used in other tasks that are formulated as monolingual text transformations, e.g. text simplification, post-editing of machine translation, etc.
Sectors Digital/Communication/Information Technologies (including Software)

 
Title Model for improving summarization with source document predictions 
Description This code implements our proposal for improving the output of summarization with information from the original document. 
Type Of Material Improvements to research infrastructure 
Year Produced 2018 
Provided To Others? Yes  
Impact It achieved state-of-the-art results on a well-studied dataset. 
URL https://github.com/sheffieldnlp/AMR2Text-summ
 
Title Software implementing incremental text prediction for summarization with side information 
Description It allows the predictions of incremental models to be edited to take side information into account, improving their outputs. 
Type Of Technology Software 
Year Produced 2018 
Open Source License? Yes  
Impact Achieved state-of-the-art results on a well-known dataset. 
URL https://github.com/sheffieldnlp/AMR2Text-summ
 
Description Talk at Amazon Research Day in Cambridge 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Industry/Business
Results and Impact About 80 Amazon employees attended my talk which resulted in increased interactions and exploration of possible collaborations.
Year(s) Of Engagement Activity 2018
URL https://ard.amazon-ml.com/cambridge/
 
Description Talk at Technische Universität Darmstadt 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Gave a talk on imitation learning research supported by this grant. Audience reported improved understanding of imitation learning.
Year(s) Of Engagement Activity 2018
 
Description Talk at the Institute for Logic, Language and Computation, University of Amsterdam 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Gave a talk on imitation learning research supported by this grant.
Year(s) Of Engagement Activity 2018
 
Description Talk at the NLP group at the Department of Computer Science at the University of Copenhagen 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Gave a talk on imitation learning research supported by this grant. Audience reported improved understanding of imitation learning.
Year(s) Of Engagement Activity 2018