Generation Challenges 2010

Lead Research Organisation: University of Brighton
Department Name: Sch of Computing, Engineering & Maths


Natural Language Generation (NLG) is the subfield of Natural Language Processing (NLP) that is concerned with developing computational methods for automatically producing language. Possible applications of NLG technology include economising text-production processes and improving access to non-verbal information, as well as machine translation, text summarisation and human-computer dialogue. NLG is a field with vast but so far largely unrealised potential. In other NLP fields, shared data, shared core technology and organised shared-task competitions have been seen to galvanise research communities and lead to rapid technological progress.

Among NLG researchers, growing interest in comparative forms of evaluation led to a discussion which has now resulted in the first three shared-task evaluation competitions in NLG: in 2007, we organised a pilot NLG shared-task evaluation event, the Attribute Selection for Generating Referring Expressions (ASGRE'07) Challenge; in 2008, we organised the Referring Expression Generation Challenge (REG'08), which saw participation double and was co-located with the leading international conference in NLG (INLG'08 in Columbus, Ohio); and this year, we organised the first Generation Challenges event, which saw the number of tasks, organising teams and participants increase further and was co-located with the leading European conference in NLG (ENLG'09 in Athens). ASGRE'07, REG'08 and Generation Challenges 2009 have met with enthusiasm among NLG researchers, have resulted in the creation of new technology and data resources, and have drawn new researchers into the field. In order to continue and increase these beneficial effects, we are organising a fourth NLG evaluation event, Generation Challenges 2010, once again in conjunction with the leading NLG conference (INLG'10).
Generation Challenges 2010 comprises two tasks that have grown out of tasks from the previous year, two new tasks, and three working groups on new tasks currently under development which are expected to run in 2011. In order to put the Generation Challenges initiative on a more permanent and representative footing, we have recently founded the Generation Challenges Steering Committee, whose members include many of the leading researchers in the NLG field. Our aim for 2010 is to put mechanisms in place that ensure the long-term continuation of the Generation Challenges initiative and the inclusion of growing numbers of researchers organising and running their own tasks. Unlike leading evaluation initiatives in Machine Translation and Document Summarisation, which are funded and directed by US government agencies, ASGRE'07, REG'08, GenChal'09 and now Generation Challenges 2010 are community-based, UK-led evaluation initiatives. This proposal requests funding for umbrella organisation and administrative activities, as well as data preparation and evaluation experiments for one of the shared tasks in Generation Challenges 2010, to enable us to carry out the full range of planned activities and to keep this initiative community-based and UK-led.

Planned Impact

Natural language generation (NLG) is the branch of language processing that maps non-language representations of information to language that expresses the information. Apart from being a subtask in Machine Translation (MT), document summarisation and human-computer dialogue, NLG can help economise text-production processes and make information available in verbal form that would otherwise be inaccessible or more time-consuming to process. The number of potential applications of data-to-text NLG technology is vast, yet computational methods for generating language lag behind computational methods for analysing language in several ways, most obviously in that they have rarely been used commercially. Until recently, NLG was characterised by a lack of comparative evaluation, and hence of consolidation of research results, and by isolation from the rest of Natural Language Processing (NLP), where comparative evaluation has long been the norm. It was moreover shrinking fast as a field (MT, summarisation and, to some extent, dialogue having gone their separate ways) and lacked the kind of funding and participation that Natural Language Understanding (NLU) fields have attracted. In order ultimately to fulfil its great potential (including commercial applications), NLG needs to achieve substantial technological progress; to achieve that, it needs to build more critical mass (in terms of numbers of people, projects and events), to establish comparative forms of evaluation as standard so as to consolidate and incrementally improve research results, and to build bridges to neighbouring areas of research where language is generated, in order to benefit from the technological advances and resources available there. These very considerable impacts have been our overarching goals in organising the Generation Challenges initiative since 2007.
Description

1. We created a benchmark data set for the grand challenge research competitions that formed part of Generation Challenges 2010.

2. We produced and evaluated the first sets of directly comparable results for the application tasks in the competitions.

3. In particular, we investigated different interfaces and methods for the evaluation of automatically generated text by human assessors, producing results and reports comparing the alternatives for the first time.
Exploitation Route

As with all our grand challenge competitions, the benchmark data set, evaluation methods and associated software are freely available for future use in research.

The papers and data from GenChal'10 are cited and used frequently in the NLG community.
Sectors

Digital/Communication/Information Technologies (including Software)