An Edinburgh Speech Production Facility

Lead Research Organisation: Queen Margaret University Edinburgh
Department Name: Clinical Audiology Speech & Lang Res Cen

Abstract

The proposal is for a facility designed to record and analyse the movements of the lips, tongue, and jaw during spoken dialogue. This facility will be the first of its kind in the UK, and will be useful for applications in speech recognition and speech synthesis, as well as for developing theories of the cognitive representations and processes involved in normal and impaired speech production. The first output of the facility will be a database of recorded dialogue that will be useful for researchers interested in the relationships between speech movement and acoustics (important for speech technology applications), as well as in the particular types of pronunciations that speakers use during spontaneous dialogue.
 
Description We created a publicly available corpus of speech recordings that includes synchronised articulatory and acoustic records of speech in dialogue. Our facility is available for further funded use; we offer calibration, sensor gluing, recording, and data post-processing services. We also commissioned the development of data analysis software, available through Articulate Instruments Ltd.

The DoubleTalk articulatory speech corpus includes synchronised audio and articulatory trajectories for 12 speakers of English. The corpus was collected at the Edinburgh Speech Production Facility (ESPF) using two synchronised Carstens AG500 electromagnetic articulometers. The first release of the corpus comprises orthographic transcriptions aligned at phrasal level to EMA and audio data for each of six mixed-dialect speaker pairs. It is available from the ESPF online archive (http://espf.ppls.ed.ac.uk/frontend.php/project/espf-doubletalk). A variety of tasks were used to elicit a wide range of speech styles, including monologue (a modified Comma Gets a Cure and spontaneous story-telling), structured spontaneous dialogue (Map Task and Diapix), a wordlist task, a memory-recall task, and a shadowing task.
To enable wider use of EMA data from the ESPF facility, Articulate Instruments Ltd produced an extra component of the AAA software specifically to handle EMA data. This commercially available software, designed for articulatory speech analysis, therefore lets users already familiar with other articulatory data access and analyse EMA data without having to learn new software. In addition, the company's contribution to the design of the facility enables synchronised collection of EPG (electropalatography) data.
Exploitation Route http://www.speech-graphics.com/
Speech Graphics have used EMA data to underpin their acoustically driven lip-synched facial animation.
Sectors Creative Economy, Digital/Communication/Information Technologies (including Software), Other

URL http://www.lel.ed.ac.uk/projects/ema/
 
Description http://www.speech-graphics.com/ Speech Graphics have used EMA data as part of the underpinnings of their lip-synch animation, used in gaming and other creative industries.
First Year Of Impact 2013
Sector Creative Economy, Digital/Communication/Information Technologies (including Software)
Impact Types Cultural, Economic