Environment and Listener Optimised Speech Processing for Hearing Enhancement in Real Situations (ELO-SPHERES)

Lead Research Organisation: Imperial College London
Department Name: Electrical and Electronic Engineering

 
Title Materials used in "Measuring audio-visual speech intelligibility under dynamic listening conditions using virtual reality" 
Description The materials in this record are the audio and video files, together with various configuration files, used by the "SEAT" software in the study described in Moore, Green, Brookes & Naylor (2022), "Measuring audio-visual speech intelligibility under dynamic listening conditions using virtual reality". They are shared in this form so that the experiment may be reproduced; for any other use, please contact the authors to obtain the original database(s) from which these materials are derived. The materials were created to be compatible with v0.3 of SEAT, which is available from GitHub. Note that the materials must be placed at C:\seat_experiments\cafe_AV (a setup sketch follows this record). 
Type Of Art Film/Video/Animation 
Year Produced 2022 
URL https://zenodo.org/record/6889159
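The record states only the required location, so the following Python sketch illustrates one way to unpack a downloaded copy of the Zenodo archive into that folder. The archive filename "cafe_AV.zip" is an assumption and not part of the record.

import zipfile
from pathlib import Path

ARCHIVE = Path("cafe_AV.zip")                    # assumed name of the downloaded Zenodo archive
TARGET = Path(r"C:\seat_experiments\cafe_AV")    # location required by SEAT v0.3 (from the record)

TARGET.mkdir(parents=True, exist_ok=True)        # create the folder tree if it is missing
with zipfile.ZipFile(ARCHIVE) as zf:
    zf.extractall(TARGET)                        # place the audio, video and configuration files
    print(f"Extracted {len(zf.namelist())} files to {TARGET}")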
 
Description The intelligibility of binaural speech can be estimated using a machine learning model with an accuracy comparable to, and in some cases better than, the classical metric.
Exploitation Route Our results have been disseminated and, together with the software we have shared, will enable others to estimate measures of binaural intelligibility more accurately than before (an illustrative sketch follows this entry).
Sectors Communities and Social Services/Policy

Digital/Communication/Information Technologies (including Software)

Electronics

Healthcare

 
Description A subsequent collaboration was established between Imperial College and Meta for research on hearing devices linked to augmented reality. The principal contact at Meta is Thomas Lunner.
First Year Of Impact 2022
Sector Digital/Communication/Information Technologies (including Software)
Impact Types Societal

Economic

 
Title HearVR - Virtual Reality Video database for audiovisual speech intelligibility assessment 
Description British English matrix sentences were recorded in our anechoic chamber against a green screen, using a 360° camera and a high-quality condenser microphone. These sentences contain 5 slots with 10 choices in each slot. Different sets of 200 sentences were recorded by 5 male and 5 female talkers sitting at a table. The individual talkers have been cropped from the videos, and the green screen and table replaced by a transparent background, to allow compositing of speakers into new 360° scenes. The individual matrix sentences have been extracted as 11 s videos together with level-normalised monophonic audio (a normalisation sketch follows this entry). We have developed scripts that build multi-talker videos from these elements in a format that can be played on VR headsets such as the Oculus Quest 2. We have also developed a Unity application for the headsets, which plays the 360° composite videos and creates spatialised sound sources for the talkers, together with background noise delivered from multiple virtual loudspeakers. 
Type Of Material Database/Collection of data 
Year Produced 2023 
Provided To Others? Yes  
Impact This database facilitates experiments into audiovisual listening behaviour with multiple speakers in realistic environments involving spatialised audio. 
URL https://speechandhearing.net/hearvr/
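As a minimal sketch of the level normalisation mentioned above, the following Python snippet scales a mono sentence recording to a fixed RMS level. The filenames and the -26 dBFS target are assumptions, not values taken from the database.

import numpy as np
import soundfile as sf

TARGET_DBFS = -26.0                                  # assumed reference RMS level

def normalise(in_path: str, out_path: str) -> None:
    audio, fs = sf.read(in_path)                     # mono sentence recording
    rms = np.sqrt(np.mean(audio ** 2))
    gain = 10 ** (TARGET_DBFS / 20) / max(rms, 1e-12)
    sf.write(out_path, audio * gain, fs)

normalise("talker01_sentence001.wav", "talker01_sentence001_norm.wav")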
 
Title Materials used in "Measuring audio-visual speech intelligibility under dynamic listening conditions using virtual reality" 
Description The materials in this record are the audio and video files, together with various configuration files, used by the "SEAT" software in the study described in Moore, Green, Brookes & Naylor (2022), "Measuring audio-visual speech intelligibility under dynamic listening conditions using virtual reality". They are shared in this form so that the experiment may be reproduced; for any other use, please contact the authors to obtain the original database(s) from which these materials are derived. The materials were created to be compatible with v0.3 of SEAT, which is available from GitHub. Note that the materials must be placed at C:\seat_experiments\cafe_AV. 
Type Of Material Database/Collection of data 
Year Produced 2022 
Provided To Others? Yes  
Impact These data were used in the publication Moore, Green, Brookes & Naylor (2022), "Measuring audio-visual speech intelligibility under dynamic listening conditions using virtual reality". 
URL https://zenodo.org/record/6889160
 
Title eBrIRD - ELOSPHERES binaural room impulse response database 
Description The ELOSPHERES binaural room impulse response database (eBrIRD) is a resource for generating audio for binaural hearing-aid (HA) experiments. It allows the performance of new and existing audio processing algorithms to be tested in idealised as well as real-life auditory scenes in different environments: an anechoic chamber, a restaurant, a kitchen and a car cabin. The database consists of a collection of binaural room impulse responses (BRIRs) measured with six microphones: two microphones represent the listener's eardrums, and four microphones are located at the front and back of two behind-the-ear hearing aids placed over the listener's pinnae. The database allows simulation of head movement in the transverse plane (a usage sketch follows this entry). [Updated July 2021 to include car cabin BRIRs.] 
Type Of Material Database/Collection of data 
Year Produced 2021 
Provided To Others? Yes  
Impact Materials to use for construction of listening tests exploiting spatial audio. 
URL https://www.phon.ucl.ac.uk/resource/ebrird/
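As a usage sketch, the following Python snippet convolves dry speech with one six-channel eBrIRD response to simulate the eardrum and hearing-aid microphone signals. The filenames and channel ordering are assumptions; consult the database documentation for the actual layout.

import numpy as np
import soundfile as sf
from scipy.signal import fftconvolve

speech, fs = sf.read("dry_speech.wav")               # single-channel anechoic speech (assumed file)
brir, fs_b = sf.read("ebrird_restaurant_0deg.wav")   # assumed (N, 6) BRIR: 2 eardrums + 4 HA microphones
assert fs == fs_b, "sample rates must match"

# Convolve the dry speech with each of the six impulse responses.
mics = np.stack([fftconvolve(speech, brir[:, ch]) for ch in range(brir.shape[1])], axis=1)
sf.write("simulated_scene.wav", mics, fs)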
 
Description Clarity Challenges for Machine Learning for Hearing Devices 
Organisation University of Nottingham
Department School of Medicine
Country United Kingdom 
Sector Academic/University 
PI Contribution The ELO-SPHERES project team are taking part in the Clarity Enhancement and Prediction Challenges.
Collaborator Contribution The Clarity challenges are a series of machine learning challenges to enhance hearing-aid signal processing and to better predict how people perceive speech-in-noise. The ELO-SPHERES project has contributed an enhancement system to the first enhancement challenge and is taking part in future challenges (an illustrative sketch of MVDR beamformer weights is given after this record).
Impact Moore, A. H., Hafezi, S., Vos, R., Brookes, M., Naylor, P. A., Huckvale, M., Rosen, S., ... Hilkhuysen, G. (2021). A binaural MVDR beamformer for the 2021 Clarity Enhancement Challenge: ELO-SPHERES consortium system description.
Start Year 2021
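As background to the cited system description, the following Python sketch shows the standard narrowband MVDR weight computation, w = R^-1 d / (d^H R^-1 d), for a single frequency bin. The covariance matrix and steering vector below are synthetic placeholders; this is not the ELO-SPHERES implementation itself.

import numpy as np

def mvdr_weights(R: np.ndarray, d: np.ndarray) -> np.ndarray:
    # MVDR weights for one frequency bin:
    #   R : (M, M) noise covariance across the M microphones
    #   d : (M,) steering vector (relative transfer function) towards the target
    Rinv_d = np.linalg.solve(R, d)
    return Rinv_d / (d.conj() @ Rinv_d)

# Toy example with M = 4 microphones (front/back of two behind-the-ear devices).
rng = np.random.default_rng(0)
M = 4
A = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
R = A @ A.conj().T + np.eye(M)                       # Hermitian positive-definite noise covariance
d = np.exp(-2j * np.pi * rng.random(M))              # synthetic steering vector
w = mvdr_weights(R, d)
print("Response towards the target:", np.round(w.conj() @ d, 3))   # distortionless constraint, ~1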
 
Description Clarity Challenges for Machine Learning for Hearing Devices 
Organisation University of Salford
Country United Kingdom 
Sector Academic/University 
PI Contribution The ELO-SPHERES project team are taking part in the Clarity Enhancement and Prediction Challenges.
Collaborator Contribution The Clarity challenges are a series of machine learning challenges to enhance hearing-aid signal processing and to better predict how people perceive speech-in-noise. The ELO-SPHERES project has contributed an enhancement system to the first enhancement challenge and is taking part in future challenges.
Impact Moore, A. H., Hafezi, S., Vos, R., Brookes, M., Naylor, P. A., Huckvale, M., Rosen, S., ... Hilkhuysen, G. (2021). A binaural MVDR beamformer for the 2021 Clarity Enhancement Challenge: ELO-SPHERES consortium system description.
Start Year 2021
 
Description Clarity Challenges for Machine Learning for Hearing Devices 
Organisation University of Sheffield
Department Department of Computer Science
Country United Kingdom 
Sector Academic/University 
PI Contribution The ELO-SPHERES project team are taking part in the Clarity Enhancement and Prediction Challenges.
Collaborator Contribution The Clarity challenges are a series of machine learning challenges to enhance hearing-aid signal processing and to better predict how people perceive speech-in-noise. The ELO-SPHERES project has contributed an enhancement system to the first enhancement challenge and is taking part in future challenges.
Impact Moore, A. H., Hafezi, S., Vos, R., Brookes, M., Naylor, P. A., Huckvale, M., Rosen, S., ... Hilkhuysen, G. (2021). A binaural MVDR beamformer for the 2021 Clarity Enhancement Challenge: ELO-SPHERES consortium system description.
Start Year 2021
 
Description Thomas Simm Littler Lecture to the British Society of Audiology (BSA) 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact In October 2020, Stuart Rosen delivered the Thomas Simm Littler Lecture to the British Society of Audiology (BSA). The associated lectureship is a biennial prize awarded 'in recognition of a sustained academic contribution to hearing science and audiology. This is the BSA's most prestigious award and consists of a certificate and honorarium ...'.

Title: 'How do background sounds interfere with speech?'
Year(s) Of Engagement Activity 2020