Audiovisual integration of speech in noise: the role of behavioural and neural mechanisms

Lead Research Organisation: University College London
Department Name: Speech, Hearing and Phonetic Sciences

Abstract

Communication often occurs in noise. When the audio signal is degraded, visual cues from the speaker's face and mouth movements help to improve speech perception, especially for listeners with hearing loss. However, the neural mechanisms supporting audiovisual integration of speech in noise are largely unknown. In particular, the causal relationship between areas near the auditory cortex and audiovisual integration of speech in noise, in individuals with and without hearing loss, has not yet been established. The current project aims to address this gap and clarify the behavioural and neural mechanisms supporting audiovisual speech perception in noise. In Experiment 1, participants' gaze will be monitored with an eye-tracker while they listen to audiovisual speech, with the aim of establishing the relationship between visual attention, individual cognitive abilities, and speech perception. Experiment 2 will have a similar design, except that participants will repeat the task after receiving Transcranial Magnetic Stimulation (TMS) to an area near the auditory cortex, in order to uncover the supporting neural mechanisms. Experiment 3 repeats Experiments 1 and 2 with transcranial Direct Current Stimulation (tDCS) and uses machine learning techniques to quantify the relationship between visual attention, individual cognitive abilities, and tDCS responsiveness.
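
As a rough illustration of the machine-learning step described for Experiment 3, the Python snippet below shows one way cross-validated regression could quantify how well visual-attention and cognitive measures jointly predict tDCS responsiveness. This is a minimal sketch only: the variable names (dwell_mouth, working_mem), the synthetic data, and the choice of ridge regression from scikit-learn are illustrative assumptions, not the project's actual analysis pipeline.

    # Hypothetical sketch of the Experiment 3 analysis; all names and data are
    # illustrative assumptions, not the project's actual code.
    import numpy as np
    from sklearn.linear_model import Ridge
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 60  # assumed number of participants

    # Assumed per-participant predictors:
    #   dwell_mouth - proportion of gaze time on the speaker's mouth (eye-tracker)
    #   working_mem - standardised working-memory score (cognitive battery)
    dwell_mouth = rng.uniform(0.2, 0.9, n)
    working_mem = rng.normal(0.0, 1.0, n)
    X = np.column_stack([dwell_mouth, working_mem])

    # Assumed outcome: change in speech-in-noise accuracy after tDCS,
    # simulated here with an arbitrary linear relationship plus noise.
    y = 0.5 * dwell_mouth + 0.3 * working_mem + rng.normal(0.0, 0.1, n)

    # Cross-validated ridge regression estimates how well the predictors
    # jointly account for individual differences in tDCS responsiveness.
    model = Ridge(alpha=1.0)
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"Mean cross-validated R^2: {scores.mean():.2f}")

A usage note: with real data, X would be built from measured eye-tracking and cognitive-battery scores per participant, and the cross-validated R^2 would indicate how much of the variance in tDCS responsiveness the behavioural measures explain.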

Studentship Projects

Project Reference   Relationship   Related To     Start        End          Student Name
ES/P000592/1                                      01/10/2017   30/09/2027
2716458             Studentship    ES/P000592/1   01/11/2022   30/09/2025   Rongru Chen