Neural oscillator network modelling of auditory stream segregation

Lead Research Organisation: UNIVERSITY OF EXETER
Department Name: Mathematics

Abstract

Imagine yourself at a dinner party. Five conversations are going on around the table, music hums in the background, cutlery rattles and glasses clink. Caught in a dull discussion about stock options, you're wishing you were with the people over your shoulder reminiscing about skiing in the Alps. No matter where you are, your brain is constantly buzzing in a world of sound. How are we able to choose what to tune in to?

Although individual sound sources (e.g. a voice, a humming fridge) change over time, some features remain constant, like where they're coming from, whether they are high or low in pitch, and how often they repeat. In a situation like the one described above we have some control over what we hear: we can focus on an individual voice or conversation whilst pushing other sounds into the background. There is evidence that the brain uses two strategies to differentiate sound sources:

1) by features (e.g. sources are high or low pitch, sources come from different locations), and

2) by timing and rhythm (people talk at different speeds and start/stop talking at different times).

The brain's sound processing pathways separate sounds by their features, resulting in, for example, different groups of neurons responding to high and low pitch sounds. Ongoing brain rhythms (oscillations in brain activity) can synchronise with specific sound sources in order to track them. Together, this allows the brain to follow specific sound sources, with groups of neurons tracking features and synchronisation of their activity tracking those features over time.

Dynamical systems is a field of mathematics describing processes (such as brain rhythms) that change over time. It has helped us understand oscillations and synchronisation in settings ranging across biology, physics and chemistry to social networks and technological applications. Oscillations in the activity of neurons are important for many cognitive functions like making decisions, forming memories and enjoying music. The research here focuses on using mathematical theory about oscillations to develop a computer model of the brain regions involved in processing and segregating sound sources. The current state of the art in computer modelling has focused on the first strategy (separating by features). Here, the focus will be on integrating this with the second strategy, paying specific attention to how the brain uses timing and rhythm to segregate sounds. This is based on the hypothesis that synchronisation of oscillations is crucial for tracking sounds over time. Indeed, when we listen to repetitive sounds (like a simple musical rhythm) neurons in both auditory and motor regions of the brain start to fire in time to the beat, even when we aren't moving ourselves. The research aims to reveal how these two regions working together enable us to simultaneously keep track of both the sound source we're focusing on (stock options) and the one we'd really like to be paying attention to (skiing in the Alps).
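As a minimal illustration of the synchronisation mathematics involved (a generic textbook sketch, not the project's model: the Kuramoto equations, the coupling strength K and all parameter values below are illustrative choices), two coupled oscillators with different natural frequencies will phase-lock once their coupling is strong enough, much as a brain rhythm can entrain to a rhythmic sound source:

```python
import numpy as np

def kuramoto(omega, K, theta0, dt=0.01, steps=5000):
    """Euler-integrate the Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    theta = np.array(theta0, dtype=float)
    n = len(theta)
    for _ in range(steps):
        # Pairwise coupling matrix M[i, j] = sin(theta_j - theta_i), summed over j.
        coupling = (K / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
        theta = theta + dt * (omega + coupling)
    return theta

# Two oscillators with natural frequencies 1.0 and 1.2 rad/s: uncoupled,
# their phases drift apart; with K = 1.0 the phases lock.
theta = kuramoto(omega=np.array([1.0, 1.2]), K=1.0, theta0=[0.0, 2.0])

# Order parameter r = |mean(exp(i*theta))| approaches 1 when synchronised.
r = abs(np.exp(1j * theta).mean())
```

For two oscillators, locking occurs once K exceeds the difference in natural frequencies; the locked phase difference satisfies sin(delta) = delta_omega / K.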

Planned Impact

The potential beneficiaries of this research are:

- Clinicians working on treatment of hearing disorders and their patients
- Companies and industry developing hearing technology
- Academics researching hearing disorders
- General public and students with an interest in brain research

The ability to segregate auditory objects is crucial for normal day-to-day communication (e.g. focusing on one voice in a noisy place), an ability that is compromised by hearing impairment and auditory system dysfunction. The proposed research will help to understand how auditory streaming functions with normal hearing, a necessary step towards building a complete picture of the link between hearing impairment and compromised communication abilities. Therefore, future research stemming from this proposal could be used to improve treatment decisions and treatment strategies, resulting in better outcomes for patients. Through links with a local hearing clinic, project staff will also seek to understand the experiences of patients with hearing disorders and discuss the research with them. Project staff will attend and present findings at an industry conference attended by hearing healthcare professionals in order to identify opportunities for the research to be exploited in these ways.

The research could also be useful for companies that develop hearing technology such as cochlear implants and hearing aids. Again, a better understanding of auditory streaming will be informative for future improvements to the design and efficiency of such technology. Project staff will meet and discuss the research with a local company in order to better understand how the research could be applied in the hearing technology sector. The outcome of these discussions will feed into interactions with healthcare professionals and the dissemination of research findings at a hearing technology industry conference.

Researchers working on understanding hearing loss and auditory dysfunction will benefit from the results of this research. A clearer understanding of both where and how auditory streams are encoded in the brain will feed into research in these areas. The principal investigator will benefit in this way, as they aim to develop future projects geared toward understanding how damage or dysfunction in the auditory system translates to difficulties with segregating auditory objects and communication. To achieve these aims the research will be disseminated in academic journals and at international conferences that are read and attended by a broad cross section of auditory scientists.

Members of the public and students with an interest in science will benefit from this project. Its staff will participate in public science events, giving interactive presentations on research both directly from the project and from related areas. The aim is to engage the public, and get them thinking about perception as an active process and how what you hear depends on computations in the brain, not just the sound waves arriving at the ear and cochlea. Many young adults who may be thinking about going to university or a career in science do not realise the breadth of opportunities available and are not aware of connections between subjects like mathematics and neuroscience. With the aim of inspiring the next generation of scientists and researchers, project staff will give talks on these topics at schools.
 
Description This programme of research is still active. Some of the key findings produced so far are discussed here.

Difficulty in separating out single voices in a busy room is a common frustration. Our ability to separate sound sources can be explored in an idealised setting - the auditory streaming paradigm - where sequences of tones can be separated by features like pitch, rhythmic pattern or spatial location.

One strand of this research work investigated how slow changes to the pitch of sound sources affect how they are perceived. A simulation using a model developed in earlier research predicted that periodic variation of the pitch would lead to regular switches between grouping sound sources together and segregating them into separate streams. We confirmed this prediction in experiments. Based on the variation of pitch, we can reliably predict what the person listening will perceive at a given point in time. These findings are important as they allow similar experiments to be carried out without the participants needing to continually report what they perceive.

Another strand of research investigated how a simulated network of neurons in the brain's auditory system can represent grouped or segregated sound sources. We identified a simple neural circuit that could achieve this, based on the assumption that information about different sources is not represented multiple times. These findings will help explain brain activity recorded in experiments by other research groups.
Exploitation Route The outcomes of this research provide valuable information for researchers exploring how the separation of sound sources is achieved by the brain. Our findings, which predict what people perceive as the properties of a sound source change, will allow for improvements to brain imaging experiments, where button presses used to report perception can interfere significantly with the recorded data. Our findings about the neural networks encoding sounds provide valuable context for interpreting the data recorded in brain imaging experiments investigating how multiple sound sources are perceived.
Sectors Digital/Communication/Information Technologies (including Software), Healthcare, Pharmaceuticals and Medical Biotechnology

 
Description CoA extension to grant EP/R03124X/1
Amount £40,000 (GBP)
Funding ID 115251R 
Organisation United Kingdom Research and Innovation 
Sector Public
Country United Kingdom
Start 04/2021 
End 08/2021
 
Description Using touch to enhance auditory perception
Amount £560,000 (GBP)
Funding ID EP/W032422/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 09/2022 
End 09/2025
 
Title Auditory streaming emerges from fast excitation and slow delayed inhibition 
Description The model developed in Ferrario, A., & Rankin, J. (2021). Auditory streaming emerges from fast excitation and slow delayed inhibition. The Journal of Mathematical Neuroscience, 11(1), 1-32, is available for use by other researchers.
Type Of Material Computer model/algorithm 
Year Produced 2021 
Provided To Others? Yes  
Impact Development of further models and modelling studies - Cascades of periodic solutions in a neural circuit with delays and slow-fast dynamics, Andrea Ferrario, James Rankin, Frontiers in Applied Mathematics and Statistics, 7 (2021) - Thesis chapter in Farzaneh Darki's PhD thesis 
URL https://github.com/ferrarioa5/ferrario_rankin2021
 
Title Methods to assess binocular rivalry with periodic stimuli 
Description Model code to reproduce results from "Methods to assess binocular rivalry with periodic stimuli" published in the Journal of Mathematical Neuroscience. This approach is adaptable to study perceptual bistability with time-varying inputs as in e.g. the auditory streaming paradigm. 
Type Of Material Computer model/algorithm 
Year Produced 2020 
Provided To Others? Yes  
Impact The approach developed, whilst relatively new, has been used within the research group by the New Investigator Award funded postdoc to analyze a model of auditory streaming. 
URL https://github.com/farzaneh-darki/Darki2020_methods
 
Title Neuromechanistic modelling of auditory streaming 
Description Code and experimental data to accompany the paper Byrne, Rinzel and Rankin (2019) Auditory streaming and bistability paradigm extended to a dynamic environment 
Type Of Material Computer model/algorithm 
Year Produced 2019 
Provided To Others? Yes  
Impact This model has been used by other researchers to explain how cochlear implant users are able to separate sound sources: https://www.frontiersin.org/articles/10.3389/fncom.2019.00042/full 
URL https://github.com/james-rankin/auditory-streaming
 
Description Evidence of Adaptation in Neural Decoding of Perceptually Bistable Sounds from MEG Data 
Organisation University College London
Department Ear Institute
Country United Kingdom 
Sector Academic/University 
PI Contribution The PI and intern Jakub Onysk performed a new set of data analyses on an existing data set provided by collaborator Dr Alex Billig.
Collaborator Contribution Partner Dr Alex Billig provided anonymised brain imaging data from 24 subjects, representing 24 hours of recording from 306 MEG sensors. Dr Billig further provided guidance on data analysis methods for MEG data including access to his data analysis pipeline. Dr Billig provided regular input to help steer the research over the course of the internship.
Impact The intern Jakub Onysk produced program code to perform a new set of analyses on the provided data set. A report summarising the findings of the project will provide the basis for future research using the data set.
Start Year 2019
 
Description Understanding audio-tactile interactions to further develop hearing-assistive devices 
Organisation University of Southampton
Country United Kingdom 
Sector Academic/University 
PI Contribution Proposal of experiments to better understand how tactile stimuli can be used to improve human performance in separating sound sources. Collection of preliminary data to demonstrate the potential of the approach, which relies on basic science investigations to inform engineering development. Successful funding application to support this work for several years.
Collaborator Contribution Collaborators at the University of Southampton are developing a hearing-assistive device. Partner PI Mark Fletcher helped to shape the design of experiments that form part of the funding application.
Impact Healthcare Technologies Investigator Led standard grant (EP/W032422/1) "Using touch to enhance auditory perception". Positive funding decision received March 2022, start date: 1/09/2022
Start Year 2021