Evaluating probabilistic inferential models of learnt sound representations in auditory cortex

Lead Research Organisation: UNIVERSITY COLLEGE LONDON
Department Name: Gatsby Computational Neuroscience Unit

Abstract

Humans, animals, and some artificial intelligence (AI) systems can all build internal representations of their sensory environments that guide and inform their actions. From both evolutionary and engineering standpoints, good representations are those that facilitate flexible and adaptive behavioural outcomes. Training of AI systems often involves providing feedback about outcomes (reinforcement or supervised learning). However, direct feedback about behavioural outcomes is rare in nature. Thus, for animals at least, good internal representations may predominantly be shaped by unsupervised learning from statistical regularities in sensory input. Indeed, many experiments have shown that neural representations and behaviour in animals can be changed by passive exposure to altered sensory environments, especially during early or adolescent development. It is very likely that data-efficient learning in AI systems will also ultimately depend on effective unsupervised learning algorithms.

Our goal in this project is to understand the computational principles underlying unsupervised learning of sensory representations in biological systems, and how those principles relate to recent advances in unsupervised learning algorithms for AI systems. We will apply state-of-the-art unsupervised inferential approaches to learn probabilistic models of acoustic environments, and evaluate how faithfully those models reproduce neural activity recorded in the auditory cortex of animals raised in normal and in altered acoustic environments. Understanding the statistical principles that organise biological perception is likely to lead to better representational learning in AI systems, without the need for reinforcement or supervision. Conversely, algorithms for efficient, flexible representational learning explored in AI systems will help to elucidate the computational principles governing learning in biological systems.

Technical Summary

What are the computational principles that underlie the formation of perceptual representations? Animals and some artificial agents build internal representations of their sensory environments, which they use to guide and inform cognition and action. From both evolutionary and engineering standpoints, good representations are those that facilitate flexible and adaptive behaviour, but direct feedback about actions in the form of reinforcement or supervision is rare in nature. Thus, for animals at least, good internal representations may predominantly be shaped by unsupervised learning based on statistical regularities in sensory input. Indeed, many experiments reveal changes in both behaviour and neural representations with passive exposure to altered sensory statistics, especially during early development. It is very likely that data-efficient learning in artificial agents will also ultimately depend on developing effective unsupervised algorithms.

We will apply three state-of-the-art unsupervised inferential approaches (structured variational autoencoders, contrastive predictive coding, and recognition-parametrised models) to learn models of acoustic environments. The outputs of these models on probe sounds will be evaluated against auditory receptive field models and novel Neuropixels high-density multielectrode recordings of responses to naturalistic sounds from auditory cortical areas. We will explore changes in representation that are induced by exposure to modified sound ensembles during development, using the inferential models to design synthetic sounds that should drive maximal representational change, and then using the resulting changes in cortical representation to assess the computational similarities between the biological and artificial networks.
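
As a rough illustration of one of the three approaches named above, the sketch below shows the InfoNCE objective at the heart of contrastive predictive coding: a context-based prediction of a future latent frame is scored against the encoding of the true future frame (the positive) and against the encodings of other frames in the batch (the negatives). This is a minimal sketch under stated assumptions; the PyTorch framing, the tensor shapes, and the omission of the encoder and context networks are illustrative choices, not the project's implementation.

    # Minimal sketch of the InfoNCE objective used in contrastive predictive
    # coding (encoder and autoregressive context networks omitted).
    import torch
    import torch.nn.functional as F

    def info_nce_loss(predicted_future, encoded_future):
        # predicted_future: (batch, dim) predictions of future latent frames
        #                   made from the context network's output.
        # encoded_future:   (batch, dim) encoder outputs for the true future frames.
        # Each prediction should match its own future frame (the positive, on the
        # diagonal of the similarity matrix) rather than the other frames in the batch.
        logits = predicted_future @ encoded_future.t()                  # (batch, batch)
        targets = torch.arange(logits.shape[0], device=logits.device)  # diagonal indices
        return F.cross_entropy(logits, targets)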

Understanding the statistical principles that organise biological perception is likely to lead to better representational learning in AI, without the need for labelled or augmented data sets.

Publications

 
Description We have made significant advances in the algorithmic development of a new unsupervised machine learning method, called Recognition Parametrised Modelling, and have compared its performance with two more standard machine learning algorithms (Variational Auto-Encoding and Predictive Coding). Thus far, the algorithmic development and the analysis of unsupervised learning performance have used large databases of natural sounds together with auditory cortex recordings from both mice and ferrets. We have submitted a paper on the Recognition Parametrised Modelling algorithm to the 2025 International Conference on Machine Learning; this paper is currently under review. We also have a manuscript in preparation comparing the optimised representations of natural sounds produced by the different unsupervised learning models and analysing the ability of these models to predict neural responses in the auditory cortex.
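
To make the last point concrete, one common way to quantify how well an unsupervised model's representation predicts cortical activity is a cross-validated ridge regression from the model's latent features to binned spike counts, scored by held-out correlation. The sketch below is a minimal, hypothetical version of such an analysis; the function name, data shapes, and use of scikit-learn are illustrative assumptions, not the methods of the manuscript in preparation.

    # Minimal sketch: score how well a model's latent representation of probe
    # sounds predicts recorded auditory cortex responses (hypothetical shapes).
    import numpy as np
    from sklearn.linear_model import RidgeCV
    from sklearn.model_selection import KFold

    def neural_prediction_score(latents, spike_counts, n_splits=5):
        # latents:      (n_timebins, n_latents) features from an unsupervised model.
        # spike_counts: (n_timebins, n_neurons) binned responses to the same sounds.
        # Returns the mean held-out correlation across neurons and folds.
        scores = []
        for train_idx, test_idx in KFold(n_splits=n_splits).split(latents):
            reg = RidgeCV(alphas=np.logspace(-2, 4, 13))
            reg.fit(latents[train_idx], spike_counts[train_idx])
            pred = reg.predict(latents[test_idx])
            for n in range(spike_counts.shape[1]):
                scores.append(np.corrcoef(pred[:, n], spike_counts[test_idx, n])[0, 1])
        return float(np.nanmean(scores))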

Meanwhile, we have made progress in obtaining Neuropixels recordings from the auditory cortex of awake mice, substantially increasing the number of simultaneously recorded neurons we can use for estimating representations of sound in the brain. We have successfully recorded from hundreds of auditory cortex neurons in parallel using Neuropixels probes in awake mice listening to a variety of complex sounds. We have also secured permission to make prolonged sound recordings in mouse housing facilities at UCL, to obtain more accurate measurements of the sound environments experienced by laboratory mice. Such recordings turn out to be rare in public databases and will be a significant output of the project in themselves, with potential value for improving animal welfare.
Exploitation Route Our comparison of sound representations learned from natural sounds using three different unsupervised learning algorithms will itself be useful for understanding the strengths and weaknesses of different strategies for efficient processing of sensory stimuli, for example in robotics. Our additional comparisons to neural representations of sound in the auditory brain will help to identify unsupervised learning algorithms that most closely resemble biological mechanisms of statistical learning, and will suggest further refinements to those algorithms that might improve artificial intelligence. Finally, our analysis of natural sound environments for laboratory mice will be useful not only for understanding how auditory cortical representations develop in mice, but also for identifying features of the laboratory sound environment that might be modified to improve animal welfare.
Sectors Digital/Communication/Information Technologies (including Software), Environment, Healthcare

 
Description Collaboration with Dr Audra Ames, Hubbs-Sea World Research Institute and Oceanografic Valencia 
Organisation Hubbs-Sea World Research Institute
Country United States 
Sector Charity/Non Profit 
PI Contribution We have established a collaboration with Dr Ames to study the structure of beluga whale vocalisations using unsupervised machine learning methods we have been developing and testing for analysis of other natural sounds.
Collaborator Contribution Dr Ames has extensive recordings of beluga whale vocalisations, including vocalisations of a calf and its mother recorded throughout the calf's early development. She made these recordings at Oceanografic Valencia, the largest oceanarium and marine animal research institute in Europe.
Impact Ongoing
Start Year 2024
 
Description Collaboration with Dr Charlotte Burn, Royal Veterinary College 
Organisation Royal Veterinary College (RVC)
Country United Kingdom 
Sector Academic/University 
PI Contribution We have started a collaboration this year with Dr Charlotte Burn, an expert in laboratory animal welfare. As part of the project we are investigating the statistics of the sound environment of laboratory mice, and Dr Burn is interested in the animal welfare implications of those statistics.
Collaborator Contribution So far the collaboration has focused on defining the aspects of the sound environment that are likely to be most relevant to behaviour and welfare of laboratory mice.
Impact None yet. Acquisition of a database of sound recordings is underway.
Start Year 2023