Brainwide neural populations supporting multisensory decision-making

Lead Research Organisation: University College London
Department Name: Institute of Ophthalmology

Abstract

To represent the external world and make appropriate decisions, brains need to combine information from different sensory modalities. This process is ubiquitous and vital, whether you are predator, prey, or a pedestrian trying to cross the street safely. But despite the prevalence of multisensory decisions in natural environments, we remain largely ignorant about where and how multimodal streams of information are combined in the brain. This is partly because multisensory behaviors are difficult to recreate robustly in a laboratory environment, and partly because multisensory decisions involve neurons dispersed across a wide set of brain regions, and it's technically challenging to record from all of them. However, with new developments in rodent behavior and recording technology, we can now tackle the neural mechanisms underlying multisensory decision-making with unprecedented power.

Our lab recently developed a multisensory behavioral task for mice, in which they turn a wheel to indicate whether a stimulus appeared on the left or right. The stimuli can be auditory, visual, or a combination of the two. We specifically designed this task to be compatible with new electrophysiology devices called Neuropixels probes, which we helped develop. These probes allow us to record from hundreds of neurons anywhere in the brain. By combining these two developments, we will answer longstanding questions about multisensory decision-making. We can also create the first brainwide map tracing auditory and visual signals as they propagate through the brain and evolve into the mouse's decision and action.

Our first objective focuses on the role of early sensory regions in multisensory decisions. Historically, certain regions of cortex were considered unisensory: each represented a single sensory modality, like vision or audition. However, several recent studies have claimed that these regions respond to multiple modalities, and that the first stages of audiovisual integration happen in these areas. We will record large neural populations simultaneously in two of these areas, the primary visual and auditory cortices, in behaving mice. With these recordings, we can conclusively test whether these regions contain multisensory information and, if so, whether this information guides the behavior of the mouse.

Our second objective focuses on how auditory and visual information is combined in multisensory regions. It's been proposed that multisensory brain regions mix visual and auditory information such that some neurons respond to only one sensory modality, some respond to neither, and a smaller fraction respond to both. However, this hypothesis was based on a region of the brain that we now suspect isn't required for audiovisual decisions. Thus, we plan to record simultaneously from frontal cortex, a region we know to be required for the behavior, and from earlier sensory regions. Through this experiment, we will understand how information is transformed between early and late stages of the multisensory decision-making pathway, and determine how auditory and visual signals are combined.

Our final objective is our most ambitious: to create a brainwide map of audiovisual signals while mice perform the behavioral task. This map will comprise ~100,000 neurons from regions across the brain, something that would have been unachievable just a few years ago. The map will be invaluable for two reasons. First, it will establish which regions of the brain have the potential to represent the mouse's choice, because any such region must contain multisensory neurons. Second, we expect to identify previously unexplored sites of multisensory integration, providing exciting new avenues of research.

Together, these experiments combine new developments in behavioral neuroscience and electrophysiology to gain unprecedented insights into the mechanisms underlying multisensory integration and decision-making.

Technical Summary

To select optimal actions, the brain typically needs to integrate information from different sensory modalities. The neural basis of this integration is only partially understood. The underlying neuronal populations can be distributed widely and sparsely, and understanding their activity requires brainwide recordings at neuronal scale during multisensory behavior.

These experiments are now possible thanks to two advances: a new audiovisual localization task for head-fixed mice, and next-generation Neuropixels probes.

Objective 1: Role of primary sensory cortices in multisensory decision-making. We will test the hypotheses that even primary sensory areas are multisensory, and that their interaction is relevant for behavior. Alternatively, they may share behavioral but not multisensory signals. We will simultaneously record from large populations in auditory and visual cortex, and then use analyses that characterize the sensory and behavioral signals in each population and the communication of signals between the two.
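The summary does not fix the analysis methods here, but one widely used way to characterize communication between two simultaneously recorded populations is reduced-rank regression, which asks how many activity dimensions of one population are needed to predict the other. The sketch below is purely illustrative, with hypothetical data shapes and names; it is not the project's actual pipeline.

```python
# Illustrative sketch: estimating a low-dimensional mapping between two
# simultaneously recorded populations with reduced-rank regression.
# All data, shapes, and variable names are hypothetical.
import numpy as np

def reduced_rank_regression(X, Y, rank):
    """Predict target-population activity Y (trials x neurons) from
    source-population activity X (trials x neurons) through a
    rank-constrained linear mapping."""
    # Full least-squares solution, then constrain its rank by
    # projecting the predictions onto their top principal components.
    B_ols, *_ = np.linalg.lstsq(X, Y, rcond=None)
    Y_hat = X @ B_ols
    _, _, Vt = np.linalg.svd(Y_hat, full_matrices=False)
    V = Vt[:rank].T                  # top `rank` directions of Y_hat
    return B_ols @ V @ V.T           # rank-constrained weight matrix

# Hypothetical session: 500 trials, 80 visual and 60 auditory neurons.
rng = np.random.default_rng(0)
X = rng.standard_normal((500, 80))
Y = 0.1 * (X @ rng.standard_normal((80, 60))) + rng.standard_normal((500, 60))
B = reduced_rank_regression(X, Y, rank=3)
print(B.shape)  # (80, 60): a rank-3 mapping from one population to the other
```

Comparing prediction performance across ranks indicates how many dimensions the two populations effectively share.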

Objective 2: Cortical transformation of audiovisual signals into decisions and actions. We have identified a region of frontal cortex that is required for audiovisual decision-making. Do neurons in this region carry audiovisual signals, and how are these signals combined and transformed from earlier unisensory signals? We will answer these questions by performing simultaneous recordings from early sensory regions and frontal cortex and analyzing the results using techniques similar to those in Objective 1.

Objective 3: Brainwide map of audiovisual decision-making. We will use enhanced Neuropixels probes to generate the first brainwide map of audiovisual processing, comprising ~100,000 neurons. This map will allow us to test longstanding hypotheses about the neural basis of multisensory behavior. It will reveal the flow of audiovisual information throughout the brain and may even reveal hitherto unknown regions of audiovisual integration.

Planned Impact

The proposed work is at the level of fundamental science and its main impact is in the generation of essential knowledge about how the brain works. But we expect the findings to have far-reaching implications on a longer timescale, particularly in the following domains.

(1) "Understanding and treating disease". Atypical audiovisual integration is a commonly observed in a number of mental disorders, including autism spectrum disorder and schizophrenia. Individuals with these conditions typically exhibit extended integration windows (i.e. combining auditory and visual stimuli from separate sources) and don't display the usual improvements in reaction time and reliability when responding to multisensory stimuli. These deficits extend to speech recognition, where sounds are typically combined with lip movements, indicating a possible link to difficulties with social communication. Mouse models of these conditions exist, but differences in audiovisual task performance or neural mechanisms have never been examined. The proposed project will create the tools needed to develop this area of research. By using the audiovisual task we have developed and recording neural activity in the brain regions we will identify, the multisensory integration properties of these disease models can be examined. This may help to understand atypical integration in human patients, and in the longer term, could lead to new treatment programs.

(2) "Development of artificial intelligence". Deep neural networks are being used for a vast array of projects, from analyzing health data to beating humans at computer games. However, human brains still handily beat the machines in multisensory integration and sensory processing, whether they are identifying objects in an image or recognizing speech. The visual system has been a major source of inspiration to image recognition research, with deep neural networks modelled after the hierarchical structure of visual cortex. But despite the prevalence and expertise of human multisensory integration, the neural mechanisms of this process haven't had a similarly significant impact on machine learning. This is primarily because we still don't understand how multisensory neural networks in the brain are structured. The proposed research will finally answer this question and will inspire future deep neural networks. The kind of computations we discover in the brain could one day be part of an autonomous vehicle or improving the text-to-speech capabilities of mobile phones.

(3) "Systems approaches to the biosciences." The project falls under Response Priority Mode "Systems approaches to the biosciences" because we are characterizing a complex biological process with unprecedented scope and accuracy. The system involves different hierarchical components, with jobs ranging from sensory processing of auditory and visual signals to integrated multisensory decisions. These components interact in complex ways, and will be studied during behavior (decisions, actions, arousal). Our final model of the system will capture the different components and their interactions so that it can be used to guide future studies of multisensory integration. To this end, we will build on strong collaborations with bioscientists and computational neuroscientists (e.g. the International Brain Laboratory, comprising more than 20 labs).

 
Description This project is establishing the first brainwide map of multisensory decision-making at the neuronal scale. Most labs, including ours, have hitherto investigated decisions based on a single modality. But in the natural world, sensory cues are rarely isolated: most events produce multimodal stimuli. By integrating these stimuli, the brain can improve its interpretation of the external world, and consequently the accuracy of its decisions. A prime example of this integration is audiovisual localization, i.e. the ability to combine auditory and visual signals to better localize an object in space. The neural processes underlying such audiovisual integration were largely unknown, particularly how and where streams of visual and auditory information are combined in the brain. To tackle this problem, we developed a new audiovisual decision-making task for head-fixed mice. By combining this behavior with advanced electrophysiology techniques, we have pursued three key objectives.
(1) We tested the longstanding hypothesis that primary auditory and visual cortices are early sites of audiovisual integration, by performing large-scale recordings in the two regions. Contrary to that hypothesis, this work indicates that primary visual cortex does not participate in audiovisual integration. Work in auditory cortex is in progress.
(2) We determined how sensory signals are transformed in the cortex into the appropriate motor signals, by performing large-scale recordings in frontal cortex. The results showed that this area is multisensory and critical for the audiovisual behavior. Further, they revealed that neurons in this area combine the cues additively, mirroring the additive combination seen in the behavior (sketched in the model after this list), and that inactivating the area causes behavioral deficits consistent with a causal role.
(3) Building on these cortical experiments, we are obtaining the first brainwide map of audiovisual integration at the neuronal level, by recording from thousands of neurons throughout the mouse brain during behavior. Current work is focused on two subcortical regions: the superior colliculus and the striatum. Completing these objectives will provide unique insights into the mechanisms of multisensory integration and, more generally, of decision-making.
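As an illustration of the additive combination described in point (2), consider a minimal logistic choice model in which visual and auditory evidence contribute independent terms to the decision variable. This sketch is hypothetical: the parameter names and values are invented for illustration, not the fitted model from these experiments.

```python
# Minimal sketch of an additive cue-combination model: the log-odds of
# choosing "right" is a sum of separate visual and auditory terms, with
# no interaction term. All parameter values here are hypothetical.
import numpy as np

def p_right(v, a, bias=0.0, w_v=2.0, w_a=1.5):
    """Signed visual evidence v and auditory evidence a (negative =
    left) contribute independent, additive terms; a non-additive model
    would need an extra interaction term such as w_va * v * a."""
    z = bias + w_v * v + w_a * a        # additive combination
    return 1.0 / (1.0 + np.exp(-z))     # logistic choice rule

print(p_right(v=1.0, a=0.0))    # visual-only stimulus on the right
print(p_right(v=1.0, a=1.0))    # coherent audiovisual: more rightward
print(p_right(v=1.0, a=-1.0))   # conflicting cues: terms partially cancel
```

Under such a model, responses to combined stimuli are predicted by summing the unisensory contributions, which is the signature reported above for frontal cortex.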
Exploitation Route We are publishing these results in academic journals and the scientific community is citing them, indicating that the results are shaping scientific practice in our field.
Sectors Other

 
Title Chronic Electrophysiology Implant for Mice 
Description The "Apollo Implant" is a device for reversible chronic implantation of Neuropixels probes. The implant comprises two modules that are combined before implantation. The payload module accommodates up to two Neuropixels 2.0 probes, for a maximum of 8 shanks in the brain, and is recoverable (~€8). The docking module is cemented to the skull during implantation and is not recoverable (~€2). The implant is made of Formlabs Rigid Resin and weighs ~2.5 g with two 1.0 probes and ~1.7 g with two 2.0 probes. Its design is open source and can be readily adjusted to change the angle of insertion or the distance between the probes. We used the Apollo Implant to insert the same Neuropixels 2.0 probes across at least 4 head-fixed mice with no noticeable reduction in recording quality. Recordings were stable across weeks and sometimes months, allowing us to successfully track the same neurons over days and record from the entire probe (8 banks) while minimizing set-up time. With these implants, we explored neural responses across various parts of the brain (including superior colliculus, visual cortex, frontal cortex, and striatum) during an audiovisual behavioral task. Doing so, we could obtain many behavioral trials for each brain region, thus allowing finer analysis of the relationship between neural activity and choices of the animal. 
Type Of Material Technology assay or reagent 
Year Produced 2022 
Provided To Others? Yes  
Impact This implant has already been disseminated to more than 15 other labs, and is in active use by others. 
 
Title Lightweight, reusable chronic implants for Neuropixels probes 
Description Neuropixels probes have revolutionized electrophysiology, dramatically increasing the number of neurons that can be recorded in a single experiment in small animals such as mice. With chronic recordings, these neurons can be tracked across days, but this typically requires the experimenter to permanently cement the probe in place. There is thus substantial interest in developing chronic implants that are recoverable while being light enough to be used in mice. With funding from this grant we have developed the "Apollo Implant", a device for reversible chronic implantation of Neuropixels 2.0 probes. The implant comprises two modules that are combined before implantation. The implant weighs ~2.0 g with two probes. Its design is open source and can be readily adjusted to change the angle of insertion or the distance between the probes. The Apollo Implant provides a cheap, lightweight, and flexible solution for chronic recordings in head-fixed mice. We are currently developing versions of the implant optimized for Neuropixels 1.0 probes and freely moving animals.
Type Of Material Technology assay or reagent 
Year Produced 2022 
Provided To Others? Yes  
Impact Too early to tell; we will present it at a conference in Paris in July.