Learning mechanisms for perceptual decisions in biological and artificial neural systems

Lead Research Organisation: University of Cambridge
Department Name: Engineering

Abstract

It is common wisdom that practice makes perfect; that is, training improves our ability to solve difficult tasks and acquire new skills. For example, recognising objects in busy scenes or finding a friend in a crowd, seamless as it may seem, places significant demands on the brain, which must (1) detect and select targets from clutter and (2) discriminate whether similar features belong to the same or different objects. Training and experience improve our ability to make these perceptual judgements accurately and rapidly, leading to successful actions. Yet the way in which our everyday experiences change the brain is complex, and the precise mechanisms that the brain employs to solve new problems based on previous experience remain largely unknown.

Here we propose to build models and artificial systems, based on state-of-the-art mathematical algorithms, that allow us to simulate the workings of the brain and better understand how it learns. In our first study, we will use an inference method developed in artificial intelligence to infer, from high-resolution brain imaging data, changes in the brain circuits underlying our ability to recognise objects in cluttered scenes. This will allow us to identify aspects of the brain circuits (for example, suppressive or excitatory connections) that change when we train to improve our perceptual judgements. In our second study, we will construct a model of the brain's visual system that, similarly to artificial neural networks, learns from experience by optimising its internal connections. Unlike artificial networks, our proposed model is inspired by our knowledge of the brain's connections and integrates key biological aspects of brain circuitry. By training this network on various perceptual judgement tasks, we will make predictions about the brain mechanisms that underlie the brain's ability to improve its judgements.

We will test and validate these models against existing data that we collected using state-of-the-art magnetic resonance imaging to trace how the brain changes its function with learning at much finer resolution than previously possible. Further, we have exploited advances in MR imaging of metabolites to measure GABA, the primary neurotransmitter that the brain uses to suppress rather than excite its neurons. We have previously shown that GABA plays a critical role in learning to improve our perceptual skills.

We will use the developed models to understand the link between training-induced changes in the brain's function and neurochemistry. In particular, we ask a) how changes in the brain's neurochemistry relate to changes in brain function, and b) how learning alters the balance of the brain's chemical signals (excitation vs. inhibition) to boost the brain's flexibility and capacity to perform everyday tasks. Understanding these key processes of brain plasticity will, in turn, inform the design of better artificial systems. These systems will allow us to make new predictions about how the brain works, advancing our understanding of how the brain supports our ability to learn and adapt to changes in our environment across the lifespan. Finally, these brain-inspired artificial systems may improve in their learning and advance digital technologies (e.g. brain-computer interface solutions) for patients with neurological disorders who are impaired in their ability to interact with the environment.

Technical Summary

Despite the fundamental role of learning in guiding our decisions, we know surprisingly little about how the brain learns to improve our judgements. Combining recent advances in computational modelling, machine learning, and brain imaging provides a unique opportunity to interrogate the computational principles and fine-scale circuit mechanisms that underlie learning for perceptual decisions.
Here, we propose an AI-inspired computational framework that integrates mechanistic circuit modelling and ultra-high-field brain imaging to a) interrogate the adaptive computations and mechanisms that boost skills at the core of visual recognition (i.e. detecting targets in clutter; discriminating similar objects), b) compare these mechanisms in biological and artificial systems.
We will use two complementary model-based approaches to gain insight into the circuit mechanisms and computational principles that underlie learning-induced changes in networks of excitation and inhibition. First, we will adopt a data-driven mechanistic modelling approach: using modern inference methods from machine learning, we will fit a mechanistic model of cortical circuits that we have previously developed to existing brain imaging data (high-resolution fMRI across cortical layers, MR spectroscopy measurements of neurotransmitters). Second, we will adopt a normative approach guided by principles of optimal learning: we will train a deep network model of the visual cortex that respects biological constraints on excitatory and inhibitory connectivity. We will test the hypothesis that training optimises perceptual decisions by altering the balance of cortical excitation and inhibition in line with task-specific computations to support adaptive behaviour.
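To make the second approach concrete, the standard way to build such a biological constraint into a trainable network is Dale's principle: each presynaptic unit is either excitatory (all outgoing weights non-negative) or inhibitory (all non-positive). The following is a minimal illustrative sketch, not the proposal's actual model; the layer sizes, the fraction of inhibitory units, and the sign-masking parameterisation are all assumptions chosen for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_out = 8, 4          # hypothetical layer sizes
frac_inhibitory = 0.25      # assumed fraction of inhibitory units

# Fixed cell types: +1 for excitatory, -1 for inhibitory presynaptic units.
signs = np.where(rng.random(n_in) < frac_inhibitory, -1.0, 1.0)

# Free parameters are unconstrained; the effective connection matrix is made
# sign-consistent by construction, so gradient-based training can never
# flip a unit's type: W_eff[:, j] = |W_free[:, j]| * signs[j].
W_free = rng.normal(size=(n_out, n_in))
W_eff = np.abs(W_free) * signs[None, :]

# Forward pass through the constrained linear layer.
x = rng.random(n_in)
y = W_eff @ x

# Each column of W_eff carries a single sign, as Dale's principle requires.
assert all(np.all(W_eff[:, j] >= 0) or np.all(W_eff[:, j] <= 0)
           for j in range(n_in))
```

Under this parameterisation, learning reshapes the magnitudes of excitatory and inhibitory connections while their signs stay fixed, which is what allows training to shift the excitation/inhibition balance without violating the biological constraint.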
Our cross-disciplinary approach will a) advance our understanding of adaptive brain computations across scales, linking local circuits to whole-brain networks, b) inform the development of next-generation biologically inspired artificial systems.