Creating artificially intelligent neuroscience probes to determine how the brain makes decisions

Lead Research Organisation: Newcastle University
Department Name: Sch of Engineering

Abstract

Summary of Proposed Research
Reinforcement learning (RL) models are widely used both to study biological learning and the role of subcortical structures in this process, and to build artificial intelligence agents. However, the classical account of dopamine-based reinforcement learning in the brain does not fully capture the higher-order processes associated with reinforcement learning models in artificial intelligence. This model-free interpretation of RL is computationally inexpensive but suffers from two primary drawbacks: data inefficiency, since it requires large amounts of experience to achieve accurate value estimates, and inflexibility, since it is insensitive to changes in the value of outcomes. A core distinction of mammalian cognition is its ability to rapidly learn new concepts from only a handful of examples, leveraging prior knowledge and goal-associated 'state-value associations' to enable flexible inductive inferences. A growing body of work has emphasised the importance of neocortical contributions to goal representation in flexible decision-making; as such, new models need to be developed that incorporate these diverse goal-driven behavioural and cognitive functions.
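
The two drawbacks above can be illustrated with a minimal, purely illustrative sketch (not part of the proposal) of a tabular, model-free TD(0) value update of the kind often identified with dopaminergic reward-prediction-error signalling; all names and parameter values here are hypothetical choices made only for the example.

    import numpy as np

    # Minimal sketch of a model-free TD(0) update (illustrative assumptions only).
    n_states = 5
    V = np.zeros(n_states)        # cached state values
    alpha, gamma = 0.1, 0.95      # learning rate, discount factor

    def td_update(s, r, s_next):
        """One model-free update: values change only via the prediction error."""
        delta = r + gamma * V[s_next] - V[s]   # reward prediction error
        V[s] += alpha * delta
        return delta

    # Data inefficiency: many repeated experiences are needed before V converges,
    # because each transition nudges the cached value by only a small step.
    for _ in range(1000):
        td_update(s=2, r=1.0, s_next=3)

    # Inflexibility: if the outcome reached from state 3 is later devalued,
    # V[2] still reflects the old cached value until the transition is
    # re-experienced many times; there is no internal model to re-plan with.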

The ability to simultaneously image, stimulate and record spatiotemporal activity from many individual neurons provides a promising framework for quantifying animal behaviour and investigating its relationship with population-level neuronal activity in freely moving rodents, offering a better understanding of the complementary learning systems of the mammalian brain. Through these emerging technologies for high-density probing of the brain, neuroscientists can gather increasingly vast data sets. This presents the challenge of how to efficiently analyse these neuronal recordings and correlate them with observed behaviour to gain meaningful insights into biological learning and flexible decision-making.
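
As a purely illustrative sketch (not the project's analysis pipeline), one common first step in relating population recordings to behaviour is to bin both signals on a shared time base and compute per-neuron correlations; the arrays below (calcium_traces, running_speed) are hypothetical stand-ins for recorded data.

    import numpy as np

    # Hypothetical data: dF/F per neuron per time bin, and one behavioural variable per bin.
    rng = np.random.default_rng(0)
    n_neurons, n_bins = 50, 2000
    calcium_traces = rng.standard_normal((n_neurons, n_bins))
    running_speed = rng.standard_normal(n_bins)

    # Pearson correlation between each neuron's trace and the behavioural signal.
    traces_z = (calcium_traces - calcium_traces.mean(axis=1, keepdims=True)) \
               / calcium_traces.std(axis=1, keepdims=True)
    speed_z = (running_speed - running_speed.mean()) / running_speed.std()
    correlations = traces_z @ speed_z / n_bins

    # Neurons whose activity most strongly tracks the behavioural variable.
    top_neurons = np.argsort(np.abs(correlations))[::-1][:10]
    print(top_neurons, correlations[top_neurons])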

The proposed research aims to address this issue through the multi-sensory integration of fluorescence confocal microscopy using head-mounted miniscopes, electrophysiological stimulation and recording of neuronal activity, and machine learning methodologies for the real-time analysis of neuronal data in the study of flexible learning in freely moving subjects. Through the development and implementation of complex behavioural assays, this research aims to build a systems-neuroscience-level understanding of the brain, namely the algorithms, architectures, functions and representations it utilises. This corresponds to the top two levels of analysis believed to be required to understand any complex biological system: the goal of the system (the computational level) and the processes that realise this goal (the algorithmic level).

Significance
Through the combination of these systems, the aim is to facilitate multi-modal control of stimulation and recording methodologies that complement confocal calcium imaging of freely moving rodents performing behavioural experiments. This integrated system will allow the collection of dense data sets across a range of modalities. Combined with real-time data analysis, this will provide a solid foundation from which to further explore the neural underpinnings of complex cognition and thus to develop new, flexible models of reinforcement learning for artificial intelligence agents based on biological learning in dynamic environments.

Publications


Studentship Projects

Project Reference | Relationship | Related To | Start | End | Student Name
EP/T517914/1 | | | 01/10/2020 | 30/09/2025 |
2595520 | Studentship | EP/T517914/1 | 01/10/2021 | 31/03/2025 | Adam Chapman