Brain-inspired disinhibitory learning rule for continual learning tasks in artificial neural networks

Lead Research Organisation: Imperial College London
Department Name: Bioengineering

Abstract

Machines are achieving near-human performance at learning tasks such as image categorisation or speech recognition, but most state-of-the-art solutions excel only in fixed environments. Systems deployed in real-world scenarios, on the other hand, need to be able to learn in changing environments. Why is this a challenge? Every time a typical learning system encounters a new task, it overwrites the solution to previous tasks with what it learns on the new one. Imagine a robot used for elderly care: after two months of training it to carry a person up and down the stairs, the house is renovated and the robot learns to transport the person with a temporary lift. It would be unfortunate if the robot thereby unlearned its skill for navigating stairs and had to relearn it for another two months after the renovation. Current state-of-the-art machine learning algorithms have exactly this limitation, a challenge called continual learning, or life-long learning.

For this EPSRC Fellowship, we plan to develop a brain-inspired learning algorithm and test it in artificial neural networks solving a continual learning task. So, let us look at how the brain might solve continual learning. Humans have the fascinating ability to adapt to their environment and to memorise experiences; both require memory. We can learn quickly and remember for a long time, but this leads to a dilemma: in order to learn quickly, the brain needs to change easily, i.e. be plastic, but in order to remember for a long time, the brain must not be too plastic. The basis of learning and memory at the neural level is change in the connections between neurons, called synaptic plasticity. Scientists have characterised synaptic plasticity thoroughly, focusing on excitatory neurons while largely neglecting the role of inhibitory neurons. We suggest that, instead of learning equally from all experiences, the solution to the dilemma is to regulate which memories to learn, thereby avoiding unnecessary overwriting of important memories. We propose that inhibition is the key to regulating learning, in that lowering inhibition opens a gate for learning. To test this hypothesis, in this EPSRC Fellowship we will investigate the interaction of excitatory and inhibitory plasticity using computational models of recurrent networks. We will then test whether and how inhibition gates synaptic plasticity and therefore learning. Finally, we will test the performance of our brain-inspired learning rule on a continual learning task of navigation under a reinforcement learning framework.
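
To make the gating idea concrete, the short sketch below shows one simple way inhibition could gate a Hebbian weight update in a rate-based network. It is purely illustrative and is not the rule to be developed in this fellowship: the sigmoidal gate, the constants, and all variable names are assumptions introduced only for the example.

import numpy as np

# Illustrative sketch (not the project's actual model): a Hebbian update on
# excitatory weights that is gated by how strongly each postsynaptic neuron
# is currently inhibited. Low inhibition (disinhibition) opens the gate for
# plasticity; strong inhibition protects existing weights from overwriting.

rng = np.random.default_rng(0)

N_PRE, N_POST = 50, 20          # network sizes chosen only for the demo
ETA = 1e-2                      # excitatory learning rate (assumed)
THETA = 0.5                     # inhibition level at which the gate is half open
BETA = 10.0                     # gate steepness (assumed)

W_exc = rng.uniform(0.0, 0.1, size=(N_POST, N_PRE))    # excitatory weights

def plasticity_gate(inhibition):
    """Sigmoidal gate in [0, 1]: close to 1 when inhibition is low."""
    return 1.0 / (1.0 + np.exp(BETA * (inhibition - THETA)))

def disinhibitory_hebbian_update(W, pre_rates, post_rates, inhibition):
    """Hebbian update scaled, per postsynaptic neuron, by the disinhibition gate."""
    gate = plasticity_gate(inhibition)                  # shape (N_POST,)
    dW = ETA * gate[:, None] * np.outer(post_rates, pre_rates)
    return np.clip(W + dW, 0.0, 1.0)                    # keep weights bounded

# Example step: neurons receiving little inhibition learn, the others barely change.
pre = rng.random(N_PRE)
post = rng.random(N_POST)
inhibition = rng.random(N_POST)                         # inhibitory input per neuron
W_exc = disinhibitory_hebbian_update(W_exc, pre, post, inhibition)

In this toy version, disinhibiting a neuron (low inhibitory input) lets its incoming excitatory synapses change, while strongly inhibited neurons keep their weights essentially fixed, which is the sense in which lowering inhibition "opens a gate" for learning while protecting stored memories elsewhere.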

Planned Impact

***Providing tools for the experimental neuroscience community: Our computational models will benefit the experimental neuroscience community by allowing scientific hypotheses to be tested before any experiments are performed; this will reduce animal usage and provide novel tools that speed up science by increasing the number of possibilities that can be tested. To this end, we will publish our code in a standard database in the field (ModelDB) along with an easy-to-use graphical interface.

***Technological impact: This work could benefit current and future developers of smart technologies, since the learning rules developed in this fellowship will inform new machine learning algorithms for continual learning. In particular, we will use our industry collaboration with Google DeepMind, London, to ensure that our learning rules can refine their state-of-the-art deep learning networks and serve as another stepping stone toward general-purpose artificial intelligence (CC consults for Google twice a month). We will also use our industrial contact at Qualcomm, USA (Eugene Izhikevich) to ensure that our work feeds into marketable neuromorphic chip design technology.

***Health impact: On a longer timescale, this work will benefit people with diseases related to learning and memory, such as Alzheimer's disease, as well as people with connectivity disorders such as autism and schizophrenia. Finally, it will benefit the growing ageing population in the UK and around the world, since this work will set a reference point for the amount of plasticity seen in adults with natural or biased input statistics. It has been shown that synaptic turnover is increased in the aged brain; our work can therefore be a starting point for studying the behaviour and impact of synaptic plasticity in the ageing brain.

***Educational impact: We will train a new generation of scientists by training the staff in my laboratory and by teaching at summer schools. More importantly, we will train a new generation of non-academic workers in the UK through CC's computational neuroscience course in the Bioengineering Department at Imperial College London, which teaches a solid skillset for working in pharmaceutical, biotechnology, or engineering companies, including high-tech companies using machine learning or robotics, as well as in banks and insurance companies that use artificial neural network techniques.

***Public engagement: We aim to raise the broader public's awareness of the discoveries of neuroscience and to communicate the great scientific challenges of the future. To this end, we propose to work with the outreach manager of my Department to set up a number of activities, such as press releases after publications, maintenance of a webpage with breaking news, stands at two Imperial College London outreach events (the Imperial Festival and the Imperial Fringe), and talks to prospective students during Imperial's open days.

Publications

 
Description We developed brain-inspired learning rules and are now testing them in non-stationary environments.
Exploitation Route Technology application in machine learning
Sectors Digital/Communication/Information Technologies (including Software), Education