ActiveAI - active learning and selective attention for robust, transparent and efficient AI

Lead Research Organisation: University of Sussex
Department Name: Sch of Engineering and Informatics

Abstract

We will bring together world leaders in insect biology and neuroscience with world leaders in biorobotic modelling and computational neuroscience, creating a partnership that will transform our understanding of active learning and selective attention in insects, and of how these mechanisms can be applied to robots, autonomous systems and artificial intelligence (AI). By considering how brains, behaviours and the environment interact during natural animal behaviour, we will develop new algorithms and methods for rapid, robust and efficient learning in autonomous robotics and AI for dynamic, real-world applications.

Recent advances in AI, and notably in deep learning, have proven incredibly successful in creating solutions to specific complex problems (e.g. beating the best human players at Go, and driving cars through cities). But as we learn more about these approaches, their limitations are becoming more apparent. For instance, deep learning solutions typically need a great deal of computing power, extremely long training times and very large amounts of labelled training data, which are simply not available for many tasks. While they are very good at solving specific tasks, they can be quite poor (and unpredictably so) at transferring this knowledge to other, closely related tasks. Finally, scientists and engineers are struggling to understand what their deep learning systems have learned and how well they have learned it.

These limitations are particularly apparent when contrasted with the naturally evolved intelligence of insects. Insects certainly cannot play Go or drive cars, but they are incredibly good at doing what they have evolved to do. For instance, unlike any current AI system, ants learn to forage effectively with the limited computing power of their tiny brains and only minimal exploration of their world. We argue this difference arises because natural intelligence is a property of closed-loop brain-body-environment interactions. Evolved innate behaviours, in concert with specialised sensors and neural circuits, extract and encode task-relevant information with maximal efficiency, aided by mechanisms of selective attention that focus learning on task-relevant features. This focus on behaving, embodied agents is under-represented in present AI technology but offers solutions to the issues raised above. These solutions can be realised by pursuing research in AI in its original sense: a description and emulation of biological learning and intelligence that both replicates animals' capabilities and sheds light on the biological basis of intelligence.
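As a purely illustrative sketch of this principle (in Python with NumPy; the attention mask, reward rule and all parameters below are assumptions for illustration, not the project's models), the following toy example shows a reward-driven delta rule in which plasticity is gated by an attention signal, so that only task-relevant features are learned and far fewer trials and weight changes are needed than with an unfocused rule:

    import numpy as np

    rng = np.random.default_rng(0)

    n_features = 50          # dimensionality of the (simulated) sensory input
    n_relevant = 5           # only a few features actually predict reward
    w = np.zeros(n_features) # learned association weights
    eta = 0.1                # learning rate

    # Hypothetical attention mask: non-zero only for task-relevant features.
    attention = np.zeros(n_features)
    attention[:n_relevant] = 1.0

    for trial in range(200):
        x = rng.normal(size=n_features)   # sensory snapshot on this trial
        reward = x[:n_relevant].sum()     # reward depends only on relevant features
        prediction = w @ x
        error = reward - prediction
        # Attention-gated, reward-modulated update: only attended features learn,
        # so far fewer trials and synapses are needed than with an unfocused rule.
        w += eta * error * attention * x

    print("learned weights on relevant features:", np.round(w[:n_relevant], 2))
    print("largest weight on irrelevant features:", np.round(np.abs(w[n_relevant:]).max(), 3))

In the project itself, such a gating signal would arise from innate behaviours and measured attentional mechanisms in the insect brain rather than being hand-specified as it is here.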

This endeavour entails studying the workings of the brain in behaving animals, as it is crucial to know how neural activity interacts with, and is shaped by, the environment, body and behaviour, and how this interplay is modulated by selective attention. Such experiments are now possible by combining recent advances in neural recordings from flies and hoverflies, which can identify neural markers of selective attention, with virtual reality experiments for ants; both techniques were pioneered by the Australian team. Together with verification of emerging hypotheses in large-scale neural models running on-board robotic platforms in the real world, an approach pioneered by the UK team, this project represents a unique and timely opportunity to transform our understanding of learning in animals and, through this, learning in robots and AI systems.

We will create an interdisciplinary, collaborative research environment with a "virtuous cycle" of experiments, analysis, and computational and robotic modelling. New findings feed forward and back around this cycle, each discipline informing the others, to yield a functional understanding of how active learning and selective attention enable small-brained insects to learn about a complex world. Through this understanding, we will develop ActiveAI algorithms that learn rapidly, are efficient in both learning and final network configuration, and are robust to real-world conditions.

Planned Impact

We will combine expertise in insect neuroscience with biomimetic robotic control to gain a functional understanding of how active learning and selective attention underpin rapid and efficient visual learning. Through this, we will develop ActiveAI algorithms, reinforcement learning methods and artificial neural network (ANN) architectures for robotics and AI that learn rapidly, are computationally efficient, work with limited training data and are robust to novel and changing scenarios.

Industrial impact
Our novel sensing, learning and processing algorithms offer impact in robotics and autonomous systems (RAS) and in AI more generally. AI problems are currently tackled by increasing computational and training resources; we take a fundamentally different approach, using insects as inspiration for efficient algorithms. Specifically, we will develop smart movement patterns that combine with attentional mechanisms to aid the identification and extraction of task-relevant information, reducing computational and training loads. We foresee two immediate problem domains in RAS: those where learning speed is highly constrained (e.g. disaster recovery robots, exploration, agri-robotics); and those where computational load and energy usage are limited (e.g. UAVs, agritech, space robotics). Longer term, we foresee applications in general AI, where a new class of highly efficient and thus scalable ANNs is required to realise grand challenges such as General Intelligence.
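As a purely illustrative example of how choosing where to sample can reduce training load (again a toy Python sketch; the scenario, function names and tolerance are hypothetical and not project deliverables), the code below compares how many sensor queries are needed to localise a task-relevant boundary when positions are sampled at random versus when each query is placed at the currently most informative point:

    import numpy as np

    rng = np.random.default_rng(1)

    def sense(position, boundary):
        """Noiseless sensor: is this position at or beyond the boundary?"""
        return position >= boundary

    def passive_samples_needed(boundary, tol=1e-3, max_queries=10_000):
        """Sample positions at random until the boundary is bracketed within tol."""
        lo, hi = 0.0, 1.0
        for n in range(1, max_queries + 1):
            p = rng.uniform(0.0, 1.0)
            if sense(p, boundary):
                hi = min(hi, p)
            else:
                lo = max(lo, p)
            if hi - lo < tol:
                return n
        return max_queries

    def active_samples_needed(boundary, tol=1e-3):
        """Always query the most informative position: the midpoint of the uncertain interval."""
        lo, hi = 0.0, 1.0
        n = 0
        while hi - lo >= tol:
            p = 0.5 * (lo + hi)
            n += 1
            if sense(p, boundary):
                hi = p
            else:
                lo = p
        return n

    boundary = 0.37  # hypothetical task-relevant feature to be located
    print("passive (random) queries:", passive_samples_needed(boundary))
    print("active (chosen) queries: ", active_samples_needed(boundary))

Active querying localises the boundary in roughly log2(1/tolerance) queries, whereas random sampling typically needs orders of magnitude more; this is the kind of saving that smart movement and attentional strategies aim to deliver.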

We will ensure tight coupling to industrial needs through the established industrial members of the Brains on Board advisory board (Dyson, Parrot, NVidia and Google DeepMind), collaborators for robotic applications (Harper Adams University, GMV and RALSpace) and Sheffield Robotics contacts (e.g. Amazon, iniVation, Machine With Vision), as well as by leveraging new opportunities through both UK and Australian universities' commercialisation operations (Macquarie University's Office of Commercialisation and Innovation Hub; Sussex Research Quality and Impact team and Sussex Innovation Centre; Sheffield Engineering Hub and Sheffield Partnerships and Knowledge Exchange team). Where possible, we will seek to commercialise knowledge through IP licensing and university-supported spin-outs. We already have experience of doing so, in particular optic flow commercialisation through ApisBrain (Marshall) and sensor commercialisation through Skyline Sensors (Mangan). In Sussex, this support will be provided by the Research Quality and Impact team and the Sussex Innovation Centre.

Impact on the team
PDRAs will receive cross-disciplinary training from the UK team in GPU computing, neural simulations, biorobotics and bio-inspired machine learning - very active and rapidly expanding areas, and sought-after skills in AI and robotics - as well as training from the Australian team in cutting-edge neuroscientific methods (electrophysiology and pharmacology combined with virtual reality-enabled behavioural experiments). This will prepare them for careers across academia and industry. In addition, UK co-I Mangan, as an early career researcher, will benefit from support and advice from the senior investigators (UK and Australia), supporting his development as an independent researcher.

Advocacy + general public
We firmly believe in the benefit of ethically aware technology development through responsible innovation. We have already created an ethical code of conduct for the Brains on Board project and engaged with government consultations. We will extend this work and, by promoting and adhering to this philosophy, we will have impact on policy through advocacy and on the general public through continuation of our extensive public engagement activities, e.g. regular public lectures (Cafe Scientifique, Nerd Nite, U3A etc.), media appearances (BBC, ABC radio, BBC Southeast) and large outreach events (e.g. Royal Society Science Exhibition 2010, British Science Festival 2017, Brighton Science Festivals 2010-2018).

Academic Impact
We will impact AI, robotics and neuroscience (see Academic Beneficiaries).
