Emergent embodied cognition in shallow, biological and artificial, neural networks

Lead Research Organisation: University of Sussex
Department Name: Sch of Life Sciences

Abstract

Natural intelligence has been shaped by evolution for specific tasks in specific environments, and to be adaptable, allowing rapid and robust lifelong learning. These properties of cognition are not reserved for animals with large brains. Insects such as ants and bees show impressive cognitive performance within their natural foraging behaviours. For instance, bees are expert navigators and can learn multiple complex routes based on visual memories. Similarly, bees can rapidly learn the floral patterns of rewarding flowers from which to collect nectar and pollen. We believe that the impressive performance of insects on such tasks arises from specialisation: evolution has shaped sensors, neural circuits and behaviours for solving flower-learning and navigation tasks. Consequently, bees show better-than-human performance on some tasks despite having brains a million times smaller. It is thus crucially important to understand how sensory systems and learning strategies can be adapted to tasks and environments such that small neural circuits produce complex cognition. In this project, we will leverage recent advances in the speed of computational neuroscience simulations to systematically investigate the emergence of cognition in small neural networks and to tease apart the contributions of body, brain and environment.

While these issues are relevant to all animals, we will focus on insects. It is easier to determine neuroanatomy and observe neurophysiology in insects than vertebrates, and we have detailed descriptions of the neural circuits involved in cognition (sensory lobes, learning centres and motor control regions). Furthermore, the specific behaviours where insects demonstrate impressive cognition are very well described in the lab and in the wild. In both these regards, our understanding of insects is more detailed and comprehensive than for vertebrate model systems.

Insect brains are not only small but also shallow: there are only a few layers of processing between sensory input and motor output. We hypothesise that these shallow learning networks work so well because they interact with carefully tuned sensory systems and behaviours. To decipher these complex interactions, we will create a simulated world and a spiking neural network model of key insect brain regions. With this model, we will investigate properties of visual cognition in tasks inspired by the foraging of bees. Our bee-like agents will move in a 3D simulated world, with a sensory system that replicates insect vision and with learning implemented as models of the Mushroom Bodies (the insect learning circuits). Crucially, we will independently manipulate these components and use optimisation methods to ask which classes of sensory system and behaviour produce the best learning performance.
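The core learning circuit can be illustrated in miniature. The sketch below is a deliberately simplified, rate-based caricature of a Mushroom Body-style network, not the project's actual spiking implementation: visual input is expanded into a sparse Kenyon-cell code, and a reward signal depresses the synapses of active Kenyon cells onto an output neuron, so learned patterns acquire a distinct valence. All sizes, learning rates and variable names here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

N_PN, N_KC, SPARSITY = 100, 2000, 0.05            # projection neurons, Kenyon cells
W_pn_kc = (rng.random((N_KC, N_PN)) < 0.1).astype(float)  # fixed random expansion
w_kc_mbon = np.ones(N_KC)                          # plastic KC -> output (MBON) weights

def kc_response(visual_input):
    """Sparse Kenyon-cell code: only the most strongly driven ~5% of cells fire."""
    drive = W_pn_kc @ visual_input
    k = int(SPARSITY * N_KC)
    active = np.zeros(N_KC)
    active[np.argsort(drive)[-k:]] = 1.0
    return active

def learn(visual_input, reward, lr=0.2):
    """Reward-gated depression of active KC synapses (dopamine-like signal)."""
    global w_kc_mbon
    kc = kc_response(visual_input)
    w_kc_mbon -= lr * reward * kc * w_kc_mbon      # only active KCs are depressed

def valence(visual_input):
    """MBON output: lower for learned (rewarded) patterns than novel ones."""
    return kc_response(visual_input) @ w_kc_mbon

# a rewarded flower pattern becomes distinguishable after a few visits
flower = rng.random(N_PN)
novel = rng.random(N_PN)
before = valence(flower)
for _ in range(5):
    learn(flower, reward=1.0)
```

In the project itself this circuit would be spiking and embedded in a closed sensory-motor loop; the sketch only shows why a sparse expansion followed by a single plastic layer can support rapid learning with very few exposures.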

Investigations of this kind have been envisioned before. However, computationally determining the contributions of brain, body and environment to cognition requires optimising multiple parameters at different levels of the agent. Optimisation methods for such multi-level problems exist (meta-learning, or "learning to learn", algorithms), but common to these approaches is the need for very large numbers of evaluations. Only now can we explore these questions, thanks partly to increases in computational power and partly to our recent breakthroughs in the speed of spiking neural network simulation on GPUs and in insect-eye rendering technology.
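The nested structure of such an optimisation can be sketched as follows. This is a toy illustration of the meta-learning loop, not the project's algorithms: the "inner loop" here is a stand-in fitness function in which learning performance peaks at an intermediate sensor resolution, and the "outer loop" is a simple evolutionary search over that single sensory parameter. Every name and number is an assumption made for illustration; the point is only that each outer-loop step requires a full inner-loop evaluation, which is why simulation speed is the bottleneck.

```python
import random

random.seed(1)

def inner_loop(sensor_resolution, task_optimum=0.6):
    """Stand-in for an agent learning a task with a given sensory system.
    Performance peaks at an intermediate resolution: too coarse loses detail,
    too fine overwhelms a small network. Returns a noisy performance score."""
    return max(0.0, 1.0 - abs(sensor_resolution - task_optimum)) + random.gauss(0, 0.02)

def outer_loop(generations=30, pop_size=20, sigma=0.1):
    """Evolutionary meta-optimisation over the sensory parameter: each
    candidate is scored by running the (expensive) inner learning loop."""
    pop = [random.random() for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=inner_loop, reverse=True)
        parents = ranked[: pop_size // 4]          # truncation selection
        pop = [min(1.0, max(0.0, random.choice(parents) + random.gauss(0, sigma)))
               for _ in range(pop_size)]
    return max(pop, key=inner_loop)

best = outer_loop()
```

Even this toy requires generations x population inner-loop evaluations; with spiking agents in 3D worlds, each evaluation is a full simulated lifetime, which motivates the GPU-accelerated simulation described above.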

We therefore have the opportunity to investigate the relationships between sensory environment, brains and behaviour in detailed models of insect visual cognition. The understanding of how intelligence emerges from shallow neural networks will be fundamental to neuroscience and cognitive science, but also has the potential to produce more natural AI algorithms.

Technical Summary

Despite small brains, insects are capable of complex cognition. The experimental tractability of insect neuroanatomy means we have excellent descriptions of sensory lobes, learning centres and motor control regions. Furthermore, the behaviours in which insects demonstrate their impressive cognition are well described both in the lab and in the wild. In both these regards, our understanding of insects is more detailed and comprehensive than for vertebrate model systems. We will therefore study how the sensory systems, behaviour and learning of insects, honed by evolution for particular tasks and environments, allow cognition to emerge from shallow neural networks. Using meta-learning methods, enabled by recent increases in the speed of GPU-accelerated simulation technology and in the replication of insect vision, we will systematically investigate the interactions of these components and their impact on learning and cognition.

We will build simulated natural environments in the Unity game engine. Bee-like agents will interact with these environments via configurable insect visual systems and spiking neural networks representing the Mushroom Bodies, circuits of the insect brain involved in visual cognition. Agents will learn visual information for two types of real-world learning (recognition vs categorisation) across two tasks: flower learning and navigation. We will use diverse, and possibly conflicting, performance metrics: learning accuracy, speed, robustness and memory capacity. We will therefore use multi-objective meta-learning algorithms to optimise sensory systems and/or behaviour, enabling us to compare emergent solutions both across tasks and environments and across learning metrics.
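Multi-objective optimisation differs from single-objective search in that it returns a set of trade-offs rather than a single winner. A minimal sketch of the underlying idea, Pareto dominance over conflicting learning metrics, is shown below; the configurations and scores are hypothetical examples, not results.

```python
def dominates(a, b):
    """a dominates b if it is at least as good on every metric and strictly
    better on at least one (all metrics maximised)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(candidates):
    """Keep the candidates not dominated by any other: the trade-off surface
    between conflicting learning metrics."""
    return [c for i, c in enumerate(candidates)
            if not any(dominates(o, c) for j, o in enumerate(candidates) if j != i)]

# (accuracy, speed, robustness) for four hypothetical sensory configurations
scores = [(0.9, 0.2, 0.5), (0.7, 0.8, 0.6), (0.9, 0.2, 0.4), (0.6, 0.7, 0.5)]
front = pareto_front(scores)
```

Here the front retains two configurations: one maximising accuracy and one trading accuracy for speed and robustness. Multi-objective algorithms such as NSGA-II build their selection step on exactly this dominance test, which is what allows emergent solutions to be compared across learning metrics rather than collapsed into a single score.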

Experiments will systematically investigate how variations in task, in the natural image statistics of the environment, and in behaviour lead to differences in the optimal sensory systems and in learning performance, thus elucidating the roles that these components play in cognition.
