Swarm exploration: multi-robot visual navigation through insect-inspired strategies

Lead Research Organisation: University of Sussex
Department Name: Sch of Engineering and Informatics

Abstract

This interdisciplinary project will form part of the EPSRC-funded Brains on Board (BoB; brainsonboard.co.uk) programme grant, a multi-university project in which we aim to create robots with the learning abilities of bees.
The views experienced as a robot traverses a route are used to train an artificial neural network (ANN) to learn a compact encoding of the familiarity of route views. Once trained, the network estimates whether new views - and thus new poses - have been experienced before. For route guidance, robots move, sampling different views, and head in the directions the ANN reports as familiar. As the route knowledge is encoded in the weights of an ANN, it can easily be transmitted between robots. However, to be useful, the information needs to be encoded in a way that is independent of the transmitting robot's perspective, and knowledge from multiple robots needs to be carefully combined. The project entails three distinct stages.
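
As a concrete illustration, the minimal Python sketch below implements this scan-and-head-to-familiar-views loop. A shallow tied-weight autoencoder stands in for the familiarity network, and render_view is a hypothetical camera function; the architecture and all parameter values are illustrative assumptions, not the project's actual implementation.

```python
"""Minimal sketch of familiarity-based route guidance.

Assumptions: a tied-weight linear autoencoder as the familiarity
measure (the grant does not fix the architecture) and a hypothetical
`render_view(pose, heading)` that returns the view the robot would
see at `pose` facing `heading`.
"""
import numpy as np

class FamiliarityNet:
    def __init__(self, n_pixels, n_hidden, lr=0.01, seed=0):
        rng = np.random.default_rng(seed)
        # Tied weights: encode with W, decode with W.T.
        self.W = rng.normal(0, 1.0 / np.sqrt(n_pixels), (n_hidden, n_pixels))
        self.lr = lr

    def familiarity(self, view):
        # High familiarity == low reconstruction error.
        x = np.ravel(view)
        return -np.sum((x - self.W.T @ (self.W @ x)) ** 2)

    def train(self, route_views, epochs=20):
        # SGD on the squared reconstruction error of the route views,
        # so previously experienced views score as familiar afterwards.
        for _ in range(epochs):
            for view in route_views:
                x = np.ravel(view)
                h = self.W @ x
                err = x - self.W.T @ h
                self.W += self.lr * (np.outer(h, err) + np.outer(self.W @ err, x))

def most_familiar_heading(net, pose, render_view, n_headings=60):
    """Scan candidate headings and return the most familiar one."""
    headings = np.linspace(0, 2 * np.pi, n_headings, endpoint=False)
    scores = [net.familiarity(render_view(pose, h)) for h in headings]
    return headings[int(np.argmax(scores))]
```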

Stage 1: Perspective-invariant route encodings for natural environments [12 months]. The low-frequency components of panoramic images hold redundant information for recovering azimuthal direction and can potentially be made perspective-invariant (ants, for example, navigate using visual features of the skyline profile). We will explore perspective-invariant visual processing (drawing inspiration from visual neuroscience) and incorporate IMU information into the route learning and navigation algorithms.
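
The sketch below illustrates one encoding of this kind: a skyline profile extracted from a panoramic image and summarised by the amplitudes of its lowest azimuthal Fourier components, which are unchanged by rotation about the vertical axis. The brightness-threshold skyline detector and the choice of eight components are assumptions for illustration only.

```python
"""Sketch: a low-frequency, rotation-tolerant encoding of a panorama.

Assumes `pano` is an (elevation x azimuth) grayscale array with values
in [0, 1]. The amplitude spectrum along azimuth is invariant to
circular shifts (i.e. rotations about the vertical axis), and low
frequencies change slowly under small translations.
"""
import numpy as np

def skyline_profile(pano, sky_thresh=0.8):
    # For each azimuth column, find the first non-sky pixel from the top.
    is_ground = pano < sky_thresh                    # dark pixels = ground
    profile = is_ground.argmax(axis=0)               # row index of skyline
    profile[~is_ground.any(axis=0)] = pano.shape[0]  # columns with no ground
    return profile.astype(float)

def low_freq_encoding(profile, n_components=8):
    # Keep only the amplitudes of the lowest azimuthal frequencies:
    # these are identical for any circular shift of the profile.
    spectrum = np.fft.rfft(profile)
    return np.abs(spectrum[:n_components])
```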

Stage 2: UAV to wheeled robot transfer [12-24 months]. The second year will take the algorithms developed in Stage 1 into field tests of increasing difficulty, transferring navigation information between a UAV and a wheeled robot.
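
Since the route lives in the network weights, transfer between platforms reduces, in principle, to exchanging those weights, provided both robots reduce their views to the shared encoding from Stage 1. The sketch below is illustrative only: the file-based exchange, and the reuse of the FamiliarityNet and low_freq_encoding names from the earlier sketches, are assumptions rather than the project's interfaces.

```python
"""Sketch: sharing route knowledge between a UAV and a ground robot.

`net` is any object with a weight array attribute `W`, such as the
FamiliarityNet sketched above; the .npy exchange is a stand-in for
whatever transport the robots actually use.
"""
import numpy as np

def export_route(net, path="route_weights.npy"):
    np.save(path, net.W)      # the whole route, as one small array

def import_route(net, path="route_weights.npy"):
    net.W = np.load(path)     # receiving robot adopts the donor's route

# On either platform, views are first reduced to the shared encoding:
#   z = low_freq_encoding(skyline_profile(pano))
#   score = net.familiarity(z)
# so a view from 10 m up and a view from ground level are compared in
# the same low-frequency space rather than in raw pixel space.
```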

Stage 3: Route combination [24-36 months]. Finally, we will investigate methods for amalgamating route knowledge from multiple robots exploring outward from a hub. To do this, we will incorporate noisy positional estimates (from odometry or GPS) to assemble route 'spokes' with estimated positions into a form of topographic map, using meta-learning and rehearsal to test the route-spokes.
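
The sketch below shows one simple way such spokes could be knitted together: waypoints from different spokes whose noisy position estimates fall within a fixed radius are fused into shared map nodes, yielding a rough topographic graph. The data structures and the fixed-radius fusion rule are illustrative assumptions; the project will go further, refining the map with meta-learning and rehearsal.

```python
"""Sketch: merging route 'spokes' into a rough topographic map.

Each spoke is a list of (noisy position estimate, route network)
pairs recorded while exploring out from a shared hub. Nearby
waypoints from different spokes are fused into single map nodes.
"""
import numpy as np

class TopoMap:
    def __init__(self, merge_radius=2.0):
        self.nodes = []        # [(position, [networks observed here])]
        self.edges = set()     # node-index pairs along travelled routes
        self.merge_radius = merge_radius

    def _node_for(self, pos, net):
        # Fuse with an existing node if one lies within merge_radius,
        # otherwise create a new node for this waypoint.
        for i, (p, nets) in enumerate(self.nodes):
            if np.linalg.norm(p - pos) < self.merge_radius:
                nets.append(net)
                return i
        self.nodes.append((np.asarray(pos, dtype=float), [net]))
        return len(self.nodes) - 1

    def add_spoke(self, spoke):
        # Walk along the spoke, linking consecutive waypoints; shared
        # nodes are where spokes from different robots join up.
        prev = None
        for pos, net in spoke:
            cur = self._node_for(np.asarray(pos, dtype=float), net)
            if prev is not None and prev != cur:
                self.edges.add((min(prev, cur), max(prev, cur)))
            prev = cur
```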
