Bioinspired vision processing for autonomous terrestrial locomotion

Lead Research Organisation: University of Bristol
Department Name: Mechanical Engineering

Abstract

Land vehicles have been designed almost exclusively to use wheels, whereas terrestrial animals almost exclusively use legs for locomotion. Wheeled systems can be fast and efficient on hard, flat ground; leg-based systems are more versatile and efficient on natural terrain. As we move towards a future of autonomous systems operating beyond the extent of the road network and on other planets, the development of robust artificial leg-based locomotion is likely to become increasingly important.
At present, several technological limits prevent the emergence of autonomous legged systems with performance comparable to that of animals. Even if a system emerged that could walk, run, leap, and turn without falling over, the technology to guide it safely through complex terrain using vision does not yet exist. Research into using vision for autonomous locomotion is typically undertaken using available vehicle technology, suggesting that high-performance, vision-guided legged systems might emerge only some time after a basic high-performance legged vehicle platform. In a novel approach, we will expedite the development of a vision control architecture for locomotion over complex terrain by using human subjects as high-performance vehicle platforms.
The visual scene captured by a head-mounted camera will be processed to identify terrain characteristics known to be important for the control of locomotion. A map of the terrain, synthesised in 3D virtual space and updated in real time, is presented to the human through a virtual reality headset. The overall outcome measure will be the locomotion performance achieved by humans using the system, compared with performance under normal vision and with no visual information available.
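As an illustration only (not the project's implementation), the sketch below shows one way a depth frame from a head-mounted camera could be reduced to a coarse terrain map with per-cell slope estimates, the kind of "terrain characteristics" referred to above. All function names, the camera-tilt simplification, and the parameter values are assumptions; only numpy is used.

```python
# Hypothetical sketch: depth image -> coarse height map -> per-cell slope.
import numpy as np

def depth_to_height_map(depth, cell=8):
    """Downsample a depth image (metres) into a coarse height grid.

    Height is crudely approximated as negative depth for a downward-tilted
    camera; a real system would project each pixel through the full camera
    pose into a world frame.
    """
    h, w = depth.shape
    grid = depth[:h - h % cell, :w - w % cell]
    grid = grid.reshape(h // cell, cell, w // cell, cell).mean(axis=(1, 3))
    return -grid

def terrain_slope(height_map, cell_size_m=0.05):
    """Per-cell slope magnitude (radians) from finite differences."""
    gy, gx = np.gradient(height_map, cell_size_m)
    return np.arctan(np.hypot(gx, gy))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    depth = 2.0 + 0.1 * rng.standard_normal((240, 320))  # synthetic frame
    heights = depth_to_height_map(depth)
    slopes = terrain_slope(heights)
    print("steepest cell (deg):", np.degrees(slopes.max()))
```

In a complete pipeline, the resulting grid would be meshed and rendered in the headset each frame; the sketch stops at the feature-extraction step.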
There are many benefits of this approach. It will allow us to investigate how humans modulate gait parameters and limb mechanics to compensate for partial or unreliable information about the environment. It will provide insight into the integration of feedforward and feedback control of locomotion. It will also allow us to determine the locomotion performance that is achievable from a given amount and quality of visually derived information, given a highly developed locomotor platform, and thus to understand how these two components of a high-performance locomotor system combine to determine overall performance.
The basic principles and technologies established during this project will be applicable to any land vehicle, whether based on wheels or legs. Additionally, the processing of visual information for locomotion control is a special case of the more general task of searching the ground for an object or visual feature. The technology developed in this project may therefore be translated to other applications in which visually guided autonomous function is required.

Planned Impact

(See Academic Beneficiaries for impact in the Academic community)

Systems for autonomous land locomotion have potential applications across most major industry sectors (see case for support). The nature of the work in this project is to develop fundamental enabling technology for autonomous locomotion, which will in turn enable a wide range of autonomous systems. The full extent of the impact will therefore be broad and long term. Two types of company may derive short-term benefit from our research: (1) solutions providers such as BAE Systems and SCISYS, which are already engaged in R&D programmes on autonomous systems; and (2) industries with specific and well-defined applications for autonomous systems, including the partner companies Sellafield Limited, Network Rail, and the National Nuclear Laboratory.

Core technology will be developed using advanced signal processing techniques to map features in the environment from fused video and kinematic data. Short-term beneficiaries include industries engaged in visual mapping, autonomous systems and robotics, and those with 'foraging' type applications, in which a large geographical area must be searched for localised visual features.
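To make the idea of video/kinematic fusion concrete, the following minimal sketch registers features detected in the camera frame into a fixed world frame using a head pose supplied by kinematic tracking. The coordinate conventions, function names, and values are assumptions for illustration, not details of the project's signal processing; only numpy is used.

```python
# Hypothetical sketch: place camera-frame detections into a world frame
# using a head pose (position + yaw/pitch) from kinematic data.
import numpy as np

def head_pose_matrix(position, yaw, pitch):
    """4x4 homogeneous transform from camera frame to world frame."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    R_yaw = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    R_pitch = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    T = np.eye(4)
    T[:3, :3] = R_yaw @ R_pitch
    T[:3, 3] = position
    return T

def features_to_world(points_cam, pose):
    """Map Nx3 camera-frame feature positions into the world frame."""
    pts = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (pose @ pts.T).T[:, :3]

if __name__ == "__main__":
    pose = head_pose_matrix(position=[0.0, 0.0, 1.7], yaw=0.3, pitch=-0.4)
    detections = np.array([[0.1, 0.0, 2.0], [-0.2, 0.1, 3.5]])  # metres, camera frame
    print(features_to_world(detections, pose))
```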
 
Description During the course of the project we developed equipment that allows humans to walk in a virtual reality (VR) environment synthesised in real time from their real surroundings. Using this technique we were able to study human locomotion in environments in which very little visual information is present. It was remarkable that, at the time of the project, the best available computer, VR headset, and camera technology allowed only a very limited synthesis of the environment, supporting only slow locomotion. The insight provided by the work is that considerably higher-resolution sensors, considerably more computation-dense computers, and considerably more efficient algorithms are required before autonomous systems will be capable of the robust, high-speed locomotion over complex terrain observed in mammals; several decades may elapse before such systems emerge. As a result we focussed our study on human locomotion as a system in which the "algorithms" for visually guided control are manifest and experimentally tractable. Ultimately this has fed into two new areas of work: we translated our techniques to the study of human-structure interaction, developing the first VR simulator of human-bridge dynamics in Europe and establishing the technique as a way to study human interaction with moving structures; and the Urban Vision research group has formed to investigate how urban visual environments affect human locomotion behaviour and wellbeing.
Exploitation Route The potential beneficiaries of the project were engineers designing autonomous systems, scientists studying the mammalian locomotor system, and clinicians studying its pathologies. In each case there are outcomes from the work that may usefully be exploited. The most valuable future application of the work will be the development of VR techniques for studying the visual control of human locomotion. To date, our understanding of visually guided locomotion is limited to a relatively small number of rules that can be elucidated from controlled experiments in a physical environment. Used effectively, VR allows subtle and varied perturbations of locomotion to be explored in a time-efficient manner. The extreme versatility and robustness of the mammalian locomotion system arises from highly complicated and agile neural control. The techniques and protocols we developed during this project are not merely an opportunity but a requirement for significant progress in the neuroscience of optimised terrestrial locomotion.
Sectors Construction,Digital/Communication/Information Technologies (including Software),Energy,Healthcare,Pharmaceuticals and Medical Biotechnology,Transport

 
Title Realtime VR scene synthesis 
Description A synthesised visual environment containing a subset of the visual information available to human vision is presented to a human in VR. The human wears a VR headset and a head-mounted camera. Software translates the camera view into a 3D geometric model of the environment, which is projected onto a viewing plane and shown in the VR headset (a minimal sketch of this projection step follows this record). The method allows the study of locomotor performance in reduced visual environments.
Type Of Material Improvements to research infrastructure 
Provided To Others? No  
Impact none yet
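The sketch below illustrates the projection step mentioned in the record above: points of a 3D geometric model, expressed in the viewer's camera frame, are projected onto a viewing plane with a simple pinhole model before display. The focal length, image centre, and point values are assumptions for illustration, not values from the project.

```python
# Hypothetical sketch: pinhole projection of 3D model points onto a viewing plane.
import numpy as np

def project_to_view_plane(points_view, f_px=600.0, cx=320.0, cy=240.0):
    """Project Nx3 points (viewer's camera frame, z forward, metres)
    onto pixel coordinates of a viewing plane."""
    x, y, z = points_view[:, 0], points_view[:, 1], points_view[:, 2]
    valid = z > 0.1                      # discard points behind the viewer
    u = f_px * x[valid] / z[valid] + cx
    v = f_px * y[valid] / z[valid] + cy
    return np.stack([u, v], axis=1)

if __name__ == "__main__":
    model_points = np.array([[0.0, 0.3, 1.5], [0.5, 0.2, 2.0], [0.1, 0.1, -1.0]])
    print(project_to_view_plane(model_points))
```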