An Integrated Vision and Control Architecture for Agile Robotic Exploration

Lead Research Organisation: University of Bristol
Department Name: Computer Science

Abstract

Autonomous robots, capable of independent and intelligent navigation through unknown environments, have the potential to significantly increase human safety and security. They could replace people in potentially hazardous tasks, for instance search and rescue operations in disaster zones, or surveys of nuclear/chemical installations. Vision is one of the primary senses that can enable this capability; however, visual information processing is notoriously difficult, especially at the speeds required for fast-moving robots, and in particular where low weight, power dissipation and cost of the system are of concern. Conventional hardware and algorithms are not up to the task. The proposal here is to tightly integrate novel sensing and processing hardware with vision, navigation and control algorithms, to enable the next generation of autonomous robots.

At the heart of the system will be a device known as a 'vision chip'. This bespoke integrated circuit differs from a conventional image sensor in that it includes a processor within every pixel, offering unprecedented performance. The massively parallel processor array will be programmed to pre-process images, passing higher-level feature information downstream to the vision tracking algorithms and the control system. Feature extraction at the pixel level results in extremely efficient, high-speed throughput of information. Another feature of the new vision chip will be the measurement of 'time of flight' data in each pixel. This will allow the distance to a feature to be extracted and combined with the image-plane data for vision tracking, simplifying and speeding up the real-time state estimation and mapping capabilities. Vision algorithms will be developed to make optimal use of this novel hardware technology.
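As an illustration of this pixel-parallel principle, the sketch below emulates the idea in NumPy; it is not code for the actual device (which is programmed in its own kernel language), and the function name, threshold and array sizes are illustrative assumptions. It shows every pixel applying the same local operation in parallel and only sparse feature coordinates, paired with their time-of-flight depth, leaving the sensor.

```python
# Minimal sketch (assumed names and parameters) of on-sensor, pixel-parallel
# pre-processing: every pixel computes a local gradient, decides whether it is
# a feature, and only compact feature data leaves the chip.
import numpy as np

def pixel_parallel_features(intensity, depth, grad_threshold=0.2):
    """Emulate pixel-parallel pre-processing on a pixel processor array.

    intensity : HxW array of normalised pixel intensities (0..1)
    depth     : HxW array of per-pixel time-of-flight range estimates (metres)
    Returns a sparse list of (row, col, depth) features instead of a full image.
    """
    # Every pixel computes gradients from its neighbours "simultaneously".
    gy, gx = np.gradient(intensity)
    grad_mag = np.hypot(gx, gy)

    # Each pixel locally decides whether it is a feature (strong edge).
    feature_mask = grad_mag > grad_threshold

    # Only sparse feature coordinates plus their depth leave the chip,
    # reducing bandwidth from a full frame to a handful of numbers.
    rows, cols = np.nonzero(feature_mask)
    return [(int(r), int(c), float(depth[r, c])) for r, c in zip(rows, cols)]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))                  # stand-in image
    tof = 1.0 + rng.random((64, 64)) * 4.0      # stand-in depth map (1-5 m)
    feats = pixel_parallel_features(img, tof)
    print(f"{len(feats)} features extracted on-sensor (of {64 * 64} pixels)")
```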

This project will not only develop a unique vision processing system, but will also tightly integrate the control system design. Vision and control systems have traditionally been developed independently, with a downstream flow of information from sensor through to motor control. In our system, information flow will be bidirectional: control system parameters will be passed to the image sensor itself, guiding computational effort and reducing processing overheads. For example, a rotational demand passed into the control system will not only result in control actuation for vehicle movement, but will also steer optic tracking along the same path. A key component of the project will therefore be the management and control of information across all three layers: sensing, visual perception and control. Information sharing will occur at multiple rates and may be either scheduled or requested. Shared information and distributed computation will provide a breakthrough in control capabilities for highly agile robotic systems.
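The sketch below illustrates this bidirectional flow under stated assumptions: the class and parameter names (SensorLayer, Controller, apply_control_hint, pixels_per_radian) are hypothetical and stand in for the project's actual interfaces. A rotational demand issued by the controller is passed back down to the sensing layer, which uses it to predict feature motion and narrow its search window.

```python
# Illustrative sketch (hypothetical names, not project code) of bidirectional
# information flow: the controller's rotational demand is shared with the
# sensing layer to guide where computation is spent.
from dataclasses import dataclass

@dataclass
class Feature:
    x: float  # image-plane column (pixels)
    y: float  # image-plane row (pixels)

class SensorLayer:
    """Stands in for the vision chip: tracks features within a small window."""
    def __init__(self, features):
        self.features = features
        self.search_radius = 20.0  # pixels searched around each prediction

    def apply_control_hint(self, yaw_rate, dt, pixels_per_radian=300.0):
        # Use the demanded rotation to shift feature predictions along the
        # expected image motion, so the per-feature search stays small.
        shift = yaw_rate * dt * pixels_per_radian
        for f in self.features:
            f.x -= shift
        self.search_radius = 5.0  # motion predicted, so narrow the search

class Controller:
    """Simplified outer loop: issues a yaw-rate demand and shares it downward."""
    def __init__(self, sensor):
        self.sensor = sensor

    def step(self, yaw_rate_demand, dt):
        # 1. Actuate the vehicle (omitted in this sketch).
        # 2. Feed the same demand back to the sensor to guide its computation.
        self.sensor.apply_control_hint(yaw_rate_demand, dt)

if __name__ == "__main__":
    sensor = SensorLayer([Feature(120.0, 80.0), Feature(200.0, 40.0)])
    ctrl = Controller(sensor)
    ctrl.step(yaw_rate_demand=0.5, dt=0.02)  # 0.5 rad/s demand, one 50 Hz step
    print([(round(f.x, 1), f.y) for f in sensor.features], sensor.search_radius)
```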

Whilst applicable to a very wide range of disciplines, our system will be tested in the demanding field of autonomous aerial robotics. We will integrate the new vision sensors on board an unmanned air vehicle (UAV), developing a control system that fully exploits the new tracking capabilities. This will serve as a demonstration platform for the complete vision system, incorporating nonlinear algorithms to control the vehicle through agile manoeuvres and rapidly changing trajectories. Although specific vision tracking and control algorithms will be used for the project, the hardware itself and the system architecture will be applicable to a very wide range of tasks. Any application that is currently limited by tracking capabilities, particularly when combined with a rapid and demanding control challenge, would benefit from this work. We will demonstrate a step change in agile, vision-based control of UAVs for exploration, and in doing so develop an architecture with benefits in fields as diverse as medical robotics and industrial production.
 
Description We have developed novel algorithms for estimating how a camera has moved in space when the camera is of a special type that has a processor at every pixel. These new algorithms have been published at top international conferences in Computer Vision and Robotics. The interest from academic colleagues as well as from industry is very encouraging, as the new type of cameras and the algorithms we are developing on this project can have a dramatic impact on robots and AI systems that move and have to perceive in real time.
Exploitation Route We are currently being approached by colleagues from another UK university who are interested in collaborating, as well as by colleagues from another European institution.
Sectors Aerospace, Defence and Marine; Digital/Communication/Information Technologies (including Software); Electronics

URL https://sites.google.com/view/project-agile
 
Description Initial collaboration with Imperial College London 
Organisation Imperial College London
Country United Kingdom 
Sector Academic/University 
PI Contribution As we aim to expand the impact of Pixel Processor Arrays, we have supported Imperial College researchers in gaining a better understanding of the technology and of ways to program the devices with computer vision algorithms.
Collaborator Contribution At least one paper has been written and will be submitted soon.
Impact Still pending
Start Year 2019
 
Description Invited talk: Towards Integrated Perception and Control for UAVs with Pixel Processor Arrays 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact This was an invited talk at a leading workshop on agile perception for UAVs, held as part of the IROS 2017 conference in Vancouver, Canada.
Year(s) Of Engagement Activity 2017
URL http://www.seas.upenn.edu/~loiannog/workshopIROS2017uav/