An Integrated Vision and Control Architecture for Agile Robotic Exploration

Lead Research Organisation: University of Manchester
Department Name: Electrical and Electronic Engineering

Abstract

Autonomous robots, capable of independent and intelligent navigation through unknown environments, have the potential to significantly increase human safety and security. They could replace people in potentially hazardous tasks, for instance search and rescue operations in disaster zones or surveys of nuclear and chemical installations. Vision is one of the primary senses that can enable this capability; however, visual information processing is notoriously difficult, especially at the speeds required for fast-moving robots, and particularly where the weight, power dissipation and cost of the system are of concern. Conventional hardware and algorithms are not up to the task. The proposal here is to tightly integrate novel sensing and processing hardware with vision, navigation and control algorithms, to enable the next generation of autonomous robots.

At the heart of the system will be a device known as a 'vision chip'. This bespoke integrated circuit differs from a conventional image sensor in that it includes a processor within each pixel, offering unprecedented performance. The massively parallel processor array will be programmed to pre-process images, passing higher-level feature information upstream to vision tracking algorithms and the control system. Feature extraction at the pixel level results in extremely efficient, high-speed throughput of information. Another feature of the new vision chip will be the measurement of 'time of flight' data in each pixel. This will allow the distance to a feature to be extracted and combined with the image-plane data for vision tracking, simplifying and speeding up real-time state estimation and mapping. Vision algorithms will be developed to make optimal use of this novel hardware technology.
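To illustrate the idea of pixel-level pre-processing, the following is a minimal sketch, simulated with NumPy rather than the vision chip's own instruction set (which is not described here); all function and variable names are hypothetical. On the actual device each pixel's processor would perform these neighbour comparisons simultaneously, and only the sparse feature information would leave the sensor.

```python
# Illustrative simulation of pixel-parallel feature extraction (not the chip's real API).
import numpy as np

def extract_edge_features(frame: np.ndarray, threshold: float = 0.1) -> np.ndarray:
    """Return a binary edge map from a normalised grey-scale frame.

    The shift-and-subtract operations stand in for each pixel comparing its
    value with its right and lower neighbours, in parallel across the array.
    """
    dx = np.abs(np.diff(frame, axis=1, append=frame[:, -1:]))
    dy = np.abs(np.diff(frame, axis=0, append=frame[-1:, :]))
    return np.maximum(dx, dy) > threshold

def features_to_events(edge_map: np.ndarray) -> np.ndarray:
    """Only sparse feature coordinates are passed upstream, not full images."""
    return np.argwhere(edge_map)

if __name__ == "__main__":
    frame = np.random.rand(256, 256).astype(np.float32)  # stand-in for a captured frame
    events = features_to_events(extract_edge_features(frame))
    print(f"{len(events)} feature events extracted from {frame.size} pixels")
```

The point of the sketch is the data reduction: the downstream tracking and control algorithms receive a short list of feature events rather than raw pixel data, which is what makes the high frame rates and low bandwidth feasible.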

This project will not only develop a unique vision processing system, but will also tightly integrate the control system design. Vision and control systems have traditionally been developed independently, with a downstream flow of information from sensor through to motor control. In our system, the information flow will be bidirectional. Control system parameters will be passed to the image sensor itself, guiding computational effort and reducing processing overheads. For example, a rotational demand passed into the control system will not only result in control actuation for vehicle movement, but will also steer optical tracking along the same path (see the sketch below). A key component of the project will therefore be the management and control of information across all three layers: sensing, visual perception and control. Information sharing will occur at multiple rates and may be either scheduled or requested. Shared information and distributed computation will provide a breakthrough in control capabilities for highly agile robotic systems.
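As a sketch of this bidirectional flow, the snippet below shows how a commanded rotation rate could be translated into a predicted image-plane shift and a reduced region of interest for the sensor. The interface, names and the simple small-angle pinhole approximation are assumptions for illustration only, not the project's actual implementation.

```python
# Hypothetical sketch: turning a control demand into guidance for the sensor.
from dataclasses import dataclass

@dataclass
class SensorDirective:
    shift_px: tuple[int, int]       # predicted image-plane shift for the tracker
    roi: tuple[int, int, int, int]  # x, y, width, height window to concentrate processing

def directive_from_rate_command(yaw_rate_rad_s: float,
                                pitch_rate_rad_s: float,
                                dt: float,
                                focal_px: float = 300.0,
                                img_size: tuple[int, int] = (256, 256),
                                roi_half: int = 48) -> SensorDirective:
    """Predict where features will move under a commanded body rotation,
    so the sensor can concentrate effort there instead of on the full frame."""
    dx = int(round(focal_px * yaw_rate_rad_s * dt))    # small-angle approximation
    dy = int(round(focal_px * pitch_rate_rad_s * dt))
    cx, cy = img_size[0] // 2 + dx, img_size[1] // 2 + dy
    return SensorDirective(shift_px=(dx, dy),
                           roi=(cx - roi_half, cy - roi_half, 2 * roi_half, 2 * roi_half))

# Example: a 0.5 rad/s yaw demand over a 10 ms control step.
print(directive_from_rate_command(0.5, 0.0, 0.01))
```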

Whilst applicable to a very wide range of disciplines, our system will be tested in the demanding field of autonomous aerial robotics. We will integrate the new vision sensors onboard an unmanned air vehicle (UAV), developing a control system that fully exploits the new tracking capabilities. This will serve as a demonstration platform for the complete vision system, incorporating nonlinear algorithms to control the vehicle through agile manoeuvres and rapidly changing trajectories. Although specific vision tracking and control algorithms will be used for the project, the hardware and system architecture will be applicable to a very wide range of tasks. Any application that is currently limited by tracking capabilities, particularly when combined with a rapid, demanding control challenge, would benefit from this work. We will demonstrate a step change in agile, vision-based control of UAVs for exploration, and in doing so develop an architecture with benefits in fields as diverse as medical robotics and industrial production.

Planned Impact

High-technology industries have been identified as a key sector of the UK economy. Robotic and autonomous systems have in turn been singled out as technologies of particular importance for maintaining economic competitiveness in high-technology applications. We intend to develop the next generation of active vision sensors for autonomous robots, focusing on delivering systems with real-time, high-speed functionality at low power, low weight and low cost. These will have potential applications across a range of industries, including security, transportation, manufacturing, agriculture, nuclear and healthcare. Some of these applications are highlighted below.

Agile micro air vehicles, and more generally advanced vision-based navigation systems for autonomous robots, will find both civilian and military applications in reconnaissance and search and rescue operations. The system being developed will be applicable to unmanned vehicles (air, land and water based), either as a primary navigation and control system, or to enhance safety and provide additional features. Further applications include inspection (e.g. nuclear decommissioning, inspection of overhead power lines, monitoring of oil rigs or power sub-stations, agricultural surveys) and space exploration. In all these fields robots are already used, but our research will offer significant performance benefits, expanding the scope of applications. Applications of autonomous robots, from self-driving automobiles to drone-based goods delivery and robotic companions, have recently attracted both a large amount of public interest and significant industry investment. While some of these remain long-term aspirations, the companies we collaborate with on this project see a more immediate (5-10 year horizon) use of our technologies in their application domains.

There is widespread expectation of autonomous robots entering everyday life, but for many applications not just the performance but also the system cost, size and power consumption are currently prohibitive. The combination of performance and small size/low cost of our proposed system will make it suitable for cost-sensitive applications, from consumer robotics to toys. Similar properties are also needed for automated video surveillance (especially distributed smart cameras, e.g. crowd and traffic monitoring, early forest-fire detection, or fall monitoring for the elderly) and in vehicular applications (e.g. parking assistance, collision avoidance and driver alertness). Low-power intelligent sensing is also a prerequisite for the much talked-about "internet of things", "ambient intelligence" and "cyber-physical systems" applications, and for portable and wearable systems.

The unparalleled high-speed potential of our near-sensor vision processing approach will be a key advantage in manufacturing process control, where an advanced machine vision system provides the opportunity to respond to and control high-speed events. These might include component manufacture and assembly, laser welding, control of industrial robots, and high-speed metrology for sorting or visual inspection for quality assurance.

Further afield, and on longer time-scales, potential applications of the developed technologies can be identified in fields ranging from nuclear research to healthcare. For example, high-speed sensor-level tracking systems based on the technologies we will develop could be used for beam control in particle accelerators, to reduce the highly redundant data typically collected by such instruments, or to locate and track radiation instability in nuclear fusion research facilities. In healthcare, ultra-high-speed vision systems could be used to improve the accuracy and reduce the time-to-diagnosis of cancer screening, for example by identifying circulating tumour cells in blood samples of several million cells.
 
Description The work contributed to the development of image sensors with processor circuitry embedded into the pixels of the sensor array, and to their applications in agile robotics and beyond. The novel hardware design (chip design and system-level work) has produced usable systems, with full software support, that have been exercised in applications and made available to existing and new collaborators. We were able to demonstrate sophisticated vision processing on the sensor, including complete light-to-decision operation where all processing is carried out on the sensor device itself. We demonstrated how the speed and efficiency of these sensors can be exploited in the control of autonomous agile drones, including navigation, obstacle detection, and flight control tasks. We also explored implementations of deep learning (neural networks) and other complex algorithms on the focal plane. The work paves the way towards future research on visual computing for edge devices, and indicates significant potential for improving the performance and efficiency of vision systems using the developed methods of on-chip, pixel-level processing.
Exploitation Route The work paves the way towards practical applications of vision sensors with pixel-parallel processor arrays and the development of future vision processor devices. The hardware systems and methods developed in the project will continue to be used in research on pixel-parallel processor arrays in our laboratories, by our collaborators, and by the research community in general. We have made systems available on loan to other research groups. The work also has commercial potential.
Sectors Digital/Communication/Information Technologies (including Software), Electronics

URL https://sites.google.com/view/project-agile/home
 
Description The vision sensors and smart camera systems developed in the project have significant commercial potential. In addition to robotics, they can be used in a range of applications, such as industrial machine vision, security cameras, smart buildings, VR systems, gesture recognition, automotive systems, and many other edge computing applications where a low-cost, energy-efficient embedded vision system is required. A licence of the SCAMP technology to a Chinese company that was planning to develop the technology commercially through a UK-based subsidiary was blocked by the UK government. A new spin-out company is now being formed in the UK to commercialise this technology.
First Year Of Impact 2022
Sector Digital/Communication/Information Technologies (including Software), Electronics
Impact Types Economic

 
Description ETH Zurich 
Organisation ETH Zurich
Country Switzerland 
Sector Academic/University 
PI Contribution We provide expertise and vision sensor hardware.
Collaborator Contribution Expertise in vision algorithms and development of software.
Impact Research papers on applications of vision sensors
Start Year 2015
 
Description Stanford University (Julien Martel, Gordon Wetzstein) 
Organisation Stanford University
Country United States 
Sector Academic/University 
PI Contribution Development of pixel-parallel vision sensor hardware and software
Collaborator Contribution Development of computer vision and computational photography algorithms for pixel-parallel vision sensors
Impact Conference publications, demonstrations
Start Year 2019
 
Description University of Bristol (W.Mayol-Cuevas, T.Richardson) 
Organisation University of Bristol
Country United Kingdom 
Sector Academic/University 
PI Contribution This was a collaborative project between the University of Manchester and the University of Bristol.
Collaborator Contribution The researchers at University of Bristol provided expertise in computer vision and UAV applications.
Impact Multiple research papers and technology demonstrations
Start Year 2015
 
Description Vision Institute, Paris 
Organisation Pierre and Marie Curie University - Paris 6
Country France 
Sector Academic/University 
PI Contribution Currently I am a Visiting Researcher in the Natural Computation group at the Vision Institute, working on applications of vision sensors and neuromorphic computation.
Collaborator Contribution I am based at the Vision Institute for 2017-18, interacting with the researchers there.
Impact publications in progress (microparticle tracking)
Start Year 2017
 
Title SCAMP-7 Vision Chip & System 
Description A vision sensor (SCAMP-7 vision chip) and associated hardware/software system. 
Type Of Technology Physical Model/Kit 
Year Produced 2020 
Impact This is a follow-up to the SCAMP-5 system. 
 
Title Scamp5d Vision System 
Description This is a smart camera system, based on the SCAMP-5 vision chip. The system provides a complete software/hardware development kit, and can be deployed to develop low-power high-performance machine vision solutions. 
Type Of Technology Physical Model/Kit 
Year Produced 2018 
Impact The system is provided free of charge (on loan) to various collaborating institutions, including the University of Bristol, Imperial College London, Stanford University, ETH Zurich and Sorbonne University, which use it to develop their algorithms and to explore the use of these devices in various applications in robotics, machine intelligence and vision.
URL https://scamp.gitlab.io/scamp5d_doc/
 
Description Press release on on-sensor CNN implementation 
Form Of Engagement Activity A press release, press conference or response to a media enquiry/interview
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact A press release was made, which was subsequently reported in various media and online outlets (scitechdaily.com, photonics.com, etc.).
Year(s) Of Engagement Activity 2020
URL https://scitechdaily.com/intelligent-cameras-that-can-learn-and-understand-what-they-are-seeing/