Reflexive robotics using asynchronous perception

Lead Research Organisation: University of Surrey
Department Name: Vision, Speech and Signal Processing (CVSSP)

Abstract

This project will develop a fundamentally different approach to visual perception and autonomy, in which the concept of an image itself is replaced with a stream of independently firing pixels, similar to the unsynchronised biological cells of the retina. Recent advances in computer vision and machine learning have enabled robots which can perceive, understand, and interact intelligently with their environments. However, this "interpretive" behaviour is just one of the fundamental models of autonomy found in nature. The techniques developed in this project will exploit recent breakthroughs in instantaneous, non-image-based visual sensing to enable entirely new types of autonomous system. The corresponding step change in robotic capabilities will impact the manufacturing, space, autonomous vehicle and medical sectors.

If we perceive an object approaching at high speed, we instinctively try to avoid it without taking the time to interpret the scene. It is not important to understand what the object is or why it is approaching us. This "reflexive" behavioural model is vital for reacting to time-critical events: in such cases, the situation has often already been resolved by the time we become consciously aware of it. Reflexive behaviour is also a vital component of continuous control problems. We are reluctant to take our eyes off the road while driving, as we know that we will rapidly begin to veer off course without a constant cycle of perception and correction. We also find it far easier to pick up and manipulate objects while looking at them, rather than relying entirely on tactile sensing.
Unfortunately, conventional visual sensing hardware requires enormous bandwidth: megapixel cameras produce millions of bytes per frame. As a result, the temporal sampling rate is low, reaction times are long, and reflexive adjustments based on visual data become impractical.
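
As a rough illustration of why frame-based sensing struggles here (the resolution, bit depth and link bandwidth below are figures assumed purely for the example, not values from this proposal), even a modest sensor saturates a typical data link at only a few tens of frames per second:

    # Illustrative back-of-envelope figures only: the resolution, bit depth and
    # link bandwidth are assumptions chosen for the example, not proposal values.
    width, height = 1280, 1024            # ~1.3 megapixel sensor
    bytes_per_pixel = 1                   # 8-bit greyscale
    link_bytes_per_second = 40e6          # assume ~40 MB/s of usable bandwidth

    frame_bytes = width * height * bytes_per_pixel
    max_fps = link_bytes_per_second / frame_bytes
    worst_case_delay_s = 1.0 / max_fps    # a change may occur just after a frame is read out

    print(f"frame size:       {frame_bytes / 1e6:.2f} MB")
    print(f"achievable rate:  {max_fps:.1f} frames per second")
    print(f"worst-case delay: {worst_case_delay_s * 1e3:.1f} ms before the change is even sampled")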

We finally have the opportunity to overturn the paradigm of vision being impractical for low-latency problems, and to facilitate a step change in robotic capabilities, thanks to recent advances in visual sensor technology. Asynchronous visual sensors (also known as event cameras) eschew regular sensor-wide updates (i.e. images). Instead, every pixel independently and asynchronously transmits a packet of information as soon as it detects an intensity change from its previous transmission. This drastically reduces data bandwidth by avoiding the redundant transmission of unchanged pixels. More importantly, because these packets are transmitted immediately, the sensor typically provides a latency reduction of three orders of magnitude (from around 30ms down to 30us) between an event occurring and it being perceived.
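
To make this concrete, the following sketch simulates a single asynchronous pixel, assuming the common (x, y, timestamp, polarity) event representation; the class names, field names and contrast threshold are illustrative assumptions rather than details of any particular sensor or of this proposal.

    from dataclasses import dataclass

    @dataclass
    class Event:
        x: int         # pixel column
        y: int         # pixel row
        t_us: int      # microsecond timestamp of the change
        polarity: int  # +1 = brighter, -1 = darker

    class EventPixel:
        """Each pixel remembers only the intensity it last reported, and fires an
        event the moment the current intensity differs by more than a threshold."""

        def __init__(self, x, y, threshold=0.15):
            self.x, self.y = x, y
            self.threshold = threshold
            self.last_reported = None

        def observe(self, intensity, t_us):
            if self.last_reported is None:
                self.last_reported = intensity
                return None
            delta = intensity - self.last_reported
            if abs(delta) >= self.threshold:
                self.last_reported = intensity
                return Event(self.x, self.y, t_us, +1 if delta > 0 else -1)
            return None  # unchanged pixels transmit nothing, saving bandwidth

A downstream consumer receives these events one at a time, as they occur, rather than waiting for the next complete image to be assembled and transmitted.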

This advance in visual sensing is dramatic, but we are desperately in need of a commensurate revolution in robotic perception research. Without the concepts of the image or synchronous sampling, decades of computer vision and machine learning research are rendered unusable with these sensors. This project will provide the theoretical foundations for the robot perception revolution, by developing novel asynchronous paradigms for both perception and understanding. Mirroring biological systems, this will comprise a hierarchical perception framework encompassing both low-level reflexes and high-level understanding, in a manner reminiscent of modern deep learning. However, unlike deep learning, pixel-update events will occur asynchronously and will propagate independently through the system, hence maintaining extremely low latency.
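
One way to picture such a hierarchy (a minimal sketch under an assumed structure, not the architecture this project will ultimately develop) is a fast reflex layer and a slower interpretive layer, both fed independently by every incoming event:

    from collections import namedtuple

    # Assumed event representation: (x, y, microsecond timestamp, polarity).
    Event = namedtuple("Event", ["x", "y", "t_us", "polarity"])

    class ReflexLayer:
        """Low-level reflex: reacts to individual events with no image reconstruction."""

        def __init__(self, danger_zone, on_reflex):
            self.danger_zone = danger_zone  # e.g. pixels covering the approach path
            self.on_reflex = on_reflex      # callback invoked immediately

        def process(self, event):
            if (event.x, event.y) in self.danger_zone:
                self.on_reflex(event)       # act now; never wait for a frame

    class UnderstandingLayer:
        """High-level understanding: aggregates recent events over a longer window."""

        def __init__(self, window_us=10_000):
            self.window_us = window_us
            self.recent = []

        def process(self, event):
            self.recent.append(event)
            self.recent = [e for e in self.recent if event.t_us - e.t_us <= self.window_us]
            # ...slower recognition / scene interpretation would run on self.recent...

    def dispatch(event, layers):
        """Each event propagates to every layer independently, so the reflex path
        is never blocked by the slower interpretive path."""
        for layer in layers:
            layer.process(event)

In this arrangement the reflex path's latency is bounded only by the cost of handling a single event, while the interpretive layer is free to take as long as it needs.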

The sensor technology is still in its early trial phase, and few researchers are exploring its implications for perception. No group, nationally or internationally, is currently making a concerted effort in this area. Hence, this project will not only lay the groundwork for a plethora of new biologically inspired "reflexive robotics" applications, but will also support the development of a unique new research team, placing the UK at the forefront of this exciting field.

Planned Impact

This research is a disruptive technology spanning several of the fastest-growing research fields: Robotics, Computer Vision and AI. Consequently, its impact will be felt across a broad range of areas.
For society as a whole, the major impact will be improved productivity, leading to a corresponding improvement in quality of life. There are also likely to be societal health benefits from the increased automation of hazardous jobs, as well as a strengthening of the economy.

In addition to direct economic growth through increased automation, another potential benefit of building a dedicated UK team in a newly emerging area with such promise is a strengthening of our international leverage. Post-Brexit, the ability to bring some truly unique expertise to the negotiating table is invaluable in the crowded Robotics and Autonomous Systems marketplace. The Pathways to Impact outlines a number of techniques which will ensure that the UK has the knowledge, and the people, to become the commercial centre for this technology (beyond this proposal and the PI's research team).

A list of more specific industrial sectors which will be impacted by this research is provided below. For each of the currently identified areas, the PI has already begun building relationships with potential industrial partners (see the Strategic Advisory Board in the Pathways to Impact).
- Autonomous vehicles - driving is an obvious area where the ability to take reflexive, low-latency actions in an emergency is vital. Perceptual techniques which specifically account for such low-latency emergencies will significantly reduce the fatality rates of future autonomous vehicles. Similarly, consumer convenience will be better served by reliable autonomous delivery systems which are able to react appropriately to dangerous situations.
- Manufacturing - as discussed under National Importance, the UK manufacturing sector has some of the worst productivity ratings, and lowest levels of automation, in the EU. This is largely due to the large number of SME manufacturers, whose production runs are too limited to warrant expensive robotic integration. Improving the intelligence and reactivity of robotic automation systems, coupled with suitable hardware, will loosen the integration requirements and make automation feasible for a wider range of small-batch manufacturers. Apart from the economic benefits, this will have additional societal impacts: greater consumer choice and more "personalised" manufacturing.
- Space robotics - as with autonomous vehicles, this is a high-stakes environment where accidents come at an enormous cost, and the ability to react rapidly to emergencies is invaluable. Reliable space robotics systems will drastically reduce the cost of many space missions by removing the need for life support systems. In addition to enabling greater exploitation of space for scientific and industrial purposes, this research will reduce the risk to the lives of human astronauts.

In these industrial sectors, the impact of the proposed research is clearly apparent. However, some of the more speculative impact areas may also prove to be the most exciting. Developing and exploring the properties of bio-inspired asynchronous perception systems could have a profound impact on our understanding of the biological systems they mimic. Changes in our understanding of perception could affect the way we approach learning and teaching. It may also inform how we deal with certain disabilities and psychological disorders. As with the industrial areas above, the Strategic Advisory Board has been designed to help ensure these potential impacts are appropriately explored.
