The Internet of Silicon Retinas (IoSiRe): Machine to machine communications for neuromorphic vision sensing data

Lead Research Organisation: University College London
Department Name: Electronic and Electrical Engineering

Abstract

Abstracts are not currently available in GtR for all funded research. This is normally because the abstract was not required at the time of proposal submission, but may be because it included sensitive information such as personal details.

Publications


Amjad J (2021) Deep Learning Model-Aware Regularization With Applications to Inverse Problems in IEEE Transactions on Signal Processing

Anarado I (2017) Mitigating Silent Data Corruptions in Integer Matrix Products: Toward Reliable Multimedia Computing on Unreliable Hardware in IEEE Transactions on Circuits and Systems for Video Technology

Bi Y (2020) Graph-Based Spatio-Temporal Feature Learning for Neuromorphic Vision Sensing in IEEE Transactions on Image Processing

Chadha A (2019) Video Classification With CNNs: Using the Codec as a Spatio-Temporal Activity Sensor in IEEE Transactions on Circuits and Systems for Video Technology

Chadha A (2019) Improved Techniques for Adversarial Discriminative Domain Adaptation in IEEE Transactions on Image Processing

Jubran M (2020) Rate-Accuracy Trade-Off in Video Classification With Deep Convolutional Neural Networks in IEEE Transactions on Circuits and Systems for Video Technology

 
Description Unlike existing projects in IoT visual sensing that enhance compression or transmission aspects for conventional video data, IoSiRe took a foundational approach by reconsidering the essentials of visual sensing and deriving new representation, compaction and transmission schemes based on recently developed dynamic vision sensing (DVS) technologies. This complements existing work by other academic and industrial groups in the UK and worldwide in IoT, sensing and low-power systems.

We have now derived results showing that, at the same accuracy (e.g., confusion-matrix values for classification, or mean average precision (mAP) for visual search), we obtain a 10 to 100-fold reduction in energy consumption and delay compared with conventional video streaming, thereby enabling applications that would be impossible with conventional video-based IoT systems. As set out in our proposal, this stems from the aggregate of: (i) up to a 10-fold reduction in sensing power from DVS versus conventional cameras; (ii) up to a 5-fold reduction in bandwidth due to T1.2 (in comparison with video coding); (iii) a 2 to 5-fold reduction in redundant transmissions due to T2.2, T2.3, T3.1 and T3.2 (fountain coding, adaptive modulation and full-duplex capabilities, and combining their advantages via adaptive NET-application co-learning); (iv) up to a 3-fold reduction in the traffic volume sent to the cloud via the adaptive edge processing of WP3. A back-of-the-envelope composition of these factors is sketched below.
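As an illustration of how factors (ii)-(iv) can compose into the quoted 10 to 100-fold range, the following minimal Python sketch multiplies the radio-side factors at their conservative and optimistic ends. Treating the stages as fully multiplicative is an assumption made here for illustration only, and the sensing-power saving (i) compounds further on the energy side.

    # Back-of-the-envelope composition of the per-stage factors quoted above.
    # Assumption (for illustration only): the radio-side reductions compose
    # multiplicatively; real gains depend on the workload and deployment.
    bandwidth = 5               # (ii) up to 5-fold vs. video coding (T1.2)
    tx_low, tx_high = 2, 5      # (iii) 2 to 5-fold fewer redundant transmissions
    edge_low, edge_high = 1, 3  # (iv) up to 3-fold less cloud-bound traffic (WP3)

    conservative = bandwidth * tx_low * edge_low    # 10-fold
    optimistic = bandwidth * tx_high * edge_high    # 75-fold, i.e. order of 100
    print(conservative, optimistic)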
Exploitation Route The possibility of including the practical outcomes of this research in future chip designs by MediaTek and iniLabs will be assessed, along with commercial licensing and further collaboration. Thales can also assess the feasibility of including DVS and M2M communications in its roadmap for visual surveillance and monitoring systems. Our DVS-IoT system simulations/emulations may also be deployed to advance the LinkIt development platform and the EDA design automation software offered to this project by MediaTek and Keysight, respectively. Beyond such activities, by open-sourcing the derived software for the artificial generation of DVS triggering events from conventional frame-based video footage (WP1), large video datasets that have been manually annotated for classification and retrieval tests can be converted into DVS streams. This will substantially facilitate the training of deep neural networks for different contexts and can become a catalyst for further R&D in the field. A minimal sketch of the frame-to-event principle follows.
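For readers unfamiliar with the principle behind such emulation, here is a minimal Python sketch (not the released PIX2NVS algorithm, whose implementation details differ): a DVS pixel emits an ON or OFF event whenever its log-intensity changes by more than a contrast threshold relative to the last event it fired.

    # Minimal frame-to-event conversion sketch in the spirit of PIX2NVS
    # (illustrative only; the released tool's exact algorithm differs).
    import numpy as np

    def frames_to_events(frames, threshold=0.15, eps=1e-3):
        """frames: iterable of grayscale images in [0, 1] (H x W float arrays).
        Yields (t, y, x, polarity) tuples, with t the frame index."""
        it = iter(frames)
        ref = np.log(next(it) + eps)          # per-pixel reference log-intensity
        for t, frame in enumerate(it, start=1):
            cur = np.log(frame + eps)
            diff = cur - ref
            for polarity, mask in ((+1, diff >= threshold), (-1, diff <= -threshold)):
                ys, xs = np.nonzero(mask)
                for y, x in zip(ys, xs):
                    yield (t, y, x, polarity)
                # reset the reference only where events fired (DVS-like behaviour)
                ref[mask] = cur[mask]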
Sectors Aerospace, Defence and Marine; Digital/Communication/Information Technologies (including Software)

URL https://github.com/pix2nvs
 
Description The project has created two important outcomes that have begun to receive significant attention: 1) PIX2NVS, an emulator developed and released as open source, which is now used widely by researchers in academia and industry to generate neuromorphic vision sensing data from conventional video. The project is available at https://github.com/PIX2NVS/PIX2NVS and the related paper has received significant attention. 2) Related datasets and methods for neuromorphic vision-based action recognition (NVS2Graph), available at https://github.com/PIX2NVS/NVS2Graph. These have also been widely used, both as an open-source method and as a dataset; for example, the paper received over 50 citations from academic and industrial research groups in the two years after its release. A hedged sketch of the event-to-graph construction follows.
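As a rough illustration of the kind of graph construction used for neuromorphic action recognition (a sketch only; the released NVS2Graph code's sampling strategy, radius and feature choices may differ), sampled events become graph nodes carrying their polarity, and edges connect events falling within a normalised space-time radius:

    # Hedged sketch: event stream -> spatio-temporal graph.
    import numpy as np

    def events_to_graph(events, radius=3.0, time_scale=1e-3, max_nodes=512, seed=0):
        """events: array of (t, y, x, polarity) rows.
        Returns (nodes, edges): node feature rows and (src, dst) index pairs."""
        rng = np.random.default_rng(seed)
        ev = np.asarray(events, dtype=float)
        if len(ev) > max_nodes:                  # subsample to bound graph size
            ev = ev[rng.choice(len(ev), max_nodes, replace=False)]
        # space-time coordinates, with time rescaled to be commensurate with pixels
        coords = np.stack([ev[:, 0] * time_scale, ev[:, 1], ev[:, 2]], axis=1)
        d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
        src, dst = np.nonzero((d <= radius) & (d > 0))  # radius-neighbourhood edges
        return ev, np.stack([src, dst], axis=1)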
First Year Of Impact 2021
Sector Digital/Communication/Information Technologies (including Software)
Impact Types Economic

 
Description EPSRC CASE studentship, funded by Media Research Partners (trading as The Media Institute)
Amount £30,878 (GBP)
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 08/2017 
End 08/2021
 
Description Enabling Visual IoT Applications with Advanced Network Coding Algorithms
Amount € 195,454 (EUR)
Funding ID 750254 
Organisation European Commission 
Sector Public
Country European Union (EU)
Start 01/2018 
End 01/2020
 
Description Leverhulme Trust Senior Research Fellowship
Amount £51,380 (GBP)
Funding ID LTSRF1617/13/28 
Organisation The Royal Society 
Department Royal Society Leverhulme Trust Senior Research Fellowship
Sector Charity/Non Profit
Country United Kingdom
Start 09/2017 
End 09/2018