Scalable Information Fusion: Adaptivity for Complex Environments and Secure Data
Lead Research Organisation: University of Bristol
Department Name: Electrical and Electronic Engineering
Abstract
Visual analysis by human operators or service personnel is widely acknowledged to benefit from a fused representation, in which image or video information from different spectral bands is combined into a single representation. To provide maximum utility, fused data, or its constituent components, must be delivered in a timely manner, must facilitate simple and flexible processing, and must be robust to loss and network congestion.

Non-infrastructure-based Mobile Ad-Hoc Networks are emerging as suitable platforms for exchanging and fusing real-time multi-sensor content. Such networks are characterised by highly dynamic transmission routes and high path-outage probabilities. They exemplify the type of complex, heterogeneous end-to-end transmission environments that will be commonly encountered in future military scenarios. The low-bandwidth, noisy nature of the physical channel in many sensor networks represents the most serious challenge to implementing the digital battlefield of the future. One of the key challenges in such complex networking environments is the need to transport and fuse real-time video reliably. Video is acknowledged to be inherently difficult to transmit, and this is compounded by the need to support multiple sources, to aid fusion and situational awareness, while maintaining data security. We will focus our work on embedded video bitstreams (MPEG-4 SVC), which offer scalability and enhanced flexibility for adaptation to varying channel types, interference levels, network structures and content types. These obviate the need for highly inefficient video transrating processes and instead present a more tractable requirement in the form of dynamic bitstream management.

A multi-source approach to streaming is proposed which will support video fusion in a bandwidth-efficient manner while having the potential to significantly increase the robustness of real-time transmission in complex heterogeneous networks. Source coding and fusion will be based on the concept of scalability using an embedded bitstream. This means that the source need only be encoded once and that the coded representation can be truncated to support multiple diverse terminal types and to provide inherent congestion management without feedback (a minimal illustration is sketched below). Such a system must be designed to maintain optimum fusion performance, and hence intelligibility, in the presence of bitstream truncation. The potential advantages of this scheme include:
- A joint framework for scalable fusion and compression supporting both lossless and lossy representations.
- Flexibility for optimisation depending on content type and application.
- Graceful degradation: the capability of the fused video bitstream to adapt to differing terminal types and dynamic network conditions.
- Error resilience: the structure of the code stream can aid subsequent error-correction systems, alleviating catastrophic decoding failures.
- Secure delivery: the ability to design encryption schemes which support truncation.
- Region-of-Interest coding: supporting definition of ROIs for priority transmission.
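To make the embedded-bitstream idea concrete, the following is a minimal Python sketch of feedback-free rate adaptation by truncation. The `EmbeddedFrame` container and the layer sizes are hypothetical illustrations, not part of any standard codec API; the point is that, because the stream is embedded with the most important data first, any prefix taken on a layer boundary is itself a decodable representation.

```python
# Minimal sketch: feedback-free rate adaptation of an embedded bitstream.
# 'EmbeddedFrame' and the layer sizes below are illustrative assumptions,
# not a real codec interface.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class EmbeddedFrame:
    """One coded frame: quality layers in decreasing order of importance."""
    layers: List[bytes]  # layers[0] = base layer, the rest = enhancements

def truncate_to_budget(frame: EmbeddedFrame, budget_bytes: int) -> Tuple[bytes, int]:
    """Keep whole layers, most important first, until the budget is spent.

    No transrating and no receiver feedback are needed: truncation alone
    yields a lower-rate but still decodable stream.
    """
    out = bytearray()
    kept = 0
    for layer in frame.layers:
        if len(out) + len(layer) > budget_bytes:
            break
        out.extend(layer)
        kept += 1
    return bytes(out), kept

# Example: a 2 kB base layer plus three progressively larger enhancements.
frame = EmbeddedFrame(layers=[bytes(2048), bytes(4096), bytes(8192), bytes(16384)])
for budget in (3000, 20000, 64000):
    payload, kept = truncate_to_budget(frame, budget)
    print(f"budget={budget:6d} B -> kept {kept} layer(s), sent {len(payload):6d} B")
```

The same truncation can be applied independently per receiver, which is how a single encoding serves multiple diverse terminal types.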
Publications

Hill P (2013) Scalable video fusion
Hill P (2014) Dual-tree complex wavelet coefficient magnitude modelling using the bivariate Cauchy-Rayleigh distribution for image denoising, in Signal Processing
Hill P (2011) Scalable fusion using a 3D dual tree wavelet transform
Loza A (2013) Automatic contrast enhancement of low-light images based on local statistics of wavelet coefficients, in Digital Signal Processing
Description | This project introduced a new framework for joint scalable compression and fusion of video content in the compressed domain. First, it demonstrated that compressed-domain fusion is impossible for conventional video codecs owing to the drift introduced by multiple sets of interacting prediction loops. The proposed framework overcomes this by using a prediction-free compression technique based on a 3D Dual-tree Discrete Wavelet Transform (3D-DDWT), together with iterative noise shaping, a novel non-expansive symmetrical wavelet boundary-extension method and an efficient bitplane encoder. Fusion in the 3D-DDWT domain has been demonstrated to retain critical salient spatial and temporal information from all input sources. The new fusion method offers performance equivalent to the best frame-by-frame approach while eliminating the associated temporal fusion artefacts. This system enables, for the first time, multi-source video fusion with the capability to create an embedded and quality-scalable bitstream, thus facilitating a flexible and reliable approach for delivery of fused content under dynamic network conditions. |
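As a rough illustration of transform-domain fusion, the sketch below applies the classic max-magnitude rule to two registered frames, using an ordinary 2D DWT from PyWavelets as a stand-in for the project's shift-invariant 3D-DDWT. The fusion rule and library choice are assumptions for illustration, not the project's exact method.

```python
# Sketch of wavelet-domain fusion (max-magnitude rule). The project fuses
# in a 3D Dual-tree DWT over space and time; this stand-in fuses two
# single grayscale frames with a plain 2D DWT (pip install PyWavelets).

import numpy as np
import pywt

def fuse_frames(a: np.ndarray, b: np.ndarray, wavelet="db2", levels=3) -> np.ndarray:
    """Fuse two registered, same-size grayscale frames in the wavelet domain."""
    ca = pywt.wavedec2(a.astype(float), wavelet, level=levels)
    cb = pywt.wavedec2(b.astype(float), wavelet, level=levels)

    # Approximation (lowpass) band: averaging preserves overall luminance.
    fused = [(ca[0] + cb[0]) / 2.0]

    # Detail (highpass) bands: keep the larger-magnitude coefficient, so the
    # salient edges/features of whichever source expresses them more
    # strongly survive into the fused result.
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(
            np.where(np.abs(x) >= np.abs(y), x, y)
            for x, y in ((ha, hb), (va, vb), (da, db))
        ))
    return pywt.waverec2(fused, wavelet)

# Example with synthetic visible/IR-like inputs of the same size.
rng = np.random.default_rng(0)
visible = rng.standard_normal((128, 128))
infrared = rng.standard_normal((128, 128))
print(fuse_frames(visible, infrared).shape)  # (128, 128)
```

Extending the same rule across the temporal dimension of a 3D transform is what removes the frame-by-frame flicker artefacts described above.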
Exploitation Route | The findings would be applicable in defence or security applications where multi-sensor data is communicated over a network with dynamic bandwidth variations. In such cases it is often necessary to change the video bandwidth while retaining security; conventionally this is done by decrypting, transcoding and then re-encrypting. Our system instead creates an embedded and quality-scalable bitstream that can be truncated directly (as sketched below), facilitating a flexible and reliable approach for delivery of fused content under dynamic network conditions. |
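As a hedged illustration of how an embedded bitstream can retain security under truncation, the sketch below encrypts each quality layer independently with AES-CTR (Python `cryptography` package), so a congested relay can drop enhancement layers at layer boundaries without holding the key. The framing and nonce handling are illustrative assumptions, not the project's actual scheme.

```python
# Sketch: layer-wise encryption that survives truncation, avoiding the
# conventional decrypt -> transcode -> re-encrypt cycle. Illustrative
# only; real deployments need authenticated key/nonce management.

import os
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

KEY = os.urandom(16)  # shared between sender and authorised receiver

def encrypt_layer(layer: bytes) -> bytes:
    """Encrypt one quality layer under its own random CTR nonce."""
    nonce = os.urandom(16)
    enc = Cipher(algorithms.AES(KEY), modes.CTR(nonce)).encryptor()
    return nonce + enc.update(layer) + enc.finalize()

def decrypt_layer(blob: bytes) -> bytes:
    """Recover one layer; works for any subset of delivered layers."""
    nonce, body = blob[:16], blob[16:]
    dec = Cipher(algorithms.AES(KEY), modes.CTR(nonce)).decryptor()
    return dec.update(body) + dec.finalize()

# Sender: layers are encrypted separately, so boundaries stay visible to
# the network even though the payloads are opaque.
layers = [b"base-layer....", b"enhancement-1.", b"enhancement-2."]
packets = [encrypt_layer(p) for p in layers]

# Congested relay: sheds the last enhancement layer without any key.
delivered = packets[:2]

# Receiver: decrypts whatever layers survived.
print([decrypt_layer(p) for p in delivered])
```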
Sectors | Aerospace, Defence and Marine; Digital/Communication/Information Technologies (including Software); Healthcare; Security and Diplomacy; Transport |
URL | http://www.bristol.ac.uk/vi-lab/projects/udtcwt/ |
Description | The work was performed in collaboration with General Dynamics, who performed an assessment of the technology. The constraint on deployment was the use of a non-standardised codec. The methods developed were subsequently extended to mitigate atmospheric distortions of images, in collaboration with DSTL and GDUK. The work was identified by the US DoD as the best candidate for a new surveillance system; unfortunately, funding for this system was withdrawn prior to adoption. |
Sector | Aerospace, Defence and Marine |
Description | DSTL |
Amount | £36,000 (GBP) |
Funding ID | ITP UKG 6784 |
Organisation | Defence Science & Technology Laboratory (DSTL) |
Sector | Public |
Country | United Kingdom |
Start | 06/2009 |
End | 12/2009 |
Description | DSTL |
Amount | £36,000 (GBP) |
Funding ID | ITP UKG 6784 |
Organisation | Defence Science & Technology Laboratory (DSTL) |
Sector | Public |
Country | United Kingdom |
Start | 04/2010 |
End | 02/2011 |
Description | EPSRC IAA CLEAR |
Amount | £20,000 (GBP) |
Organisation | University of Bristol |
Sector | Academic/University |
Country | United Kingdom |
Start | 08/2016 |
End | 08/2017 |
Description | EU H2020 - Multidrone |
Amount | €579,000 (EUR)
Funding ID | 731667 |
Organisation | European Commission H2020 |
Sector | Public |
Country | Belgium |
Start | 01/2017 |
End | 12/2019 |
Description | General Dynamics UK Ltd |
Amount | £34,000 (GBP) |
Funding ID | UoB/HH1 |
Organisation | General Dynamics |
Sector | Private |
Country | United Kingdom |
Start | 03/2008 |
End | 12/2008 |
Description | General Dynamics UK Ltd |
Amount | £34,000 (GBP) |
Funding ID | UoB/HH1 |
Organisation | General Dynamics |
Sector | Private |
Country | United Kingdom |
Start |
Description | Fusion Collaboration with RFEL |
Organisation | RFEL |
Country | United Kingdom |
Sector | Private |
PI Contribution | On the basis of this grant, RFEL approached UoB to provide research direction and consultancy on optimised architectures and algorithms for video fusion, in support of their R&D on optimised fusion algorithms and architectures.
Start Year | 2012 |