Adaptive Hardware Systems with Novel Algorithmic Design and Guaranteed Resource Bounds

Lead Research Organisation: Heriot-Watt University
Department Name: Sch of Engineering and Physical Science


Digital processing of signals and images is performed in many commercial electronic devices, including computer networks, mobile telephones and computer vision systems. A steady growth in demand for high functionality and reliability in devices such as mobile phones means that many different types of computer processor are used, from the general-purpose processors found in personal computers to chips designed to perform very specific tasks. However, at present there are no efficient design techniques that allow complex devices to be built up from a range of different computer processors. This means that current designs are often inefficient in terms of power usage and responsiveness. Thus, a key requirement for the long-term exploitation of signal and image processing technologies lies in developing the increasingly complex processors that are required for high performance.

This project addresses that need. It represents a rich inter-disciplinary collaboration between electronic engineers and computer scientists, collectively aimed at overcoming fundamental challenges in high-performance computing applications. The proposed research builds on recent world-leading work in signal and image processing methods, techniques to assess the performance and complexity of computer software, and complex processor design techniques. A successful outcome to this research will allow new and efficient implementations of complex signal processing algorithms to support a diverse range of applications.



Wallace A (2010) Full Waveform Analysis for Long-Range 3D Imaging Laser Radar in EURASIP Journal on Advances in Signal Processing

Grov G (2011) Hume box calculus: robust system development through software transformation in Higher-Order and Symbolic Computation

Aswad M (2011) Low pain vs no pain multicore Haskell

Ye J (2013) Parallel Bayesian inference of range and reflectance from LaDAR profiles in Journal of Parallel and Distributed Computing

Limprasert W (2013) Real-Time People Tracking in a Camera Network in IEEE Journal on Emerging and Selected Topics in Circuits and Systems

Al Zain A (2009) Towards Hume SIMD Vectorisation

Description In this programme, which is part of a collaboration between four universities (Heriot-Watt, Edinburgh, Belfast and St Andrews), we explored the formally motivated development of representative complex reconfigurable algorithms on a heterogeneous multi-processor platform, to enable fast, efficient design of robust systems with predictable performance.
We implemented new adaptive and parallel software to process full waveform LiDAR (Light Detection And Ranging) data, which has become a crucial source of information for such diverse applications as remote sensing of the environment, remote target classification, and vehicle navigation and perception.

(1) Formally Motivated Software Development:
Sensor platforms, like all contemporary computer systems, are made up of multiple, closely connected hardware processing elements, typically multiple CPU cores complemented by single instruction multiple data (SIMD) processors oriented to games and graphics. In the longer term, multi-core CPUs will be further complemented by general-purpose field programmable gate arrays (FPGAs), which may be configured for a variety of different tasks. Programs in the Hume language are made up of concurrent boxes linked by wires. Thus, we have explored how to implement Hume programs on heterogeneous platforms by realising boxes on the different processing elements.
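The box-and-wire model can be illustrated outside Hume itself. The following is a minimal Python analogue (not Hume, and not the project's implementation): each box is a thread that repeatedly consumes one input, applies its transition function and emits the result; each wire is a FIFO queue; a None sentinel shuts the pipeline down. All names are illustrative.

```python
import queue
import threading

def box(transition, inbox, outbox):
    """A Hume-style box: consume one input from the in-wire, apply the
    box's transition function, emit the result on the out-wire."""
    while True:
        item = inbox.get()
        if item is None:          # sentinel: propagate shutdown downstream
            outbox.put(None)
            break
        outbox.put(transition(item))

# Two boxes linked by wires: input -> double -> increment -> output
wire_a, wire_b, wire_c = queue.Queue(), queue.Queue(), queue.Queue()
threads = [
    threading.Thread(target=box, args=(lambda x: 2 * x, wire_a, wire_b)),
    threading.Thread(target=box, args=(lambda x: x + 1, wire_b, wire_c)),
]
for t in threads:
    t.start()

for v in [1, 2, 3]:
    wire_a.put(v)
wire_a.put(None)

results = []
while (r := wire_c.get()) is not None:
    results.append(r)
for t in threads:
    t.join()
print(results)   # [3, 5, 7]
```

Because each box only communicates over its wires, the same box can in principle be realised on a CPU thread, a SIMD unit or a soft core without changing the program's structure.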

In this work, we first developed library functions for using SIMD elements in vector processing. Second, we explored the implementation of Hume on FPGAs using multiple MicroBlaze soft-core processors through the standard Hume tool chain. This was based on an intermediate abstract machine which enables the accurate resource analysis conducted by our co-workers at St Andrews. Thus, we implemented boxes both as multiple abstract machines and as soft-core native code. Third, we developed a new lightweight Hume compiler which generates C directly, and implemented boxes on multi-cores using OpenMP.

Reconfiguring Hume programs to make optimal use of the processing elements in heterogeneous platforms requires the ability to reason about how different patterns of box coordination affect overall resource use. To support this, we elaborated a novel methodology for the systematic transformation of Hume programs in our box calculus, which changes program properties in a principled and predictable manner. Using program refinement from recursive definitions, we established strong formal properties of multi-box configurations equivalent to both the task farm and divide-and-conquer patterns of parallel programming. We have also evaluated these configurations and shown that they deliver predictably improved performance.
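The key property of such a transformation is that it preserves observable behaviour while changing the resource profile. A hedged sketch in Python (the workload function is hypothetical, not from the project) shows the task-farm refinement: a single worker box is replaced by a farm of workers, and the results are unchanged because the farm preserves task ordering.

```python
from concurrent.futures import ThreadPoolExecutor

def worker(task):
    # stand-in for a box's transition function (hypothetical workload)
    return task * task

tasks = list(range(8))

# Single-box configuration: one worker processes every task in order.
serial = [worker(t) for t in tasks]

# Task-farm configuration: the tasks are distributed over four worker
# boxes; Executor.map preserves input order, so results are unchanged.
with ThreadPoolExecutor(max_workers=4) as farm:
    farmed = list(farm.map(worker, tasks))

assert farmed == serial   # the transformation preserves behaviour
print(farmed)             # [0, 1, 4, 9, 16, 25, 36, 49]
```

The box calculus plays the role of the assertion here: it proves, rather than tests, that the farmed configuration is equivalent to the original.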

(2) Adaptive, Parallel LiDAR processing
We developed efficient and adaptive techniques to apply reversible jump Markov chain Monte Carlo (RJMCMC) algorithms to the analysis of full waveform LiDAR signals. These techniques allow us to interpret multi-layered three-dimensional depth and reflectance data for a wide range of applications. In particular, the group has been looking at remotely sensed LiDAR data (air, space) to monitor forest dynamics as part of current concerns about climate change, and at images from LiDAR-equipped vehicles as part of autonomy and driver assistance for navigation and situation awareness. In general, our studies have shown that a combination of adaptive processing and parallel implementation can lead to substantial efficiency gains, while maintaining the much greater accuracy and precision of the RJMCMC method.
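The shape of the inference problem can be sketched as follows. A full waveform is modelled as a background level plus an unknown number of Gaussian returns (one per surface), and RJMCMC explores models of varying dimension through birth, death and perturbation moves. This Python sketch is heavily simplified and illustrative only: the acceptance ratio uses the likelihood alone, omitting the prior and Jacobian terms of the full RJMCMC ratio, and all parameter ranges are invented for the example.

```python
import math
import random

random.seed(1)
BINS = list(range(64))

def peak(a, mu, s, t):
    return a * math.exp(-0.5 * ((t - mu) / s) ** 2)

def model(params, t):
    # waveform = background + sum of Gaussian returns (one per surface)
    return 1.0 + sum(peak(a, mu, s, t) for (a, mu, s) in params)

# synthetic two-return waveform with additive noise (illustrative data)
true_params = [(8.0, 20.0, 2.0), (5.0, 40.0, 2.0)]
data = [model(true_params, t) + random.gauss(0, 0.3) for t in BINS]

def log_lik(params):
    # Gaussian noise model, sigma fixed at 0.3 for this sketch
    return -sum((d - model(params, t)) ** 2
                for d, t in zip(data, BINS)) / (2 * 0.3 ** 2)

def rjmcmc(iters=3000):
    params = []                    # start with zero returns
    ll = log_lik(params)
    for _ in range(iters):
        prop = list(params)
        move = random.random()
        if move < 0.3:             # birth: propose an extra return
            prop.append((random.uniform(0, 10),
                         random.uniform(0, 63),
                         random.uniform(1, 4)))
        elif move < 0.6 and prop:  # death: delete a return
            prop.pop(random.randrange(len(prop)))
        elif prop:                 # fixed-dimension perturbation
            i = random.randrange(len(prop))
            a, mu, s = prop[i]
            prop[i] = (a + random.gauss(0, 0.5), mu + random.gauss(0, 0.5), s)
        pll = log_lik(prop)
        # simplified Metropolis acceptance on the likelihood ratio only
        if math.log(random.random()) < pll - ll:
            params, ll = prop, pll
    return params

fit = rjmcmc()
print(len(fit))   # number of surface returns inferred
```

The dimension-changing moves are what distinguish reversible jump from ordinary MCMC: the chain itself infers how many surfaces generated the waveform.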

To expand on these issues, we applied convergence diagnostics to adaptively control the Markov chain mixing and length, reducing the number of iterations needed to achieve accurate inversion of the necessary signal parameters. Taking these ideas further, we then developed a state space decomposition (SSD)-RJMCMC approach. Rather than allowing a single chain or chains to explore the complete variation of model dimension, this splits the problem into subsets with constrained variation in model dimension (typically three values), which are much more efficient to explore. This idea led naturally to control parallelism, in which different processors explored the different subsets. This was subsequently combined with data parallelism on the raw LiDAR data to further improve load balancing. Taking into account both algorithmic changes and parallel processing, we were able to process LiDAR signals at speeds of up to 40 times the serial equivalent on a 32-processor network.
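The control-parallel structure of the SSD approach can be sketched as follows. A hypothetical per-window score stands in for the restricted chain's model evidence; the full dimension range is split into windows of three, one chain per window runs independently, and the best window wins. This is a toy illustration of the decomposition, not the project's code.

```python
from concurrent.futures import ThreadPoolExecutor

def score(k):
    # hypothetical model evidence per dimension, peaked at k = 2
    return -abs(k - 2)

def run_chain(dim_window):
    """Stand-in for an RJMCMC chain constrained to dimensions lo..hi:
    here it simply reports the best (score, k) found in its window."""
    lo, hi = dim_window
    return max((score(k), k) for k in range(lo, hi + 1))

# state space decomposition: the full dimension range 0..8 is split
# into windows of three, with one chain (processor) per window
windows = [(0, 2), (3, 5), (6, 8)]
with ThreadPoolExecutor(max_workers=len(windows)) as pool:
    best = max(pool.map(run_chain, windows))
print(best)   # (0, 2): dimension 2 wins
```

In the real system each chain is a full RJMCMC sampler over its constrained window, and the same pool of processors is further subdivided over blocks of the raw LiDAR data for load balancing.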
Exploitation Route Developing software with guaranteed space and time performance is important for safety-critical systems. The work presented here could provide a basis for future work in analysing and developing such software.

LiDAR is a key sensor for a number of applications, including autonomous vehicles and forest mapping, which are current areas of our own research and development. The work here showed how a combination of parallel processing and convergence analysis on a Markov chain could lead to substantial reductions in processing time. This can be used when deploying LiDAR in such scenarios to allow real-time operation.
Sectors Aerospace

Defence and Marine

Digital/Communication/Information Technologies (including Software)



Description Programmable embedded platforms for remote and compute intensive image processing applications
Amount £726,819 (GBP)
Funding ID EP/K009583/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 03/2013 
End 04/2017