DART: Design Accelerators by Regulating Transformations
Lead Research Organisation:
Imperial College London
Department Name: Computing
Abstract
The DART project aims to pioneer a ground-breaking capability to enhance the performance and energy efficiency of reconfigurable hardware accelerators for next-generation computing systems. This capability will be achieved through a novel foundation for a transformation engine, based on heterogeneous graphs, for design optimisation and diagnosis. While hardware designers are familiar with transformations based on Boolean algebra, the proposed research promotes a design-by-transformation style by providing, for the first time, tools that facilitate experimentation with design transformations and their regulation by meta-programming. These tools will cover design space exploration based on machine learning, and end-to-end tool chains mapping designs captured in multiple source languages to heterogeneous reconfigurable devices targeting cloud computing, the Internet of Things and supercomputing. The proposed approach will be evaluated through a variety of benchmarks involving hardware acceleration, and through codifying strategies for automating the search for neural architectures for hardware implementation with both high accuracy and high efficiency.
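To make the design-by-transformation idea concrete, the following is a minimal illustrative sketch (not project code): a hardware design is held as a typed ("heterogeneous") graph of operator nodes, and a small engine applies a rewrite rule until no more rewrites fire. The node kinds, the rule name, and the multiply-add fusion example are assumptions chosen for exposition, not DART's actual representation.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    kind: str                                    # node type, e.g. "mul", "add", "fma"
    inputs: list = field(default_factory=list)   # ids of upstream nodes

@dataclass
class Design:
    nodes: dict = field(default_factory=dict)    # id -> Node (the design graph)

def fuse_mul_add(design):
    """Rewrite add(mul(a, b), c) into a single fused fma(a, b, c) node."""
    for nid, node in list(design.nodes.items()):
        if node.kind != "add":
            continue
        for src in node.inputs:
            mul = design.nodes.get(src)
            if mul is not None and mul.kind == "mul":
                other = [i for i in node.inputs if i != src]
                design.nodes[nid] = Node("fma", mul.inputs + other)
                del design.nodes[src]
                return True                      # one rewrite per call; caller iterates
    return False

def apply_until_fixpoint(design, rule):
    """A trivial 'regulation' policy: apply one rule until it no longer fires."""
    while rule(design):
        pass
    return design

# A toy design computing a*b + c, then transformed in place.
d = Design({
    "a": Node("input"), "b": Node("input"), "c": Node("input"),
    "m": Node("mul", ["a", "b"]),
    "s": Node("add", ["m", "c"]),
})
apply_until_fixpoint(d, fuse_mul_add)
print(sorted(n.kind for n in d.nodes.values()))
```

In this sketch, "regulating" a transformation amounts to choosing the policy in `apply_until_fixpoint`; a meta-programming layer of the kind the abstract describes would let designers compose and schedule many such rules rather than hard-coding one loop.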
Organisations
- Imperial College London (Lead Research Organisation)
- Corerain Technologies (Project Partner)
- Tianjin University (Project Partner)
- Deloitte (United Kingdom) (Project Partner)
- Intel Corporation (UK) Ltd (Project Partner)
- Xilinx (United States) (Project Partner)
- Microsoft (United States) (Project Partner)
- Maxeler Technologies (United Kingdom) (Project Partner)
- University of British Columbia (Project Partner)
- Stanford University (Project Partner)
- RIKEN (Project Partner)
- Cornell University (Project Partner)
- Tesco (United Kingdom) (Project Partner)
People
- Wayne Luk (Principal Investigator)
Publications
- Vandebon V. (2022) Meta-Programming Design-Flow Patterns for Automating Reusable Optimisations
- Todman T. (2022) Custom Instructions for Networked Processor Templates, in IEEE Transactions on Circuits and Systems II: Express Briefs
- Sahebi A. (2023) Distributed large-scale graph processing on FPGAs, in Journal of Big Data
- Que Z. (2024) LL-GNN: Low Latency Graph Neural Networks on FPGAs for High Energy Physics, in ACM Transactions on Embedded Computing Systems