Biologically-Inspired Massively Parallel Architectures - computing beyond a million processors
Lead Research Organisation:
University of Southampton
Department Name: Electronics and Computer Science
Abstract
Abstracts are not currently available in GtR for all funded research. This is normally because the abstract was not required at the time of proposal submission, but may be because it included sensitive information such as personal details.
Publications
Brown A. (2015) Reliable computation with unreliable computers, in IET Computers & Digital Techniques
Brown A.D. (2015) Event-driven computing
Dugan K. (2013) Interconnection system for the SpiNNaker biologically inspired multi-computer, in IET Computers & Digital Techniques
Dugan K.J. (2015) Reliable computation with unreliable computers
Fonseca Guerra G.A. (2017) Using Stochastic Spiking Neural Networks on SpiNNaker to Solve Constraint Satisfaction Problems, in Frontiers in Neuroscience
Furber S. (2013) Overview of the SpiNNaker System Architecture, in IEEE Transactions on Computers
Hopkins M. (2015) Accuracy and Efficiency in Fixed-Point Neural ODE Solvers, in Neural Computation
Knight J.C. (2016) Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware, in Frontiers in Neuroanatomy
Knight J.C. (2016) Synapse-Centric Mapping of Cortical Models to the SpiNNaker Neuromorphic Architecture, in Frontiers in Neuroscience
Painkras E. (2013) SpiNNaker: A 1-W 18-Core System-on-Chip for Massively-Parallel Neural Network Simulation, in IEEE Journal of Solid-State Circuits
Rudolf R. (2013) Fully differential electro-mechanical phase locked loop sensor circuit, in Sensors and Actuators A: Physical
Description | SpiNNaker (Spiking Neural Network Architecture) is a specialised computing engine, designed for the efficient real-time simulation of neural systems. It consists of a mesh of 240x240 nodes, each of which contains 18 ARM9 processors, giving a total of over a million cores. These communicate via a bespoke, high-speed, high-bandwidth network, specifically designed for efficient communication of neural spike data. The design intent of the machine is that it will ultimately support the simulation of up to a billion neurons in real time, allowing neural simulation experiments to be taken to hitherto unattainable levels of scale and complexity. The architecture achieves this remarkable performance by rendering irrelevant three of the axioms of computing hardware design: the communication fabric is non-deterministic (and non-transitive); there is no global core synchronisation; and the system state - held in memory distributed across the physical fabric - is not coherent. Further, time models itself, in that there is no notion of computed simulation time: wallclock time is simulation time. Data is processed through the simulation system at biologically realistic speeds and densities. Whilst these design decisions fly in the face of conventional computer architecture design, they bring the behaviour of the engine much closer to its intended simulation target - neural systems. It is an example of a class of computing engines called neuromorphic machines. General-purpose (low physical thread count) computers have been used for simulation applications since they were first invented. As with most technologies, the capabilities of the underlying machines have advanced more-or-less hand-in-hand with the expectations of users.
However, general-purpose computers are, by definition, designed in the absence of knowledge of their intended application (which makes them ill-suited for almost every specific task), and the plummeting cost of hardware has allowed the rise of bespoke engines, tailored to specific (types of) computing tasks. Neural simulation is an application where the computing resource necessary to undertake simulations of the scale needed to demonstrate emergent behaviour far outstrips the capabilities of commodity machines, and severely tests those of multi-million dollar supercomputers. Other areas in this class include large-scale particle/particle and particle/field problems (computational chemistry, cosmology, high-energy theoretical physics), weather modelling, and financial market stress testing. The activities of the grant focussed on the design and development of a machine specifically designed to simulate the behaviour of extremely large numbers of small data packets moving through an extremely large graph in real time: a mammalian nervous system. The exploitation of the machine and its capabilities now lies before us. |
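The event-driven model described above - computation triggered only by arriving spike packets, with idle neurons costing nothing - can be illustrated with a small discrete-event sketch. The neuron model, weights and delays below are illustrative assumptions, not SpiNNaker's API; and where the real machine lets wallclock time model itself, this software sketch necessarily uses an event queue.

```python
# Minimal discrete-event sketch of spiking simulation: work happens only
# when a spike packet arrives at a neuron, never on a global clock tick.
# All names and parameters are illustrative, not SpiNNaker's.
import heapq

class Neuron:
    """Leaky integrate-and-fire neuron, updated lazily on spike arrival."""
    def __init__(self, threshold=1.0, decay=0.9):
        self.threshold = threshold
        self.decay = decay            # leak factor per unit of elapsed time
        self.potential = 0.0
        self.last_update = 0.0

    def receive(self, t, weight):
        """Integrate an incoming spike at time t; return True if we fire."""
        self.potential *= self.decay ** (t - self.last_update)  # apply leak
        self.last_update = t
        self.potential += weight
        if self.potential >= self.threshold:
            self.potential = 0.0      # reset after firing
            return True
        return False

def simulate(synapses, stimuli, t_end=100.0):
    """synapses: {src: [(dst, weight, delay), ...]};
       stimuli: [(time, target, weight), ...] externally injected spikes."""
    neurons = {}
    events = list(stimuli)            # priority queue of in-flight spikes
    heapq.heapify(events)
    fired = []                        # (time, neuron) record of output spikes
    while events:
        t, n, w = heapq.heappop(events)
        if t > t_end:
            break
        if neurons.setdefault(n, Neuron()).receive(t, w):
            fired.append((t, n))
            for dst, weight, delay in synapses.get(n, []):
                heapq.heappush(events, (t + delay, dst, weight))
    return fired
```

Driving a three-neuron chain with a single input spike shows activity propagating one synaptic delay at a time; neurons that receive no spikes consume no computation, which is the property the architecture exploits at the scale of a million cores.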
Exploitation Route | SpiNNaker and the BIMPA project are but one step in a large strategic research programme, held jointly by Manchester, Southampton, Cambridge and Sheffield. This succeeded "A scalable chip multiprocessor for large-scale neural simulation" (Manchester/Southampton) and "Efficient VLSI architectures for inexact associative memories" (Manchester). The headline goal of all these activities has been large-scale neural simulation, brokered by a highly specialised event-based parallel architecture (SpiNNaker - Spiking Neural Network Architecture). Two fundamental research questions were addressed: • How can massively parallel computing resources accelerate our understanding of brain function? • How can our growing understanding of brain function point the way to more efficient parallel, fault-tolerant computation? Moving onwards: during this work, it became apparent that other, equally rewarding problem domains are amenable to attack by the methodology embodied in the SpiNNaker architecture. The inevitable conclusion is that SpiNNaker is simply the first instance (an existence proof) of a new form of parallel computing known as POETS (Partially Ordered Event Triggered Systems), one application of which is neural simulation; the overarching goal of the follow-on proposal is to develop this new form of computing capability, diversifying into new problem domains. This work is inspired by (and a deliverable from) the latter fundamental question above. |
Sectors | Aerospace, Defence and Marine; Chemicals; Digital/Communication/Information Technologies (including Software); Electronics; Energy; Financial Services and Management Consultancy; Manufacturing, including Industrial Biotechnology; Pharmaceuticals and Medical Biotechnology |
URL | http://apt.cs.manchester.ac.uk/projects/SpiNNaker/ |
Description | BIMPA is part of an on-going large strategic research programme that began over ten years ago. The original concept was to build a machine capable of simulating biologically realistic neural systems of hitherto unattainable size and complexity. During the development work, it became clear that the potential of the machine was far wider than we had previously envisaged. Work is continuing at the University of Manchester to exploit the neurocomputing perspectives (the project has been absorbed into the Human Brain Project, until recently run out of EPFL in Switzerland). Southampton has taken the lead in exploiting the ideas of event-based computing - which underpin the SpiNNaker computing model - and applying them to a much wider and more diverse portfolio of problems. |
First Year Of Impact | 2016 |
Sector | Digital/Communication/Information Technologies (including Software), Electronics, Manufacturing (including Industrial Biotechnology), Pharmaceuticals and Medical Biotechnology |
Description | Human Brain Project |
Amount | SFr. 12,000 (CHF) |
Organisation | Swiss Federal Institute of Technology in Lausanne (EPFL) |
Sector | Public |
Country | Switzerland |
Start | 09/2014 |
End | 02/2015 |
Description | Programme Grant |
Amount | £4,981,302 (GBP) |
Funding ID | EP/N031768/1 |
Organisation | Engineering and Physical Sciences Research Council (EPSRC) |
Sector | Public |
Country | United Kingdom |
Start | 04/2016 |
End | 11/2021 |
Description | A R M Ltd |
Organisation | Arm Limited |
Country | United Kingdom |
Sector | Private |
Start Year | 2006 |
Description | BIMPA partners |
Organisation | University of Manchester |
Department | Materials Performance Centre |
Country | United Kingdom |
Sector | Academic/University |
PI Contribution | Massively parallel computer architecture for neuronal systems |
Collaborator Contribution | Extensive collaboration on computer architectures and algorithms to describe massively parallel neuronal systems. |
Impact | Research papers. |
Start Year | 2009 |
Description | BIMPA partners |
Organisation | University of Sheffield |
Country | United Kingdom |
Sector | Academic/University |
PI Contribution | Massively parallel computer architecture for neuronal systems |
Collaborator Contribution | Extensive collaboration on computer architectures and algorithms to describe massively parallel neuronal systems. |
Impact | Research papers. |
Start Year | 2009 |
Description | BIMPA partners |
Organisation | University of Southampton |
Department | School of Electronics and Computer Science Southampton |
Country | United Kingdom |
Sector | Academic/University |
PI Contribution | Massively parallel computer architecture for neuronal systems |
Collaborator Contribution | Extensive collaboration on computer architectures and algorithms to describe massively parallel neuronal systems. |
Impact | Research papers. |
Start Year | 2009 |
Description | Silistix Ltd |
Organisation | Silistix Ltd |
Country | United Kingdom |
Sector | Private |
Start Year | 2006 |
Title | Partially Ordered Event Triggered Systems (POETS)
Description | See http://apt.cs.manchester.ac.uk/projects/SpiNNaker
Type Of Technology | Software |
Year Produced | 2010 |
Open Source License? | Yes |
Impact | See http://apt.cs.manchester.ac.uk/projects/SpiNNaker
Description | Visiting Lecture Series at the Norwegian University of Science and Technology, Trondheim
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Undergraduate students |
Results and Impact | Professor Andrew Brown delivered a series of four 90-minute lectures to undergraduate students at the Norwegian University of Science and Technology, Trondheim, as a visiting international speaker. Seminar I covered some history of computing before introducing the concept of parallel computing, focussing on event-based computation to solve real-world engineering computing problems. Seminar II described event-based simulation of neural circuits using the SpiNNaker machine, the machine's architecture, and how it produces biologically realistic behaviour. Seminar III described the POETS engine, another event-based machine that can be exploited for a much wider application portfolio than SpiNNaker, covering in some detail how the POETS engine uses event-based techniques to solve the real problem of space-filling neural synthesis. Seminar IV discussed a number of topics allied to event-based computation, including solving heat equations and investigating reliability, and presented an overview of some areas that can benefit from event-based computing, such as computational chemistry, weather modelling, financial market modelling and genome searching. |
Year(s) Of Engagement Activity | 2018 |
Description | Visiting Lecture Series at the University of Kaiserslautern
Form Of Engagement Activity | A talk or presentation |
Part Of Official Scheme? | No |
Geographic Reach | International |
Primary Audience | Postgraduate students |
Results and Impact | Professor Andrew Brown delivered a series of four 90-minute lectures to postgraduate students at the University of Kaiserslautern as a visiting international speaker. Seminar I covered some history of computing before introducing the concept of parallel computing, focussing on event-based computation to solve real-world engineering computing problems. Seminar II described event-based simulation of neural circuits using the SpiNNaker machine, the machine's architecture, and how it produces biologically realistic behaviour. Seminar III described the POETS engine, another event-based machine that can be exploited for a much wider application portfolio than SpiNNaker, covering in some detail how the POETS engine uses event-based techniques to solve the real problem of space-filling neural synthesis. Seminar IV discussed a number of topics allied to event-based computation, including solving heat equations and investigating reliability, and presented an overview of some areas that can benefit from event-based computing, such as computational chemistry, weather modelling, financial market modelling and genome searching. |
Year(s) Of Engagement Activity | 2018,2019 |
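One of the Seminar IV topics above, solving heat equations with event-based computing, can be sketched in a few lines: each cell of a 1-D rod acts as an independent device that recomputes its temperature only when a neighbour announces a change, and the computation quiesces when no message would alter any cell significantly. This is a deliberately simplified illustration; the cell model and tolerance are assumptions, not the POETS implementation.

```python
# Event-based steady-state heat equation on a 1-D rod: cells exchange
# "temperature changed" messages instead of iterating on a global clock.
# Illustrative sketch only; names and tolerance are assumptions.
from collections import deque

def heat_rod(n=11, left=100.0, right=0.0, tol=1e-6):
    temp = [0.0] * n
    temp[0], temp[-1] = left, right      # fixed boundary temperatures
    events = deque([0, n - 1])           # boundary cells announce first
    while events:                        # runs until the system quiesces
        src = events.popleft()
        for nb in (src - 1, src + 1):    # notify both neighbours
            if 0 < nb < n - 1:           # interior cells only
                new = 0.5 * (temp[nb - 1] + temp[nb + 1])
                if abs(new - temp[nb]) > tol:   # significant change?
                    temp[nb] = new
                    events.append(nb)    # propagate the update
    return temp
```

With the boundaries held at 100 and 0, the rod relaxes to the expected linear profile; the event count, rather than a fixed iteration count, determines how much work is done, which is the point of the event-based formulation.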