Biologically-Inspired Massively Parallel Architectures - computing beyond a million processors

Lead Research Organisation: University of Southampton
Department Name: Electronics and Computer Science

Abstract

The human brain remains one of the great frontiers of science: how does this organ, upon which we all depend so critically, actually do its job? A great deal is known about the underlying technology - the neuron - and we can observe large-scale brain activity through techniques such as magnetic resonance imaging, but this knowledge barely begins to tell us how the brain works. Something is happening at the intermediate levels of processing that we have yet to understand, and the essence of the brain's information-processing function probably lies in these intermediate levels. To get at these middle layers we must build models of very large systems of spiking neurons, with structures inspired by the increasingly detailed findings of neuroscience, in order to investigate the emergent behaviours, adaptability and fault-tolerance of those systems.

Our goal in this project is to deliver machines of unprecedented cost-effectiveness for this task, and to make them readily accessible to as wide a user base as possible. We will also explore the applicability of the unique architecture that has emerged from the pursuit of this goal to other important application domains.

Publications

 
Description SpiNNaker (Spiking Neural Network Architecture) is a specialised computing engine, designed for the efficient real-time simulation of neural systems. It consists of a mesh of 240x240 nodes, each of which contains 18 ARM9 processors, giving a total of over a million cores. These communicate via a bespoke, high-speed, high-bandwidth network, specifically designed for the efficient communication of neural spike data. The design intent of the machine is that it will ultimately support the simulation of up to a billion neurons in real time, allowing neural simulation experiments to be taken to hitherto unattainable levels of scale and complexity. The architecture achieves this remarkable performance by rendering irrelevant three of the axioms of computing hardware design: the communication fabric is non-deterministic (and non-transitive); there is no global core synchronisation; and the system state - held in memory distributed across the physical fabric - is not coherent. Further, time models itself, in that there is no notion of computed simulation time: wallclock time is simulation time. Data is processed through the simulation system at biologically realistic speeds and densities. Whilst these design decisions fly in the face of conventional computer architecture design, they bring the behaviour of the engine much closer to its intended simulation target - neural systems. It is an example of a class of computing engines called neuromorphic machines.
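The event-driven style described above can be illustrated with a minimal sketch: neurons exchange small spike "packets", and a neuron's state is updated only when an event arrives, rather than on a global clock tick. All names, parameters and the simple leaky-free integrate-and-fire model below are illustrative assumptions, not SpiNNaker's actual API or neuron model.

```python
import heapq

# Minimal event-driven sketch of spiking-neuron simulation in the SpiNNaker
# style: spikes are small packets delivered through the network, and state
# changes only when an event arrives. Illustrative only - not SpiNNaker code.

THRESHOLD = 1.0   # membrane potential at which a neuron fires (assumed value)
WEIGHT = 0.6      # synaptic weight applied per incoming spike (assumed value)
DELAY = 1         # spike transmission delay, in timesteps (assumed value)

def simulate(fan_out, initial_spikes, t_end=10):
    """fan_out[i] lists the neurons that neuron i projects to;
    initial_spikes is a list of (time, source neuron) events."""
    potential = [0.0] * len(fan_out)
    events = list(initial_spikes)          # event queue: (time, spiking neuron)
    heapq.heapify(events)
    fired = []
    while events:
        t, src = heapq.heappop(events)
        if t > t_end:
            break
        fired.append((t, src))
        for dst in fan_out[src]:           # deliver the spike packet
            potential[dst] += WEIGHT
            if potential[dst] >= THRESHOLD:
                potential[dst] = 0.0       # reset membrane after firing
                heapq.heappush(events, (t + DELAY, dst))
    return fired
```

For example, two neurons spiking into a common target at t=0 push its potential over threshold, so it fires one delay later; a single spike would not.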
General-purpose (low physical thread count) computers have been used for simulation applications since they were first invented. As with most technologies, the capabilities of the underlying machines have advanced more-or-less hand-in-hand with the expectations of users. However, general-purpose computers are, by definition, designed in the absence of knowledge of their intended application (which makes them ill-suited for almost every specific task), and the plummeting cost of hardware has allowed the rise of bespoke engines, tailored to specific (types of) computing tasks. Neural simulation is an application where the computing resource necessary to undertake simulations of the scale needed to demonstrate emergent behaviour far outstrips the capabilities of commodity machines, and severely tests those of multi-million dollar supercomputers. Other areas in this class include large-scale particle/particle and particle/field problems (computational chemistry, cosmology, high-energy theoretical physics), weather modelling, and financial market stress testing. The activities of the grant focussed on the design and development of a machine specifically designed to simulate the behaviour of extremely large numbers of small data packets moving through an extremely large graph in real time: a mammalian nervous system. The exploitation of the machine and its capabilities now lies before us.
Exploitation Route SpiNNaker and the BIMPA project are but one step in a large strategic research programme, held jointly by Manchester, Southampton, Cambridge and Sheffield. This succeeded "A scalable chip multiprocessor for large-scale neural simulation" (Manchester/Southampton), and "Efficient VLSI architectures for inexact associative memories" (Manchester). The headline goal for all these activities has been large-scale neural simulation, brokered by a highly specialised event-based parallel architecture (SpiNNaker - Spiking Neural Network Architecture).
Two fundamental research questions were addressed:
• How can massively parallel computing resources accelerate our understanding of brain function?
• How can our growing understanding of brain function point the way to more efficient parallel, fault-tolerant computation?
Moving onwards
During this work, it became apparent that other, equally rewarding problem domains are amenable to attack by the methodology embodied in the SpiNNaker architecture. The inevitable conclusion is that SpiNNaker is simply the first (existence proof) of a new form of parallel computing known as POETS (Partially Ordered Event Triggered Systems), one application of which is neural simulation; the overarching goal of this proposal is to develop this new form of computing capability, diversifying into new problem domains. This work is inspired by (and a deliverable from) the latter fundamental question above.
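A defining property of the POETS style described above is that events need only be partially ordered: each vertex of a large graph holds local state and a handler that fires when a message arrives, and the computation converges under any delivery order, with no global clock. A minimal sketch of this idea, using asynchronous single-source shortest paths as the example application - the vertex handler, queue discipline and names here are illustrative assumptions, not a POETS API:

```python
from collections import deque

# Sketch of an event-triggered computation in the POETS style: no global
# schedule, each vertex reacts locally to incoming events, and the result
# is independent of delivery order. Illustrative only - not POETS code.

def sssp(edges, source):
    """edges[v] = list of (neighbour, weight); returns shortest distances."""
    dist = {v: float('inf') for v in edges}
    dist[source] = 0
    pending = deque([(source, 0)])        # events: (vertex, candidate distance)
    while pending:
        v, d = pending.popleft()          # any delivery order would do
        if d > dist[v]:
            continue                      # stale event: a better value arrived
        for u, w in edges[v]:
            if d + w < dist[u]:           # local handler: relax, then re-emit
                dist[u] = d + w
                pending.append((u, d + w))
    return dist
```

Because a stale event is simply discarded when a better local value already exists, correctness does not depend on the (non-deterministic) order in which events are delivered - only on a partial order.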
Sectors Aerospace, Defence and Marine; Chemicals; Digital/Communication/Information Technologies (including Software); Electronics; Energy; Financial Services and Management Consultancy; Manufacturing, including Industrial Biotechnology; Pharmaceuticals and Medical Biotechnology

URL http://apt.cs.manchester.ac.uk/projects/SpiNNaker/
 
Description BIMPA is part of an on-going large strategic research programme that began over ten years ago. The original concept was to build a machine capable of simulating biologically realistic neural systems of hitherto unattainable size and complexity. During the development work, it became clear that the potential of the machine was far wider than we had previously envisaged. Work is continuing at the University of Manchester to exploit the neurocomputing perspectives (the project has been absorbed into the Human Brain Project, until recently run out of EPFL in Switzerland). Southampton has taken the lead in exploiting the ideas of event-based computing - which underpin the SpiNNaker computing model - and applying them to a much wider and more diverse portfolio of problems.
First Year Of Impact 2016
Sector Digital/Communication/Information Technologies (including Software); Electronics; Manufacturing, including Industrial Biotechnology; Pharmaceuticals and Medical Biotechnology
 
Description Human Brain Project
Amount SFr. 12,000 (CHF)
Organisation Swiss Federal Institute of Technology in Lausanne (EPFL) 
Sector Public
Country Switzerland
Start 10/2014 
End 02/2015
 
Description Programme Grant
Amount £4,981,302 (GBP)
Funding ID EP/N031768/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 05/2016 
End 11/2021
 
Description ARM Ltd 
Organisation Arm Limited
Country United Kingdom 
Sector Private 
Start Year 2006
 
Description BIMPA partners 
Organisation University of Manchester
Department Materials Performance Centre
Country United Kingdom 
Sector Academic/University 
PI Contribution Massively parallel computer architecture for neuronal systems
Collaborator Contribution Extensive collaboration on computer architectures and algorithms to describe massively parallel neuronal systems.
Impact Research papers.
Start Year 2009
 
Description BIMPA partners 
Organisation University of Sheffield
Country United Kingdom 
Sector Academic/University 
PI Contribution Massively parallel computer architecture for neuronal systems
Collaborator Contribution Extensive collaboration on computer architectures and algorithms to describe massively parallel neuronal systems.
Impact Research papers.
Start Year 2009
 
Description BIMPA partners 
Organisation University of Southampton
Department School of Electronics and Computer Science Southampton
Country United Kingdom 
Sector Academic/University 
PI Contribution Massively parallel computer architecture for neuronal systems
Collaborator Contribution Extensive collaboration on computer architectures and algorithms to describe massively parallel neuronal systems.
Impact Research papers.
Start Year 2009
 
Description Silistix Ltd 
Organisation Silistix Ltd
Country United Kingdom 
Sector Private 
Start Year 2006
 
Title Partially Ordered Event Triggered Systems (POETS) 
Description see apt.cs.manchester.ac.uk/projects/SpiNNaker 
Type Of Technology Software 
Year Produced 2010 
Open Source License? Yes  
Impact see apt.cs.manchester.ac.uk/projects/SpiNNaker 
 
Description Visiting Lecture Series at the Norwegian University of Science and Technology, Trondheim 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Undergraduate students
Results and Impact Professor Andrew Brown delivered a series of four 90-minute lectures to students at the Norwegian University of Science and Technology in Trondheim as a visiting international speaker. Seminar I covered some history of computing before introducing the concept of parallel computing, focussing on event-based computation to solve real-world engineering computing problems. Seminar II described event-based simulation of neural circuits using the SpiNNaker machine, the SpiNNaker machine's architecture and how it produces biologically realistic behaviour. Seminar III described the POETS engine, another event-based machine that can be exploited for a much wider application portfolio than SpiNNaker, covering in some detail how the POETS engine uses event-based techniques to solve the real problem of space-filling neural synthesis. Seminar IV discussed a number of topics allied to event-based computation, including solving heat equations, investigating reliability and presenting an overview of some areas that can benefit from event-based computing such as computational chemistry, weather modelling, financial market modelling and genome searching.
Year(s) Of Engagement Activity 2018
 
Description Visiting Lecture Series at the University of Kaiserslautern 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Professor Andrew Brown delivered a series of four 90-minute lectures to postgraduate students at the University of Kaiserslautern as a visiting international speaker. Seminar I covered some history of computing before introducing the concept of parallel computing, focussing on event-based computation to solve real-world engineering computing problems. Seminar II described event-based simulation of neural circuits using the SpiNNaker machine, the SpiNNaker machine's architecture and how it produces biologically realistic behaviour. Seminar III described the POETS engine, another event-based machine that can be exploited for a much wider application portfolio than SpiNNaker, covering in some detail how the POETS engine uses event-based techniques to solve the real problem of space-filling neural synthesis. Seminar IV discussed a number of topics allied to event-based computation, including solving heat equations, investigating reliability and presenting an overview of some areas that can benefit from event-based computing such as computational chemistry, weather modelling, financial market modelling and genome searching.
Year(s) Of Engagement Activity 2018,2019