Unlocking spiking neural networks for machine learning research

Lead Research Organisation: University of Sussex
Department Name: Sch of Engineering and Informatics

Abstract

In the last decade there has been an explosion in artificial intelligence research in which artificial neural networks, emulating biological brains, are used to solve problems ranging from obstacle avoidance in self-driving cars to playing complex strategy games. This has been driven by mathematical advances and powerful new computer hardware which have allowed large 'deep networks' to be trained on huge amounts of data. For example, after training a deep network on 'ImageNet' - which consists of over 14 million manually annotated images - it can accurately identify the content of images. However, while these deep networks have been shown to learn patterns of connections similar to those found in the parts of our brains responsible for early visual processing, they differ from real brains in several important ways, especially in how individual neurons communicate. Neurons in real brains exchange information using relatively infrequent electrical pulses known as 'spikes', whereas, in typical artificial neural network models, the spikes are abstracted away and values representing the 'rates' at which spikes would be emitted are continuously exchanged instead. Yet neuroscientists believe that a large amount of information is transmitted in the precise times at which spikes are produced. Artificial 'spiking neural networks' (SNNs) can harness these properties, making them useful in applications which are challenging for current models, such as real-world robotics and processing data with a temporal component, such as video. However, SNNs can only be used effectively if suitable computer hardware and software are available, and while software for simulating SNNs exists, it has mostly been designed for studying real brains rather than for building AI systems. In this project, I am going to build a new software package which bridges this gap. It will use abstractions and processes familiar to machine learning researchers, combined with techniques developed for brain simulation, allowing exciting new SNN models to be used by AI researchers. We will also explore how spiking models can be used with a new type of sensor which directly outputs spikes rather than a stream of images.
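To make the spike-versus-rate distinction concrete, the sketch below is a minimal, illustrative NumPy model of a leaky integrate-and-fire neuron (generic textbook code, not code from this project). The neuron integrates its input and emits a spike whenever a threshold is crossed, so the same average input can yield different precise spike times - the timing information that rate-based models discard.

```python
import numpy as np

def simulate_lif(input_current, dt=1.0, tau=20.0, v_thresh=1.0, v_reset=0.0):
    """Return the timesteps at which a leaky integrate-and-fire neuron spikes."""
    v = v_reset
    spike_times = []
    for t, i_in in enumerate(input_current):
        v += (dt / tau) * (i_in - v)   # leaky integration of the input
        if v >= v_thresh:              # threshold crossing emits a spike...
            spike_times.append(t)
            v = v_reset                # ...and the membrane potential resets
    return spike_times

rng = np.random.default_rng(42)
steady = np.full(200, 1.5)                       # constant drive
noisy = steady + 0.3 * rng.standard_normal(200)  # same mean drive, with noise
print(simulate_lif(steady))   # regularly spaced spike times
print(simulate_lif(noisy))    # similar spike count, different precise timings
```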

In the first phase of the project, I will focus on using Graphics Processing Units (GPUs) to accelerate spiking neural networks. These devices were originally developed to speed up 3D games but have evolved into general-purpose accelerators, widely used in scientific and AI applications. However, while GPUs have become incredibly powerful and are well-suited to processing large amounts of data simultaneously, they are less suited to 'live' applications, such as when video must be processed as fast as possible. In these situations, Field Programmable Gate Arrays (FPGAs) - devices where the hardware itself can be re-programmed - can be significantly faster and are already being used behind the scenes in data centres. In this project, by incorporating support for FPGAs into our new software, we will make these devices more accessible to AI researchers and unlock new possibilities for using biologically-inspired spiking neural networks to learn in real time.
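The computational case for spike-based processing can be sketched in a few lines of plain NumPy (purely illustrative; not the project's GPU or FPGA code). A rate-based layer performs a full dense matrix-vector product on every step, whereas a spiking layer only needs to propagate the rows belonging to neurons that actually fired, so the work scales with activity rather than with network size:

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 1000, 1000
weights = rng.standard_normal((n_pre, n_post))

# Rate-based update: every connection contributes on every timestep.
rates = rng.random(n_pre)
post_input_rate = rates @ weights               # full dense product

# Spike-based update: only neurons that fired this timestep contribute,
# so the cost scales with the (typically sparse) activity.
spiked = rng.random(n_pre) < 0.02               # ~2% of neurons spike
post_input_spike = weights[spiked].sum(axis=0)  # sum only the active rows
```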

As well as working on these new research strands, I will dedicate time during my fellowship to advocating for research software engineering as a valuable component of academic institutions, via both knowledge exchange and research funding. In the shorter term, I will work to develop a community of researchers involved in writing software at Sussex by organising an informal monthly 'surgery', delivering specialised training on programming Graphics Processing Units, and providing more fundamental computational and programming training for new PhD students. Finally, I will develop internship and career development opportunities for undergraduate students to gain experience in research software engineering.

Publications

Turner J (2022) mlGeNN: accelerating SNN inference using GPU-enabled neural networks in Neuromorphic Computing and Engineering

 
Description Efficient spike-based machine learning on existing HPC hardware
Amount £17,215 (GBP)
Funding ID CPQ-2417168 
Organisation Oracle Corporation 
Sector Private
Country United States
Start 04/2022 
End 04/2023
 
Title Dataset for paper "mlGeNN: Accelerating SNN inference using GPU-Enabled Neural Networks" 
Description Dataset for the paper accepted in IOP Neuromorphic Computing and Engineering, March 2022. The dataset contains trained weights from TensorFlow 2.4.0 for the following models:
- vgg16_imagenet_tf_weights.h5 - VGG-16 model trained on the ImageNet ILSVRC dataset
- vgg16_tf_weights.h5 - VGG-16 model trained on the CIFAR-10 dataset
- resnet20_cifar10_tf_weights.h5 - ResNet-20 model trained on the CIFAR-10 dataset
- resnet34_imagenet_tf_weights.h5 - ResNet-34 model trained on the ImageNet ILSVRC dataset
Abstract: "In this paper we present mlGeNN - a Python library for the conversion of artificial neural networks (ANNs) specified in Keras to spiking neural networks (SNNs). SNNs are simulated using GeNN with extensions to efficiently support convolutional connectivity and batching. We evaluate converted SNNs on CIFAR-10 and ImageNet classification tasks and compare the performance to both the original ANNs and other SNN simulators. We find that performing inference using a VGG-16 model, trained on the CIFAR-10 dataset, is 2.5x faster than BindsNet and, when using a ResNet-20 model trained on CIFAR-10 with FewSpike ANN to SNN conversion, mlGeNN is only a little over 2x slower than TensorFlow."
Funding: Brains on Board grant number EP/P006094/1; ActiveAI grant number EP/S030964/1; Unlocking spiking neural networks for machine learning research grant number EP/V052241/1; European Union's Horizon 2020 research and innovation programme under Grant Agreement 945539. (An illustrative sketch of the rate-coded ANN-to-SNN conversion idea follows this entry.)
Type Of Material Database/Collection of data 
Year Produced 2022 
Provided To Others? Yes  
URL https://sussex.figshare.com/articles/dataset/Dataset_for_paper_mlGeNN_Accelerating_SNN_inference_usi...
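The paper this dataset supports converts Keras-trained ANNs into SNNs. As a rough illustration of the underlying rate-coding idea (plain NumPy with invented names, not mlGeNN's API): trained weights are reused unchanged, ReLU units are replaced by integrate-and-fire neurons, and each unit's firing rate over the simulation approximates the corresponding ReLU activation.

```python
import numpy as np

def if_layer_rates(weights, in_rates, t_sim=100, v_thresh=1.0):
    """Approximate a ReLU layer with integrate-and-fire neurons: trained
    weights are reused unchanged and each unit's firing rate over t_sim
    timesteps approximates relu(weights @ in_rates)."""
    v = np.zeros(weights.shape[0])
    spike_counts = np.zeros(weights.shape[0])
    for _ in range(t_sim):
        v += weights @ in_rates          # integrate weighted input rates
        spiking = v >= v_thresh
        spike_counts += spiking
        v[spiking] -= v_thresh           # reset by subtraction
    return spike_counts / t_sim          # firing rate ~ ReLU activation

w = np.array([[0.5, -0.2], [0.1, 0.3]])  # stand-in for trained weights
x = np.array([0.8, 0.4])
print(if_layer_rates(w, x))              # ~ [0.32, 0.20] = relu(w @ x)
```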
 
Description Collaboration with Professor Karin Nordstrom at Flinders Medical Centre 
Organisation Flinders Medical Centre
Country Australia 
Sector Hospitals 
PI Contribution Helped refine analysis methods for experimental data; we are now working on providing computational models to validate hypotheses about the neuroanatomy of hoverflies.
Collaborator Contribution Providing experimental data and expertise to help develop computational models
Impact None as yet
Start Year 2022
 
Description Structure to Function compute time agreement 
Organisation Graz University of Technology
Country Austria 
Sector Academic/University 
PI Contribution We have provided our GeNN software
Collaborator Contribution
* Julich Research Centre has provided compute time on their JUWELS and JUWELS Booster supercomputing systems, as well as expertise in analysing large spiking neural network models
* TU Graz has provided expertise in implementing bio-inspired learning rules and in working with large models of mouse cortex developed by the Allen Institute
Impact "Efficient GPU training of LSNNs using eProp": accepted conference paper at the NICE 2023 workshop (arXiv preprint 2212.01232)
Start Year 2022
 
Description Structure to Function compute time agreement 
Organisation Julich Research Centre
Country Germany 
Sector Academic/University 
PI Contribution We have provided our GeNN software
Collaborator Contribution
* Julich Research Centre has provided compute time on their JUWELS and JUWELS Booster supercomputing systems, as well as expertise in analysing large spiking neural network models
* TU Graz has provided expertise in implementing bio-inspired learning rules and in working with large models of mouse cortex developed by the Allen Institute
Impact "Efficient GPU training of LSNNs using eProp": accepted conference paper at the NICE 2023 workshop (arXiv preprint 2212.01232)
Start Year 2022
 
Title genn-team/genn: GeNN 4.8.0 
Description Release Notes for GeNN 4.8.0. This release adds a number of significant new features to GeNN, as well as a number of bug fixes identified since the 4.7.1 release.
User Side Changes:
- Custom updates extended to work on SynapseMatrixWeight::KERNEL weight update model variables (#524).
- Custom updates extended to perform reduction operations across neurons as well as batches (#539).
- PyGeNN can now automatically find Visual Studio build tools using functionality in setuptools.msvc.msvc14_get_vc_env (#471).
- GeNN now comes with a fully-functional Docker image and releases will be distributed via Docker Hub as well as existing channels. Special thanks to @Stevinson, @jamesturner246 and @bdevans for their help on this (see the README for more information) (#548 and #550).
Bug fixes:
- Fixed bug relating to merging of synapse groups which perform presynaptic "revInSyn" updates (#520).
- Added missing parameter to the PyGeNN pygenn.genn_model.create_custom_postsynaptic_class function so postsynaptic models with extra global parameters can be created (#522).
- Correctly substitute 0 for $(batch) when using the single-threaded CPU backend (#523).
- Fixed issues building PyGeNN with Visual Studio 2017 (#533).
- Fixed bug where a model might not be rebuilt if its sparse connectivity initialisation snippet was changed (#547).
- Fixed longstanding bug in the gen_input_structured tool - used by some userprojects - where data was written outside of array bounds (#551).
- Fixed issue with debug mode of genn-buildmodel.bat when used with the single-threaded CPU backend (#551).
- Fixed issue where, if custom update models were the only part of a model that required an RNG for initialisation, one might not be instantiated (#540).
(A conceptual sketch of the new reduction operations follows this entry.)
Type Of Technology Software 
Year Produced 2022 
Open Source License? Yes  
Impact As well as the ongoing impact of supporting the user community, specific advances were used in both the accepted conference paper at NICE 2023 and the preprint arXiv:2212.01232v1
URL https://zenodo.org/record/7267620
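One of the headline features above, custom updates that can reduce across neurons as well as across batches, can be pictured as follows. This is conceptual NumPy only, not GeNN's custom update API: a per-batch, per-neuron quantity such as a gradient is collapsed along either axis.

```python
import numpy as np

# A per-batch, per-neuron quantity, e.g. gradients from batched training.
batch_size, n_neurons = 32, 100
grad = np.random.default_rng(1).standard_normal((batch_size, n_neurons))

# Reduction across batches: one value per neuron, e.g. summing
# gradients before applying an optimiser step.
grad_per_neuron = grad.sum(axis=0)

# Reduction across neurons (the capability added here): one value per
# batch instance, e.g. finding each instance's maximum readout.
max_per_instance = grad.max(axis=1)
```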
 
Title genn-team/genn: GeNN v4.7.0 
Description Release Notes for GeNN v4.7.0. This release adds a number of significant new features to GeNN, as well as a number of bug fixes identified since the 4.6.0 release.
User Side Changes:
- While a wide range of convolution-type connectivity can be implemented using SynapseMatrixConnectivity::PROCEDURAL, the performance is often worse than sparse connectivity. SynapseMatrixConnectivity::TOEPLITZ provides a more efficient solution, with InitToeplitzConnectivitySnippet::Conv2D and InitToeplitzConnectivitySnippet::AvgPoolConv2D implementing some typical connectivity patterns (#484).
- Shared weight kernels previously had to be provided as extra global parameters via the InitVarSnippet::Kernel variable initialisation snippet. This meant kernels had to be manually allocated to the correct size and couldn't be initialised using standard functionality. SynapseMatrixWeight::KERNEL allows kernels to be treated as standard state variables (#478).
- Some presynaptic updates need to update the state of presynaptic neurons as well as postsynaptic ones. These updates can now be made using the $(addToPre,...) function from presynaptic update code, and the destination additional input variable can be specified using SynapseGroup::setPreTargetVar (#479).
- On Windows, all models in the same directory would build their generated code into DLLs with the same name, preventing the caching system introduced in v4.5.0 from working properly. CodeGenerator::PreferencesBase::includeModelNameInDLL includes the name of the model in the DLL filename, resolving this problem. This is now the default behaviour in PyGeNN but, when using GeNN from C++, the flag must be manually set and MSBuild projects updated to link to the correct DLL (#476).
- Neuron code can now sample the binomial distribution using $(gennrand_binomial), and this can be used to initialise variables with InitVarSnippet::Binomial (#498).
- In the latest version of Windows Subsystem for Linux, CUDA is supported but libcuda is mounted in a non-standard location. GeNN's CUDA backend now adds this location to the linker paths (#500).
Bug fixes:
- Fixed issues with some configurations of InitSparseConnectivitySnippet::Conv2D when stride > 1, which caused incorrect connectivity to be instantiated, as well as crashes when this snippet was used to generate sparse connectivity (#489, #491).
- Fixed issue where, if $(addToInSynDelay) was used in spike-like event code, it was not detected and dendritic delay structures were not correctly created (#494).
- Fixed issue where precision wasn't being correctly applied to neuron additional input variables and sparse connectivity row-build state variable initialisation, meaning double-precision code could unintentionally be generated (#489).
(A sketch of the Toeplitz structure underlying convolutional connectivity follows this entry.)
Type Of Technology Software 
Year Produced 2022 
Open Source License? Yes  
Impact As well as the ongoing impact of supporting the user community, specific advances were key to the results in doi:10.1088/2634-4386/ac5ac5 and to initial work towards the preprint arXiv:2212.01232v1
URL https://zenodo.org/record/6047460
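To see why SynapseMatrixConnectivity::TOEPLITZ helps, note that convolutional connectivity is a Toeplitz-structured weight matrix: the same small kernel repeats along the diagonal, so it can be stored once and expanded on the fly instead of materialising every synapse. A 1D illustration in plain NumPy (not GeNN code):

```python
import numpy as np

# Convolution expressed as a Toeplitz-structured weight matrix: the same
# three kernel values repeat on every row, shifted along the diagonal.
kernel = np.array([0.25, 0.5, 0.25])
n = 8
toeplitz = np.zeros((n, n))
for row in range(n):
    for k, w in enumerate(kernel):
        col = row + k - 1                 # kernel centred on the diagonal
        if 0 <= col < n:
            toeplitz[row, col] = w        # only 3 distinct values stored

x = np.arange(n, dtype=float)
# Applying the matrix reproduces a zero-padded 'same' convolution.
assert np.allclose(toeplitz @ x, np.convolve(x, kernel, mode="same"))
```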
 
Title genn-team/ml_genn: mlGeNN 2.0 
Description As well as continuing to support the conversion of ANNs trained using TensorFlow to SNNs, this release adds a large amount of new functionality which enables SNNs to be defined from scratch in mlGeNN and trained directly using e-prop.
User Side Changes:
- New model description API inspired by Keras (see documentation)
- Extensible callback system allowing custom logic, including recording state, to be triggered mid-simulation (see documentation)
- Extensible metrics system, allowing various metrics to be calculated efficiently (see documentation)
- Training using the e-prop learning rule
- Conversion of ANNs trained in TensorFlow is now handled through the ml_genn_tf module (see documentation)
Known issues:
- The SpikeNorm algorithm for converting deep ANNs to rate-coded SNNs is currently broken - if you require this functionality, please stick with mlGeNN 1.0
(A hypothetical sketch of the callback pattern follows this entry.)
Type Of Technology Software 
Year Produced 2023 
Impact Enabled work on an accepted conference paper at NICE 2023
URL https://zenodo.org/record/7705308
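The callback system described above follows the familiar Keras pattern: user objects expose hooks that the training or simulation loop invokes at defined points. The sketch below is hypothetical Python; the class and method names are invented for illustration and are not mlGeNN's actual identifiers (see the linked documentation for the real API).

```python
# Hypothetical illustration of a Keras-style callback hook; none of
# these names are mlGeNN's real API.
class Callback:
    """Base class: subclasses override the hooks they care about."""
    def on_timestep_end(self, timestep, state):
        pass

class SpikeCountRecorder(Callback):
    """Record a quantity mid-simulation via the callback hook."""
    def __init__(self):
        self.counts = []

    def on_timestep_end(self, timestep, state):
        self.counts.append(state["spike_count"])  # hypothetical state dict

def run_simulation(n_timesteps, callbacks=()):
    for t in range(n_timesteps):
        state = {"spike_count": t % 5}  # stand-in for a real model step
        for cb in callbacks:            # fire every registered callback
            cb.on_timestep_end(t, state)

recorder = SpikeCountRecorder()
run_simulation(10, callbacks=[recorder])
print(recorder.counts)
```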
 
Description Co-organised tutorial on our GeNN software at CNS*2022 in Melbourne 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact I co-organised a half-day tutorial on using our GeNN software at CNS*2022 in Melbourne. Approximately 30 people attended; the content sparked questions and discussion on the day and led to increased interest in GeNN via email and GitHub discussions afterwards.
Year(s) Of Engagement Activity 2022
URL https://www.cnsorg.org/cns-2022-tutorials#T6
 
Description Co-organised workshop on Bio-inspired active AI at CNS*2022 in Melbourne 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact We co-organised a one-day workshop on bio-inspired active AI at CNS*2022 in Melbourne, with 10 invited speakers. This event strengthened existing collaborations and provided an excellent opportunity for networking.
Year(s) Of Engagement Activity 2022
URL http://users.sussex.ac.uk/~tn41/CNS2022_workshop/
 
Description Interview with Code for Thought podcast 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Interview alongside 7 of the other new EPSRC fellows in the UK, in which we discussed our hopes, ideas and aspirations for our fellowships.
Year(s) Of Engagement Activity 2022
URL https://codeforthought.buzzsprout.com/1326658/9859960-join-the-fellowship
 
Description Invited to panel on RSE fellowship scheme at SeptembRSE 2021 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact I was invited to join other members of my RSE fellowship cohort on a panel at SeptembRSE 2021 discussing our fellowship plans.
Year(s) Of Engagement Activity 2021
URL https://septembrse.github.io/#/event/L1005