
Unlocking spiking neural networks for machine learning research

Lead Research Organisation: University of Sussex
Department Name: Sch of Engineering and Informatics

Abstract

In the last decade there has been an explosion in artificial intelligence research in which artificial neural networks, emulating biological brains, are used to solve problems ranging from obstacle avoidance in self-driving cars to playing complex strategy games. This has been driven by mathematical advances and powerful new computer hardware which have allowed large 'deep networks' to be trained on huge amounts of data. For example, after training a deep network on 'ImageNet' - which consists of over 14 million manually annotated images - it can accurately identify the content of images. However, while these deep networks have been shown to learn similar patterns of connections to those found in the parts of our brains responsible for early visual processing, they differ from real brains in several important ways, especially in how individual neurons communicate. Neurons in real brains exchange information using relatively infrequent electrical pulses known as 'spikes', whereas, in typical artificial neural network models, the spikes are abstracted away and values representing the 'rates' at which spikes would be emitted are continuously exchanged instead. However, neuroscientists believe that large amounts of information are transmitted in the precise times at which spikes are produced. Artificial 'spiking neural networks' (SNNs) can harness these properties, making them useful in applications which are challenging for current models, such as real-world robotics and the processing of data with a temporal component, such as video.

However, spiking neural networks can only be used effectively if suitable computer hardware and software are available. While there is existing software for simulating spiking neural networks, it has mostly been designed for studying real brains rather than for building AI systems. In this project, I am going to build a new software package which bridges this gap. It will combine abstractions and processes familiar to machine learning researchers with techniques developed for brain simulation, allowing exciting new SNN models to be used by AI researchers. We will also explore how spiking models can be used with a special new type of sensor which directly outputs spikes rather than a stream of images.
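
To make the distinction between rate-based and spike-based communication concrete, here is a minimal, self-contained sketch (illustrative only, and not part of the project's software) of a leaky integrate-and-fire neuron emitting discrete spikes, together with the single 'rate' value that a conventional artificial neuron would exchange instead; all parameter values are invented for the example:

    # Toy leaky integrate-and-fire (LIF) neuron. Parameters are arbitrary,
    # chosen only to illustrate spike-based vs rate-based communication.
    dt, t_max = 1.0, 200.0                 # timestep and duration (ms)
    tau_m, v_thresh, v_reset = 20.0, 1.0, 0.0
    i_input = 0.06                         # constant input current (arbitrary units)

    v, spike_times = 0.0, []
    for step in range(int(t_max / dt)):
        # Leaky integration: dv/dt = (-v + R*I) / tau_m
        v += dt * (-v + i_input * tau_m) / tau_m
        if v >= v_thresh:                  # threshold crossing emits a spike...
            spike_times.append(step * dt)
            v = v_reset                    # ...and the membrane potential resets

    # A rate-based ("analogue") unit abstracts this activity into one number:
    rate = len(spike_times) / (t_max / 1000.0)   # spikes per second
    print(f"{len(spike_times)} spikes at {spike_times} ms -> mean rate {rate:.1f} Hz")

The spiking description carries timing information (the exact spike times) that the single rate value discards; this extra information is what spiking neural networks aim to exploit.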

In the first phase of the project, I will focus on using Graphics Processing Units (GPUs) to accelerate spiking neural networks. These devices were originally developed to speed up 3D games but have evolved into general-purpose devices, widely used to accelerate scientific and AI applications. However, while these devices have become incredibly powerful and are well suited to processing large amounts of data simultaneously, they are less suited to 'live' applications, such as when video must be processed as fast as possible. In these situations, Field Programmable Gate Arrays (FPGAs) - devices where the hardware itself can be re-programmed - can be significantly faster and are already being used behind the scenes in data centres. In this project, by incorporating support for FPGAs into our new software, we will make these devices more accessible to AI researchers and unlock new possibilities for using biologically-inspired spiking neural networks to learn in real time.

As well as working on these new research strands, I will also dedicate time during my fellowship to advocating for research software engineering as a valuable component of academic institutions, via both knowledge exchange and research funding. In the shorter term, I will work to develop a community of researchers involved in writing software at Sussex by organising an informal monthly 'surgery', as well as delivering specialised training on programming Graphics Processing Units and more fundamental computational and programming training for new PhD students. Finally, I will develop internship and career development opportunities for undergraduate students to gain experience in research software engineering.

Publications

 
Description Efficient spike-based machine learning on existing HPC hardware
Amount £17,215 (GBP)
Funding ID CPQ-2417168 
Organisation Oracle Corporation 
Sector Private
Country United States
Start 03/2022 
End 04/2023
 
Title Data for 'Estimating orientation in Natural scenes: A Spiking Neural Network Model of the Insect Central Complex' (2024) 
Description Data for paper published in PLOS Computational Biology (Aug 2024).

Abstract: The central complex of insects contains cells, organised as a ring attractor, that encode head direction. The 'bump' of activity in the ring can be updated by idiothetic cues and external sensory information. Plasticity at the synapses between these cells and the ring neurons that are responsible for bringing sensory information into the central complex has been proposed to form a mapping between visual cues and the heading estimate, which allows for more accurate tracking of the current heading than if only idiothetic information were used. In Drosophila, ring neurons have well-characterised non-linear receptive fields. In this work we produce synthetic versions of these visual receptive fields using a combination of excitatory inputs and mutual inhibition between ring neurons. We use these receptive fields to bring visual information into a spiking neural network model of the insect central complex based on the recently published Drosophila connectome. Previous modelling work has focused on how this circuit functions as a ring attractor using the same type of simple visual cues commonly used experimentally. While we initially test the model on these simple stimuli, we then go on to apply the model to complex natural scenes containing multiple conflicting cues. We show that this simple visual filtering provided by the ring neurons is sufficient to form a mapping between heading and visual features and to maintain the heading estimate in the absence of angular velocity input. The network is successful at tracking heading even when presented with videos of natural scenes containing conflicting information from environmental changes and translation of the camera. (A minimal conceptual sketch of ring attractor dynamics follows this record.)

All code used for this project has been made publicly available on GitHub: https://github.com/stenti/stentiford_cx_ra. Data required to run the code can be found here:
- mp4 files: all raw videos used as input to the model. '3rev_static' indicates videos recorded with the camera rotating for 3 revolutions in a stationary position; 'circling' indicates videos recorded using the spidercam robot either rotating on the spot ('static') or moving in a circle ('super'). (To be loaded by cx_ra_rn.py.)
- pkl files: simple stimuli input that does not require preprocessing (to be loaded by cx_ra_rn.py)
- npy files: weight matrices between different populations of cells (to be loaded by cx_ra_rn.py)
Type Of Material Database/Collection of data 
Year Produced 2024 
Provided To Others? Yes  
Impact This dataset was key to the results published in 10.1371/journal.pcbi.1011913 
URL https://sussex.figshare.com/articles/dataset/Data_for_Estimating_orientation_in_Natural_scenes_A_Spi...
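
As a conceptual companion to this dataset, below is a minimal rate-based ring attractor sketch showing how a 'bump' of activity can encode and update a heading estimate. This is not the published model (which is a spiking network based on the Drosophila connectome with visual input via ring neurons); the connectivity and parameters here are invented for illustration:

    import numpy as np

    # Minimal rate-based ring attractor: cosine-tuned recurrent connectivity
    # holds a "bump" of activity on a ring of heading cells, and a small
    # asymmetric (angular-velocity-like) drive rotates it.
    n = 36                                        # heading cells around the ring
    theta = np.linspace(0.0, 2 * np.pi, n, endpoint=False)
    w = np.cos(theta[:, None] - theta[None, :])   # local excitation, zero mean

    r = np.maximum(np.cos(theta - np.pi), 0.0)    # initial bump centred on 180 deg
    for step in range(300):
        u = w @ r / n                             # recurrent input
        u += 0.1 * (np.roll(r, -1) - r)           # rotational (velocity) drive
        r = np.maximum(u, 0.0)                    # rectification
        r /= r.max() + 1e-12                      # global normalisation (inhibition)

    heading = np.angle(np.sum(r * np.exp(1j * theta)))  # population-vector decode
    print(f"decoded heading: {np.degrees(heading) % 360:.1f} deg")

Without the velocity drive the bump stays put, which is the property the paper probes when testing whether visual input alone can maintain the heading estimate.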
 
Title Dataset for paper "mlGeNN: Accelerating SNN inference using GPU-Enabled Neural Networks" 
Description Dataset for paper accepted in IOP Neuromorphic Computing and Engineering, March 2022. The dataset contains trained weights from TensorFlow 2.4.0 for the following models:
- vgg16_imagenet_tf_weights.h5 - VGG-16 model trained on the ImageNet ILSVRC dataset
- vgg16_tf_weights.h5 - VGG-16 model trained on the CIFAR-10 dataset
- resnet20_cifar10_tf_weights.h5 - ResNet-20 model trained on the CIFAR-10 dataset
- resnet34_imagenet_tf_weights.h5 - ResNet-34 model trained on the ImageNet ILSVRC dataset
(A sketch of inspecting these weights files follows this record.)

Abstract: "In this paper we present mlGeNN - a Python library for the conversion of artificial neural networks (ANNs) specified in Keras to spiking neural networks (SNNs). SNNs are simulated using GeNN with extensions to efficiently support convolutional connectivity and batching. We evaluate converted SNNs on CIFAR-10 and ImageNet classification tasks and compare the performance to both the original ANNs and other SNN simulators. We find that performing inference using a VGG-16 model, trained on the CIFAR-10 dataset, is 2.5x faster than BindsNet and, when using a ResNet-20 model trained on CIFAR-10 with FewSpike ANN to SNN conversion, mlGeNN is only a little over 2x slower than TensorFlow."

Funding:
- Brains on Board, grant number EP/P006094/1
- ActiveAI, grant number EP/S030964/1
- Unlocking spiking neural networks for machine learning research, grant number EP/V052241/1
- European Union's Horizon 2020 research and innovation programme, Grant Agreement 945539
Type Of Material Database/Collection of data 
Year Produced 2022 
Provided To Others? Yes  
Impact The large trained models in this dataset provide a useful benchmark for our mlGeNN software 
URL https://sussex.figshare.com/articles/dataset/Dataset_for_paper_mlGeNN_Accelerating_SNN_inference_usi...
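
The .h5 files above are weights-only Keras checkpoints, so a quick way to inspect them is with h5py; this is an illustrative sketch (the file name is taken from the listing above), not code from the paper:

    import h5py

    # Keras stores weights-only checkpoints as HDF5 groups named per layer.
    with h5py.File("vgg16_tf_weights.h5", "r") as f:
        def show(name, obj):
            if isinstance(obj, h5py.Dataset):
                print(name, obj.shape, obj.dtype)  # e.g. conv kernels and biases
        f.visititems(show)

To actually use the weights, one would rebuild the matching Keras architecture, call model.load_weights("vgg16_tf_weights.h5") and then convert the ANN to an SNN with the ml_genn_tf module described in the abstract.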
 
Title EvDownsampling dataset 
Description This dataset is used in the publication "EvDownsampling: A Robust Method For Downsampling Event Camera Data", ECCV Workshop on Neuromorphic Vision: Advantages and Applications of Event Cameras (29/09/2024).

This dataset contains event streams of highly dynamic real-world scenes collected using two DVS cameras of different spatial resolutions - a DVXplorer (640×480 px) and a Davis346 (346×260 px). Both cameras simultaneously recorded each scene with negligible parallax error. The dataset is provided to test event-based spatio-temporal downsampling techniques by comparing downsampled higher-resolution recordings with matching lower-resolution recordings, as explained in our publication above. (An illustrative downsampling sketch follows this record.)

There are four classes {class_folder} of scenes:
- Traffic: natural lighting. Bus and car moving across the camera's visual field with several pedestrians. 6 seconds long.
- HandGestures: fluorescent lighting. Person either waving their hand, waving their arms or doing jumping jacks. 12-15 seconds long.
- Corridor: fluorescent lighting. Moving through corridors. One corridor scene (Pevensey) has a carpet which provides texture, while the other scene (Arundel) does not. 18-24 seconds long.
- Cars: natural lighting. Car moving across the camera's visual field with few pedestrians. 3-5 seconds long.

Each dataset/{class_folder} contains two folders consisting of:
- Videos of the scene recordings captured by both DVS cameras placed side-by-side (.mp4)
- Raw event data in the form of (x, y, timestamp, polarity) in AEDAT 4 format (.aedat4)

The script dualCam_dvRead.py can be used to convert the .aedat4 files into NumPy format and to generate frame reconstructions. The syntax to call the script from the command line is:

python3 dualCam_dvRead.py --data_folder {class_folder} --input {scene_recording} --publisher_rate {publisher_rate}

where:
- class_folder is the class of the scene recording, e.g. corridor
- scene_recording is the specific recording in that class, e.g. Pevensey
- publisher_rate determines the frame rate of published images (in fps), e.g. 1000

More information is available at: https://github.com/anindyaghosh/EvDownsampling. The conference website is: https://sites.google.com/view/nevi2024/home-page.
Type Of Material Database/Collection of data 
Year Produced 2024 
Provided To Others? Yes  
Impact After presenting this work at a conference, we have had numerous researchers reach out to collaborate on using this dataset 
URL https://sussex.figshare.com/articles/dataset/EvDownsampling_dataset/26528146
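
For orientation, here is a naive spatio-temporal downsampling baseline of the kind this dataset is designed to evaluate, assuming events are available as (x, y, timestamp, polarity) rows (e.g. after conversion with dualCam_dvRead.py). This integer-binning scheme is only an illustrative strawman, not the EvDownsampling method itself:

    import numpy as np

    def downsample_events(events: np.ndarray, spatial: int, temporal_us: int) -> np.ndarray:
        """Naive event downsampling: integer-divide coordinates and timestamps."""
        out = events.copy()
        out[:, 0] //= spatial          # x -> coarser pixel grid
        out[:, 1] //= spatial          # y -> coarser pixel grid
        out[:, 2] //= temporal_us      # timestamps -> coarser time bins
        # Keep at most one event per (x, y, time bin, polarity) so the
        # reduced-resolution stream contains no duplicates.
        return np.unique(out, axis=0)

    events = np.array([[640, 480, 1005, 1],    # toy (x, y, t_us, polarity) events
                       [641, 481, 1010, 1],
                       [12, 300, 2000, 0]], dtype=np.int64)
    print(downsample_events(events, spatial=2, temporal_us=1000))

Comparing such downsampled high-resolution recordings against the real lower-resolution camera is exactly the evaluation the paired recordings enable.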
 
Title Hoverfly (Eristalis tenax) descending neurons respond to pursuits of artificial targets 
Description Many animals use motion vision information to control dynamic behaviors. Predatory animals, for example, show an exquisite ability to detect rapidly moving prey followed by pursuit and capture. Such target detection is not only used by predators but can also play an important role in conspecific interactions. Male hoverflies (Eristalis tenax), for example, vigorously defend their territories against conspecific intruders. Visual target detection is believed to be subserved by specialized target-tuned neurons that are found in a range of species, including vertebrates and arthropods. However, how these target-tuned neurons respond to actual pursuit trajectories is currently not well understood. To redress this, we recorded extracellularly from target selective descending neurons (TSDNs) in male Eristalis tenax hoverflies. We show that the neurons have dorso-frontal receptive fields, with a preferred direction up and away from the visual midline, with a clear division into a TSDNLeft and a TSDNRight cluster. We next reconstructed visual flow-fields as experienced during pursuits of artificial targets (black beads). We recorded TSDN responses to six reconstructed pursuits and found that each neuron responded consistently at remarkably specific time points, but that these time points differed between neurons. We found that the observed spike probability was correlated with the spike probability predicted from each neuron's receptive field and size tuning. Interestingly, however, the overall response rate was low, with individual neurons responding to only a small part of each reconstructed pursuit. In contrast, the TSDNLeft and TSDNRight populations responded to substantially larger proportions of the pursuits, but with lower probability. This large variation between neurons could be useful if different neurons control different parts of the behavioral output. 
Type Of Material Database/Collection of data 
Year Produced 2023 
Provided To Others? Yes  
Impact Current Biology paper 
URL https://datadryad.org/stash/dataset/doi:10.5061/dryad.tdz08kq4d
 
Title Research data for paper "Loss shaping enhances exact gradient learning with Eventprop in Spiking Neural Networks" 
Description The data in this repository was generated in the context of training spiking neural networks for keyword recognition using the Eventprop algorithm. It accompanies the paper 'Loss shaping enhances exact gradient learning with Eventprop in Spiking Neural Networks', Neuromorphic Computing and Engineering (09 Jan 2025). (A sketch of loading these files follows this record.)

The data relates to two benchmarks:
- Spiking Heidelberg Digits (SHD) (Cramer et al. 2022)
- Spiking Speech Commands (SSC), derived from Google Speech Commands (Warden et al. 2018)

The data was generated and analysed with the code available on GitHub at https://github.com/tnowotny/genn_eventprop. It is organised into 6 zip volumes, each of which corresponds to a parameter scan of networks trained on the SHD data set (4 scans) or the SSC data set (2 scans).

scan_SHD_base_xval.zip
Results from leave-one-speaker-out cross-validation runs on the "base SHD models", i.e. networks trained with Eventprop, including regularisation but no augmentations and only one hidden layer. There were 160 parameter combinations:
- four different loss types - LOSS_TYPE: sum, sum_weigh_exp, first_spike_exp, max - each with individual best settings for HIDDEN_OUTPUT_MEAN, HIDDEN_OUTPUT_STD, LBD_UPPER, ETA
- scaling of LBD_UPPER from its base value by 0.1, 0.5, 1.0, 5.0, 10.0
- RECURRENT: False, True
- TAU_MEM: 20, 40
- TAU_SYN: 5, 10
For each of the combinations, there are two files:
- SHD_xval_xxxx.json: a JSON file containing the used parameter settings.
- SHD_xval_xxxx_results.txt: an ASCII file containing, in each row, the metrics after each training epoch, separated by blanks: epoch; training accuracy; training loss; validation accuracy; validation loss; mean, standard deviation, minimum and maximum of the number of spikes in the hidden layer; mean, standard deviation, minimum and maximum of the number of spikes per neuron per trial across a mini-batch; number of silent neurons; time (s) since training start.

scan_SHD_base_traintest.zip
Results from training the base models on the SHD training set, interleaved with testing on the test set. This uses the 8 parameter combinations from scan_SHD_base_xval.zip that use the four LOSS_TYPE choices and RECURRENT False or True. For each of these 8 cases, the other parameters were taken from the scan_SHD_base_xval.zip run with the best mean cross-validation score. Each of the 8 runs was repeated 8 times with different random seeds. The files included are:
- SHD_tt_xxxx.json: parameter settings as above.
- SHD_tt_xxxx_results.txt: results file with columns as above, except that columns 4 and 5 now relate to test accuracy and test loss respectively.
- SHD_tt_xxxx_best.txt: the best result across epochs (same data format as SHD_tt_xxxx_results.txt).
- SHD_tt_xxxx_w_input_hidden_best.npy: the weight matrix of input-to-hidden connections at the epoch where the best training accuracy was achieved (early stopping on training accuracy). The weights are arranged in "pre-major" order, i.e. entries 1 to n_hidden are the weights from input neuron 0 to all hidden neurons, followed by the weights from input neuron 1, and so on. All weight matrices are stored in this way.
- SHD_tt_xxxx_w_hidden_output_best.npy: the weight matrix of hidden-to-output connections at the best epoch.
- If the network is recurrent, there is also SHD_tt_xxxx_w_hidden0_hidden0_best.npy: the recurrent weight matrix from the hidden layer to itself.

scan_SHD_final_xval.zip
Results from the ablation experiments on the full SHD models. Leave-one-speaker-out cross-validation runs were performed to determine the best regularisation strength LBD_UPPER for each of the following parameter combinations (512 combinations):
- DT_MS: 1, 2, 5, 10, 20
- NUM_HIDDEN: 64, 128, 256, 512, 1024 (for DT = 1 or 2); 256, 1024 (for other DT)
- N_INPUT_DELAY: 0, 10
- AUGMENTATION: None; blend: [0.5, 0.5]; random_shift: 40.0; blend & shift
- HIDDEN_NEURON_TYPE: LIF, hetLIF
- TRAIN_TAU: False, True
5 different LBD_UPPER values were tested with 2 repeats, each with a different random seed (5120 runs in total). The files included are:
- SHD_xval_xxxx.json: parameter settings as above.
- SHD_xval_xxxx_results.txt: results file with columns as described for scan_SHD_base_xval above.

scan_SHD_final_traintest.zip
Results from training on the SHD training set, interleaved with testing on the test set. This was done for 320 different parameter settings, corresponding to DT = 1, 2 only, choosing the best LBD_UPPER as determined by the scan_SHD_final_xval run where the average validation error in the epochs of best training error in each fold was best. For each of the 320 combinations, 8 independent runs with different random seeds were executed (2560 runs in total). For each of the runs, there are 3 files:
- SHD_tt_xxxx.json: a JSON file with the used parameter settings.
- SHD_tt_xxxx_results.txt: the results file with columns as described before; columns 4 and 5 relate to the accuracy and loss on the test set.
- SHD_tt_xxxx_best.txt: the values from the epoch when the test accuracy was best (same columns as SHD_tt_xxxx_results.txt).
In addition, for the runs that had the best test results (within the 8 repeats), we also include:
- SHD_tt_0004_w_input_hidden_best.npy: the weights from input to hidden layer.
- SHD_tt_0004_w_hidden_output_best.npy: the weights from hidden to output layer.
- SHD_tt_0004_w_hidden0_hidden0_best.npy: the recurrent weights from the hidden layer to itself (in this scan all networks are recurrent).

scan_SSC_final.zip
Results from the ablation experiments on SSC. We ran the same parameter combinations as for scan_SHD_final_xval but, as SSC has a dedicated validation set, the runs were performed as training epochs interleaved with testing on the validation set (5120 runs). The provided files are:
- SSC_xxxx.json: the parameter values used.
- SSC_xxxx_results.txt: the results of the training/validation run.
- SSC_5118_best.txt: the row from SSC_xxxx_results.txt with the best validation error.
We subsequently ran testing on the trained network from the epoch where the validation error was best. From these runs we have:
- SSC_5118_test.json: the parameter settings of the test run.
- SSC_5118_test_results.txt: the results of the test run. This has the same columns as the training runs, except that columns 2 and 3 are devoid of meaning.
For the runs of a given parameter setting that were best across LBD_UPPER and random seed values, we also provide:
- SSC_xxxx_w_input_hidden_best.npy: the weights from input to hidden layer for the epoch where the validation error was best. These are the connection weights used for the testing run.
- SSC_xxxx_w_hidden_output_best.npy: the corresponding weights from hidden to output layer.
- SSC_xxxx_w_hidden0_hidden0_best.npy: the corresponding recurrent weights from the hidden layer to itself (in this scan all networks are recurrent).

scan_SSC_final_repeats.zip
In this scan we made 6 more repeated runs for all parameter combinations from scan_SSC_final with the best-performing LBD_UPPER values (1920 runs). The files provided are exactly as for scan_SSC_final.

Relationship to the publication
- Figure 2 is based on scan_SHD_base_xval and scan_SHD_base_traintest; its panels can be generated using the scripts plot_SHD_base_curves.py and plot_SHD_base_summary.py.
- Figure 3 is based on scan_SHD_final_traintest; its panels can be generated with the script plot_final_ablation.py with the argument "SHD".
- Figure 4 is based on scan_SSC_final and scan_SSC_final_repeats and can be generated with the script plot_final_ablation.py with the argument "SSC".
Type Of Material Database/Collection of data 
Year Produced 2025 
Provided To Others? Yes  
Impact The very detailed parameter exploration in this dataset has enabled computing time to be saved in subsequent works such as 10.48550/arXiv.2501.07331 
URL https://sussex.figshare.com/articles/dataset/Research_data_for_paper_Loss_shaping_enhances_exact_gra...
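
A short sketch of how one run from these scans might be loaded, following the file patterns and column descriptions above (using run 0004 as in the listing). The NUM_HIDDEN JSON key and the flat, pre-major storage of the weight arrays are assumptions based on the description:

    import json
    import numpy as np

    with open("SHD_tt_0004.json") as f:
        params = json.load(f)                    # parameter settings of the run

    # Results files are blank-separated ASCII, one row per epoch; in the
    # traintest scans, columns 4 and 5 (1-based) are test accuracy and loss.
    results = np.loadtxt("SHD_tt_0004_results.txt")
    best_epoch = int(results[np.argmax(results[:, 3]), 0])

    # Weights are stored in "pre-major" order: the first n_hidden entries are
    # input neuron 0's weights to every hidden neuron, then input neuron 1, etc.
    w = np.load("SHD_tt_0004_w_input_hidden_best.npy")
    n_hidden = params["NUM_HIDDEN"]              # assumed key for hidden layer size
    w_input_hidden = w.reshape(-1, n_hidden)     # shape: (n_input, n_hidden)
    print(best_epoch, w_input_hidden.shape)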
 
Title Stanmer Park outdoor navigational data 
Description This dataset contains omnidirectional 1440×1440 resolution images taken using a Kodak Pixpro SP360 camera, paired with RTK GPS information obtained using a simpleRTK2B - 4G NTRIP kit and fused yaw, pitch and roll data recorded from a BNO055 IMU. The data was collected using a 4-wheel ground robot that was manually controlled by a human operator. The robot was driven 15 times along a route at Stanmer Park (shown in map.png). The route consists mostly of open fields and a narrow path through a forest, and is approximately 700m long. The recordings took place on various days and at various times starting in March 2021, with the date and time indicated by the filename. For example, '20210420_135721.zip' corresponds to a route driven on 20/04/2021, starting at 13:57:21 GMT. During the recordings the weather varied from clear skies and sunny days to overcast and low-light conditions. Each recording consists of an mp4 video of the camera footage for the route, and a database_entries.csv file with the following columns:
- Timestamp of video frame (in ms)
- X, Y and Z coordinates (in mm) and zone representing location in UTM coordinates from GPS
- Heading, pitch and roll (in degrees) from IMU. In some early routes the IMU failed; when this occurs these values are recorded as "NaN"
- Speed and steering angle commands being sent to the robot at that time
- GPS quality (1=GPS, 2=DGNSS, 4=RTK Fixed and 5=RTK Float)
- X, Y and Z coordinates (in mm) fitted to a degree-one polynomial to smooth out GPS noise
- Heading (in degrees) derived from smoothed GPS coordinates
- IMU heading (in degrees) with discontinuities resulting from IMU issues fixed
For completeness, each folder also contains a database_entries_original.csv containing the data before pre-processing. The pre-processing is documented in more detail in pre_processing_notes.pdf. (A sketch of loading the .csv follows this record.)
Type Of Material Database/Collection of data 
Year Produced 2024 
Provided To Others? Yes  
Impact Conference paper in preparation 
URL https://sussex.figshare.com/articles/dataset/Stanmer_Park_outdoor_navigational_data/25118383
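
A minimal sketch of loading one recording's pose data with pandas; the folder and column names here are assumptions based on the description above (see pre_processing_notes.pdf in the dataset for the authoritative schema):

    import pandas as pd

    df = pd.read_csv("20210420_135721/database_entries.csv")

    # GPS quality: 1=GPS, 2=DGNSS, 4=RTK Fixed, 5=RTK Float. Keep only the
    # highest-accuracy fixes and drop rows where the IMU failed (NaN heading).
    good = df[df["GPS quality"] == 4].dropna(subset=["Heading"])
    print(f"{len(good)}/{len(df)} frames with RTK-fixed GPS and a valid IMU heading")

The same pattern applies to the companion 'UoS campus and Stanmer park' dataset below, whose .csv files carry similar columns.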
 
Title UoS campus and Stanmer park outdoor navigational data 
Description This dataset contains omnidirectional 1440×1440 resolution images taken using a Kodak Pixpro SP360 camera, paired with RTK GPS information obtained using a simpleRTK2B - 4G NTRIP kit and fused yaw, pitch and roll data recorded from a BNO055 IMU. The data was collected using a 4-wheel ground robot (SuperDroid IG42-SB4-T) that was manually controlled by a human operator. The robot was driven 10 times along a route on the University of Sussex campus (shown in campus.png) and 10 times at the adjacent Stanmer Park (shown in stanmer.png). The first route is a mix of urban structures (university buildings), small patches of trees and paths populated by people, and is approximately 700m long. The second route consists mostly of open fields and a narrow path through a forest, and is approximately 600m long. The recordings took place on various days and at various times starting in May 2023, with the date and time indicated by the filename. For example, 'campus_route5_2023_11_22_102925.zip' corresponds to the 5th route recorded on the Sussex campus on 22/11/2023, starting at 10:29:25 GMT. During the recordings the weather varied from clear skies and sunny days to overcast and low-light conditions. Each recording consists of the .jpg files that make up the route, and a .csv file with the following columns:
- X, Y and Z coordinates (in mm) and zone representing location in UTM coordinates from GPS
- Heading, pitch and roll (in degrees) from IMU. In some early routes the IMU failed; when this occurs these values are recorded as "NaN"
- Filename of corresponding camera image
- Latitude (in decimal degrees north), longitude (in decimal degrees west) and altitude (in m) from GPS
- GPS quality (1=GPS, 2=DGNSS, 4=RTK Fixed and 5=RTK Float) and horizontal dilution (in mm)
- Timestamp (in ms)
Type Of Material Database/Collection of data 
Year Produced 2023 
Provided To Others? Yes  
Impact Conference paper in preparation 
URL https://sussex.figshare.com/articles/dataset/UoS_campus_and_Stanmer_park_outdoor_navigational_data/2...
 
Description Collaboration with Giulia D'Angelo 
Organisation Czech Technical University in Prague
Department Faculty of Electrical Engineering
Country Czech Republic 
Sector Academic/University 
PI Contribution Expertise on SNN simulation using GeNN and ideas on student projects
Collaborator Contribution Expertise on event-based vision systems and ideas on student projects
Impact None yet
Start Year 2024
 
Description Collaboration with Professor Karin Nordstrom at Flinders Medical Centre 
Organisation Flinders Medical Centre
Country Australia 
Sector Hospitals 
PI Contribution I have helped refine analysis methods for experimental data and am working on providing computational models to validate hypotheses about the neuroanatomy of hoverflies.
Collaborator Contribution Providing experimental data and expertise to help develop computational models
Impact None as yet
Start Year 2022
 
Description Structure to Function compute time agreement 
Organisation Graz University of Technology
Country Austria 
Sector Academic/University 
PI Contribution We have provided our GeNN software
Collaborator Contribution * Julich Research Centre has provided compute time on their JUWELS and JUWELS Booster supercomputing systems, as well as expertise in analysing large spiking neural network models * TU Graz has provided expertise in implementing bio-inspired learning rules and in working with the large models of mouse cortex developed by the Allen Institute
Impact "Efficient GPU training of LSNNs using eProp" publication Accepted conference paper in NICE 2023 workshop ArXiv preprint 2212.01232
Start Year 2022
 
Description Structure to Function compute time agreement 
Organisation Julich Research Centre
Country Germany 
Sector Academic/University 
PI Contribution We have provided our GeNN software
Collaborator Contribution * Julich Research Centre has provided compute time on their JUWELS and JUWELS Booster supercomputing systems, as well as expertise in analysing large spiking neural network models * TU Graz has provided expertise in implementing bio-inspired learning rules and in working with the large models of mouse cortex developed by the Allen Institute
Impact "Efficient GPU training of LSNNs using eProp" publication Accepted conference paper in NICE 2023 workshop ArXiv preprint 2212.01232
Start Year 2022
 
Title genn-team/genn: GeNN 4.8.0 
Description Release Notes for GeNN 4.8.0
This release adds a number of significant new features to GeNN, as well as a number of bug fixes that have been identified since the 4.7.1 release. (A minimal PyGeNN usage sketch follows this record.)

User Side Changes
- Custom updates extended to work on SynapseMatrixWeight::KERNEL weight update model variables (#524).
- Custom updates extended to perform reduction operations across neurons as well as batches (#539).
- PyGeNN can now automatically find Visual Studio build tools using functionality in setuptools.msvc.msvc14_get_vc_env (#471).
- GeNN now comes with a fully-functional Docker image and releases will be distributed via Dockerhub as well as existing channels. Special thanks to @Stevinson, @jamesturner246 and @bdevans for their help on this (see the README for more information) (#548 and #550).

Bug fixes
- Fixed bug relating to merging of synapse groups which perform presynaptic "revInSyn" updates (#520).
- Added missing parameter to the PyGeNN pygenn.genn_model.create_custom_postsynaptic_class function so that postsynaptic models with extra global parameters can be created (#522).
- Correctly substitute 0 for $(batch) when using the single-threaded CPU backend (#523).
- Fixed issues building PyGeNN with Visual Studio 2017 (#533).
- Fixed bug where a model might not be rebuilt if its sparse connectivity initialisation snippet was changed (#547).
- Fixed longstanding bug in the gen_input_structured tool - used by some user projects - where data was written outside of array bounds (#551).
- Fixed issue with debug mode of genn-buildmodel.bat when used with the single-threaded CPU backend (#551).
- Fixed issue where, if custom update models were the only part of a model that required an RNG for initialisation, one might not be instantiated (#540).
Type Of Technology Software 
Year Produced 2022 
Open Source License? Yes  
Impact As well as the ongoing impact of helping the user community, specific advances were used in both the accepted conference paper at NICE 2023 and the arXiv preprint 2212.01232v1 
URL https://zenodo.org/record/7267620
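
Since these release notes refer to the PyGeNN interface, here is a minimal usage sketch written against the GeNN 4.x Python API; model parameters are arbitrary and the exact calls should be checked against the PyGeNN documentation for the release in use:

    import numpy as np
    from pygenn.genn_model import GeNNModel

    # Build a model with one population of 100 built-in LIF neurons driven
    # by a constant offset current, and record its spikes.
    model = GeNNModel("float", "lif_demo")
    model.dT = 1.0  # simulation timestep (ms)

    lif_params = {"C": 1.0, "TauM": 20.0, "Vrest": -65.0, "Vreset": -65.0,
                  "Vthresh": -52.0, "Ioffset": 1.0, "TauRefrac": 2.0}
    lif_vars = {"V": -65.0, "RefracTime": 0.0}
    pop = model.add_neuron_population("lif", 100, "LIF", lif_params, lif_vars)
    pop.spike_recording_enabled = True

    model.build()
    model.load(num_recording_timesteps=1000)
    while model.t < 1000.0:
        model.step_time()

    model.pull_recording_buffers_from_device()
    spike_times, spike_ids = pop.spike_recording_data
    print(f"{len(spike_times)} spikes from {len(np.unique(spike_ids))} neurons")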
 
Title genn-team/genn: GeNN 4.9.0 
Description Release Notes for GeNN 4.9.0
This release adds a number of significant new features to GeNN, as well as a number of bug fixes that have been identified since the 4.8.1 release. It is intended as the last release for GeNN 4.X.X. Fixes for serious bugs may be backported if requested but, otherwise, development will be switching to GeNN 5.

User Side Changes
- Implemented pygenn.GeNNModel.unload to manually unload GeNN models, improving control in scenarios such as parameter sweeping where multiple PyGeNN models need to be instantiated (#581).
- Added Extra Global Parameter references to custom updates (see Defining Custom Updates, Defining your own custom update model and Extra Global Parameter references) (#583).
- Exposed $(num_pre), $(num_post) and $(num_batches) to all user code strings (#576).

Bug fixes
- Fixed handling of indices specified as sequence types other than numpy arrays in pygenn.SynapseGroup.set_sparse_connections (#597).
- Fixed bug in CUDA constant cache estimation which could cause nvLink errors in models with learning rules that required previous spike times (#589).
- Fixed longstanding issue with setuptools that meant PyGeNN sometimes had to be built twice to obtain a functional version. Massive thanks to @erolm-a for contributing this fix (#591).

Optimisations
- Reduced the number of layers in, and generally optimised, the Docker image. Massive thanks to @bdevans for his work on this (#601).
Type Of Technology Software 
Year Produced 2023 
Open Source License? Yes  
Impact As well as ongoing impact of helping user community, specific advances enable ongoing research 
URL https://zenodo.org/record/8430715
 
Title genn-team/genn: GeNN 5.0.0 
Description Release Notes for GeNN 5.0.0
This is a very large update to GeNN that fixes a large number of longstanding bugs and, we hope, will make GeNN easier to use and enable various exciting new features in the near future. The licence has also been switched from GPL to LGPL, making it mildly more liberal by allowing PyGeNN to be used as a component in closed-source systems. This release breaks backward compatibility, so all models are likely to require updating, but the documentation has been completely re-done and the pre-release version is at https://genn-team.github.io/genn/documentation/5/. This includes a guide to updating existing models.

New features
- GeNN has a whole new code generator. This gives much better-quality error messages to the user about syntax/typing errors in code strings and will enable us to do smarter optimisations in future, but it does restrict user code to a well-defined subset of C99 (https://github.com/genn-team/genn/pull/595).
- As well as simulation kernels, GeNN 4.X generated large amounts of boilerplate for allocating memory and copying from device to host. This resulted in very long compile times with large models. In GeNN 5 we have replaced this with a new runtime which reduces compilation time by around 10x on very large models (https://github.com/genn-team/genn/pull/602).
- In GeNN 4.X, parameters were always of "scalar" type. This resulted in poor code generation when they were used to store integers. Parameters now have types and can also be made dynamic, allowing them to be changed at runtime (https://github.com/genn-team/genn/pull/607).
- Weight update models now have postsynaptic spike-like events, allowing a wider class of learning rules to be implemented (https://github.com/genn-team/genn/pull/609).

Bug fixes
- PyGeNN only really works with precision set to float (#289)
- Refine global - register - global transfers (#55)
- Avoid creating unused variables (#47)
- PyGeNN doesn't correctly handle neuron variables with delay slots (#393)
- assign_external_pointer overrides should use explicitly sized integer types (#288)
- Repeat of spike-like-event conditions in synapse code flawed (#379)
- Dangerous conflict potential of user and system code (#385)
- Accessing queued pre- and postsynaptic weight update model variables (#402)
- Linker-imposed model complexity limit on Windows (#408)
- 'error: duplicate parameter name' when running ./generate_run test in userproject/Izh_sparse_project (#416)
- Issues with merging synapse groups where pre- or postsynaptic neuron parameters are referenced (#566)
- Presynaptic synapse variable undefined in event threshold condition (#594)
Type Of Technology Software 
Year Produced 2024 
Open Source License? Yes  
Impact No direct impacts yet, but the work contained in this release is going to be vital for subsequent work packages of my fellowship 
URL https://zenodo.org/doi/10.5281/zenodo.11032927
 
Title genn-team/genn: GeNN 5.1.0 
Description Release Notes for GeNN 5.1.0
This release adds a number of significant new features to GeNN, as well as a number of bug fixes that have been identified since the 5.0.0 release.

User Side Changes
- Updated CUDA block size optimiser to support SM 9.0 (#627)
- Access to postsynaptic variables with heterogeneous delay (#629)
- Special variable references for zeroing internals (#634)
- Stopped Windows CUDA compilation relying on the correct order of CUDA and Visual Studio installation (#639)

Bug fixes
- Fixed issues building GeNN on Mac OS/Clang (#623)
- Fixed bug when using dendritic delays in batched models (#630)
- Fixed issues with new versions of setuptools (74 and newer) (#636, #640)
- Fixed bug with merging/fusing of neuron groups with multiple spike-like event conditions (#638)
Type Of Technology Software 
Year Produced 2024 
Open Source License? Yes  
Impact The updates in this release were key to our recent preprint 10.48550/arXiv.2501.07331 
URL https://zenodo.org/doi/10.5281/zenodo.14051978
 
Title genn-team/genn: GeNN v4.7.0 
Description Release Notes for GeNN v4.7.0
This release adds a number of significant new features to GeNN, as well as a number of bug fixes that have been identified since the 4.6.0 release. (A sketch of how convolution kernels unroll into synaptic weight matrices follows this record.)

User Side Changes
- While a wide range of convolutional-type connectivity can be implemented using SynapseMatrixConnectivity::PROCEDURAL, the performance is often worse than sparse connectivity. SynapseMatrixConnectivity::TOEPLITZ provides a more efficient solution, with InitToeplitzConnectivitySnippet::Conv2D and InitToeplitzConnectivitySnippet::AvgPoolConv2D implementing some typical connectivity patterns (#484).
- Shared weight kernels previously had to be provided as extra global parameters via the InitVarSnippet::Kernel variable initialisation snippet. This meant kernels had to be manually allocated to the correct size and couldn't be initialised using standard functionality. SynapseMatrixWeight::KERNEL allows kernels to be treated as standard state variables (#478).
- Some presynaptic updates need to update the state of presynaptic neurons as well as postsynaptic ones. These updates can now be made using the $(addToPre,...) function from presynaptic update code, and the destination additional input variable can be specified using SynapseGroup::setPreTargetVar (#479).
- On Windows, all models in the same directory would build their generated code into DLLs with the same name, preventing the caching system introduced in v4.5.0 from working properly. CodeGenerator::PreferencesBase::includeModelNameInDLL includes the name of the model in the DLL filename, resolving this problem. This is now the default behaviour in PyGeNN but, when using GeNN from C++, the flag must be manually set and MSBuild projects updated to link to the correct DLL (#476).
- Neuron code can now sample the binomial distribution using $(gennrand_binomial), and this can be used to initialise variables with InitVarSnippet::Binomial (#498).
- In the latest version of Windows Subsystem for Linux, CUDA is supported but libcuda is mounted in a non-standard location. GeNN's CUDA backend now adds this location to the linker paths (#500).

Bug fixes
- Fixed issues with some configurations of InitSparseConnectivitySnippet::Conv2D when stride > 1, which caused incorrect connectivity to be instantiated as well as crashes when this snippet was used to generate sparse connectivity (#489, #491).
- Fixed issue where, if $(addToInSynDelay) was used in spike-like event code, it was not detected and dendritic delay structures were not correctly created (#494).
- Fixed issue where precision wasn't being correctly applied to neuron additional input variables and sparse connectivity row-build state variable initialisation, meaning double-precision code could unintentionally be generated (#489).
Type Of Technology Software 
Year Produced 2022 
Open Source License? Yes  
Impact As well as the ongoing impact of helping the user community, specific advances were key to the results in 10.1088/2634-4386/ac5ac5 and to initial work towards the arXiv preprint 2212.01232v1 
URL https://zenodo.org/record/6047460
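
To see why the Toeplitz and kernel connectivity added in this release matter, the sketch below unrolls a small 2D convolution kernel into an explicit (pre x post) synaptic weight matrix: every column reuses the same few kernel values in a regular pattern, which GeNN can exploit instead of storing the full matrix. Illustrative only; this is not GeNN code:

    import numpy as np

    h = w = 4                              # input image size (pre neurons)
    kernel = np.array([[0.0, 1.0, 0.0],
                       [1.0, -4.0, 1.0],
                       [0.0, 1.0, 0.0]])   # example 3x3 kernel
    kh, kw = kernel.shape
    oh, ow = h - kh + 1, w - kw + 1        # "valid" convolution output size

    weights = np.zeros((h * w, oh * ow))   # dense unrolled weight matrix
    for oy in range(oh):
        for ox in range(ow):
            for ky in range(kh):
                for kx in range(kw):
                    pre = (oy + ky) * w + (ox + kx)   # input pixel index
                    weights[pre, oy * ow + ox] = kernel[ky, kx]

    print(f"{h * w}x{oh * ow} matrix, {np.count_nonzero(weights)} non-zeros, "
          f"only {kernel.size} unique kernel values")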
 
Title genn-team/ml_genn: mlGeNN 2.0 
Description As well as continuing to support the conversion of ANNs trained using TensorFlow to SNNs, this release adds a large amount of new functionality which enables SNNs to be defined from scratch in mlGeNN and trained directly using e-prop.

User Side Changes
- New Keras-inspired model description API (see documentation)
- Extensible callback system allowing custom logic, including recording of state, to be triggered mid-simulation (see documentation)
- Extensible metrics system, allowing various metrics to be calculated efficiently (see documentation)
- Training using the e-prop learning rule
- Conversion of ANNs trained in TensorFlow is now handled through the ml_genn_tf module (see documentation)

Known issues
- The SpikeNorm algorithm for converting deep ANNs to rate-coded SNNs is currently broken - if you require this functionality, please stick with mlGeNN 1.0
Type Of Technology Software 
Year Produced 2023 
Impact Enabled work on accepted conference paper at NICE 2023 
URL https://zenodo.org/record/7705308
 
Title genn-team/ml_genn: mlGeNN 2.1 
Description This release adds a number of significant new features to mlGeNN, including support for training models using EventProp, as well as a number of bug fixes that have been identified since the 2.0 release.

User Side Changes
- EventProp compiler for training models with EventProp (#57, #64, #70)
- System so compilers can define default settings for neuron models, e.g. reset behaviour (#63)
- Support for time-varying inputs as well as a wider range of input neuron types (#69)
- Spike-like event recording (#54)
- Spike count recording (#73)

Bug fixes
- Fixed issues with manual training loops, e.g. for augmentation (#74, #78)
- Fixed issue with management of callback state (#65)
- Fixed issue with loading and unloading compiled networks (#66)
Type Of Technology Software 
Year Produced 2023 
Impact As well as the ongoing impact of helping the user community, training using EventProp and support for time-varying inputs have been vital for as-yet-unpublished work on our Intel Neuromorphic Research Community grant 
URL https://zenodo.org/record/8430906
 
Title genn-team/ml_genn: mlGeNN 2.2 
Description This release adds a number of new features to mlGeNN, as well as a number of bug fixes that have been identified since the 2.1 release. This version is also the first mlGeNN release to use GeNN 5.

User Side Changes
- Data-parallel training support (#79)
- Added predict method to CompiledInferenceNetwork to return raw model predictions rather than metrics (#84)
- Added histogram_thresh keyword argument to ml_genn.utils.data.preprocess_tonic_spikes to ensure input spike trains don't contain duplicate spikes for the same neuron within one timestep (#86) (see the sketch after this record)

Bug fixes
- Fixed bug affecting non-square inputs in ml_genn.utils.data.preprocess_tonic_spikes (#83)
Type Of Technology Software 
Year Produced 2024 
Open Source License? Yes  
Impact As well as the ongoing impact of helping the user community, specific advances were key to the arXiv preprint 2501.07331 
URL https://zenodo.org/doi/10.5281/zenodo.11067932
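
The histogram_thresh change above addresses a subtle issue: once event timestamps are quantised to the simulation timestep, one neuron can end up with several spikes in the same timestep, which a simulator emitting at most one spike per neuron per step cannot represent. Below is a simple illustration of de-duplicating such spike trains; this is a conceptual sketch, not the ml_genn implementation:

    import numpy as np

    def dedup_spikes(ids: np.ndarray, times_ms: np.ndarray, dt_ms: float):
        """Keep at most one spike per (neuron id, timestep) pair."""
        steps = (times_ms / dt_ms).astype(np.int64)    # quantise to timesteps
        keep = np.unique(np.stack([ids, steps], axis=1), axis=0)
        return keep[:, 0], keep[:, 1] * dt_ms

    ids = np.array([3, 3, 3, 7])
    times = np.array([0.1, 0.4, 1.2, 0.9])             # ms
    print(dedup_spikes(ids, times, dt_ms=1.0))         # neuron 3 keeps 2 spikes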
 
Title genn-team/ml_genn: ml_genn_2_3_0 
Description This release adds a number of new features to mlGeNN, as well as a number of bug fixes that have been identified since the 2.2.1 release. This version requires GeNN 5.1.0.

User Side Changes
- Added support for performing regression using EventProp (#98)
- Added support for training models with fixed and learnable heterogeneous delays with EventProp (#104)
- Added re-implementation of the best-performing SHD model from "Loss shaping enhances exact gradient learning with EventProp in Spiking Neural Networks" to the examples (#114)
- Added Yin-Yang dataset, Time To First Spike readout and EventProp implementation (#117)

Bug fixes
- Fixed issue with input neurons with batch size 1 (#103)
- Fixed several bugs in the EventProp compiler (#110, #115)
Type Of Technology Software 
Year Produced 2024 
Open Source License? Yes  
Impact Features in this release were vital for 10.48550/arXiv.2501.07331 
URL https://zenodo.org/doi/10.5281/zenodo.14258972
 
Description Co-organised research computing workshop 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Postgraduate students
Results and Impact To coincide with the launch of a new Research Computing platform at Sussex, I organised a 1-day research computing workshop on 30th July 2024.

The event featured invited talks and tutorials on High Performance Computing:

9:30 - 10:00 - Registration and coffee
10:00 - 10:10 - Welcome (James Knight, Informatics)
10:10 - 10:50 - OpenStack HPC (Stig Telfer, OpenStack)
10:50 - 11:20 - Research Computing in the Digital Humanities (Sharon Webb, SHL)
11:20 - 12:00 - A Sussex perspective on the external HPC landscape (Johanna Senk, Informatics)
12:00 - 12:40 - Green computing (Charlotte Rae, Psychology)
12:40 - 13:30 - Lunch
13:30 - 16:00 - Artemis tutorials (Reese Wilkinson, ITS)
Year(s) Of Engagement Activity 2024
URL https://www.ticketsource.co.uk/research-software-engineering-network/research-computing-workshop/e-x...
 
Description Co-organised tutorial on our GeNN software at CNS*2022 in Melbourne 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact I co-organised a half-day tutorial on using our GeNN software at CNS*2022 in Melbourne. Approximately 30 people attended, and the content sparked questions and discussion on the day, as well as increased interest in GeNN via email and GitHub discussions afterwards.
Year(s) Of Engagement Activity 2022
URL https://www.cnsorg.org/cns-2022-tutorials#T6
 
Description Co-organised workshop on Bio-inspired active AI at CNS*2022 in Melbourne 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact We co-organised a one-day workshop on bio-inspired active AI at CNS*2022 in Melbourne with 10 invited speakers. This event strengthened existing collaborations and provided an excellent opportunity for networking.
Year(s) Of Engagement Activity 2022
URL http://users.sussex.ac.uk/~tn41/CNS2022_workshop/
 
Description Interview with Code for Thought podcast 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Interview, together with 7 of the other new EPSRC fellows in the UK, in which we discussed our hopes, ideas and aspirations for our fellowships.
Year(s) Of Engagement Activity 2022
URL https://codeforthought.buzzsprout.com/1326658/9859960-join-the-fellowship
 
Description Invited to panel on RSE fellowship scheme at SeptembRSE 2021 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact I was invited to join other members of my RSE fellowship cohort on a panel at SeptembRSE 2021 discussing our fellowship plans.
Year(s) Of Engagement Activity 2021
URL https://septembrse.github.io/#/event/L1005