ActiveAI - active learning and selective attention for robust, transparent and efficient AI

Lead Research Organisation: University of Sussex
Department Name: Sch of Engineering and Informatics

Abstract

We will bring together world leaders in insect biology and neuroscience with world leaders in biorobotic modelling and computational neuroscience to create a partnership that will be transformative in understanding active learning and selective attention in insects, robots and autonomous systems in artificial intelligence (AI). By considering how brains, behaviours and the environment interact during natural animal behaviour, we will develop new algorithms and methods for rapid, robust and efficient learning for autonomous robotics and AI for dynamic real world applications.

Recent advances in AI, and notably in deep learning, have proven incredibly successful in creating solutions to specific complex problems (e.g. beating the best human players at Go, and driving cars through cities). But as we learn more about these approaches, their limitations are becoming more apparent. For instance, deep learning solutions typically need a great deal of computing power, extremely long training times and very large amounts of labelled training data, which are simply not available for many tasks. While they are very good at solving specific tasks, they can be quite poor (and unpredictably so) at transferring this knowledge to other, closely related tasks. Finally, scientists and engineers are struggling to understand what their deep learning systems have learned and how well they have learned it.

These limitations are particularly apparent when contrasted to the naturally evolved intelligence of insects. Insects certainly cannot play Go or drive cars, but they are incredibly good at doing what they have evolved to do. For instance, unlike any current AI system, ants learn how to forage effectively with limited computing power provided by their tiny brains and minimal exploration of their world. We argue this difference comes about because natural intelligence is a property of closed loop brain-body-environment interactions. Evolved innate behaviours in concert with specialised sensors and neural circuits extract and encode task-relevant information with maximal efficiency, aided by mechanisms of selective attention that focus learning on task-relevant features. This focus on behaving embodied agents is under-represented in present AI technology but offers solutions to the issues raised above, which can be realised by pursuing research in AI in its original definition: a description and emulation of biological learning and intelligence that both replicates animals' capabilities and sheds light on the biological basis of intelligence.

This endeavour entails studying the workings of the brain in behaving animals, as it is crucial to know how neural activity interacts with, and is shaped by, environment, body and behaviour, and how this interplays with selective attention. These experiments are now possible by combining recent advances in neural recordings of flies and hoverflies, which can identify neural markers of selective attention, with virtual reality experiments for ants; techniques pioneered by the Australian team. Together with verification of emerging hypotheses in large-scale neural models on-board robotic platforms in the real world, an approach pioneered by the UK team, this project represents a unique and timely opportunity to transform our understanding of learning in animals and, through this, learning in robots and AI systems.

We will create an interdisciplinary collaborative research environment with a "virtuous cycle" of experiments, analysis and computational and robotic modelling. New findings feed forward and back around this virtuous cycle, each discipline informing the others to yield a functional understanding of how active learning and selective attention enable small-brained insects to learn a complex world. Through this understanding, we will develop ActiveAI algorithms which learn rapidly, are efficient in both learning and final network configuration, and are robust to real-world conditions.

Planned Impact

We will combine expertise in insect neuroscience with biomimetic robotic control to gain a functional understanding of how active learning and selective attention underpin rapid and efficient visual learning. Through this, we will develop ActiveAI algorithms, reinforcement learning methods and artificial neural network (ANN) architectures for robotics and AI that learn rapidly, are computationally efficient, work with limited training data, and are robust to novel and changing scenarios.

Industrial impact
Our novel sensing, learning and processing algorithms offer impact in robotics and autonomous systems (RAS) and in AI generally. AI problems are currently addressed by increasing computational and training resources. We take a fundamentally different approach, using insects as inspiration for efficient algorithms. Here we will develop smart movement patterns which combine with attentional mechanisms to aid information identification/extraction, reducing computational and training loads. We foresee two immediate problem domains in RAS: those where learning speed is highly constrained (e.g. disaster recovery robots, exploration, agri-robotics); and those where computational load and energy usage are limited (e.g. UAVs, agritech, space robotics). Longer term, we foresee applications in general AI, where a new class of highly efficient and thus scalable ANNs is required to realise grand challenges such as General Intelligence.

We will ensure tight coupling to industrial needs using established industrial members of the Brains on Board advisory board, comprising Dyson, Parrot, NVidia and Google DeepMind, as well as collaborators for robotic applications (Harper Adams University, GMV and RALSpace) and Sheffield Robotics contacts (e.g. Amazon, iniVation, Machine With Vision), and by leveraging new opportunities through both UK and Australian universities' commercialisation operations (Macquarie University's Office of Commercialisation and Innovation Hub; Sussex Research Quality and Impact team, Sussex Innovation Centre; Sheffield Engineering Hub, Sheffield Partnerships and Knowledge Exchange team). Where possible, we will seek to commercialise knowledge through IP licensing and university-supported spin-outs. We already have experience doing so, in particular: optic flow commercialisation through ApisBrain (Marshall); and sensor commercialisation through Skyline Sensors (Mangan). At Sussex, support will be provided by the Research Quality and Impact team and the Sussex Innovation Centre.

Impact on the team
PDRAs will receive cross-disciplinary training from the UK team in GPU computing, neural simulations, biorobotics and bio-inspired machine learning - very active and rapidly expanding areas and sought-after skills in AI and robotics - as well as training from the Australian team in cutting-edge neuroscientific methods (electrophysiology and pharmacology combined with virtual reality-enabled behavioural experiments). This will prepare them for careers across academia and industry. In addition, UK co-I Mangan, as an early career researcher, will benefit from support and advice from the senior investigators (UK + Aus), supporting his development as an independent researcher.

Advocacy + general public
We firmly believe in the benefit of ethically aware technology development through responsible innovation. We have already created an ethical code of conduct for the Brains on Board project and engaged with government consultations. We will extend this work and, by promoting and adhering to this philosophy, we will have impact on policy through advocacy, and on the general public through continuation of our extensive public engagement activities, e.g. regular public lectures (Cafe Scientifique, Nerd Nite, U3A etc.), media appearances (BBC, ABC radio, BBC Southeast) and large outreach events (e.g. Royal Society Science Exhibition 2010, British Science Festival 2017, Brighton Science Festivals 2010-2018).

Academic Impact
We will impact AI, Robotics and neuroscience (see Academic Beneficiaries).

Publications

 
Description Through this award we have established a collaboration between leading insect neurobiologists in Australia and experts in biological modelling and bio-inspired AI in the UK. Most importantly, we have secured joint ongoing funding: ARC: "Closing the loop on target detection: Neural and behavioural mechanisms", between Nordstrom Knight and Nowotny, including a new collaboration with Prof Simon Sponberg at Georgia Institute of Technology (AUD$690,295). Philippides, Graham and Barron have submitted fellowship bids to the ARC and to the EU for Rachael Stentiford (one rejected but to be resubmitted; others pending).

We are also very happy to note that four of the five PDRAs on the grant have gone on to research positions: two as permanent lecturers, one as a postdoc and one in industry. The remaining PDRA is still employed at Sussex on other research projects. This is excellent impact.

In terms of academic impact, the collaboration has allowed us to find out more about how insects are able to learn visual tasks so rapidly and robustly despite their tiny brains. For instance, we have shown how hoverflies are able to detect targets in cluttered environments, and how tasks that were thought to be cognitive, such as counting, can be achieved by bees through simpler mechanisms when the problem is considered as an active task. We have also developed bio-inspired models of the brain which mimic what we see in insect brains and behaviour, the latter tracked with newly developed tools. Through our spiking neural network tools, these models are being used on-board robots for efficient and accurate visual learning.

In more detail, the outputs break down into three major sections:

1. Insect intelligence: Our aim was to deepen our understanding of insect behaviour by modelling behavioural experiments. Among others, notable findings include:

a. In a collaboration between Sussex and Flinders ("Descending neurons of the hoverfly respond to pursuits of artificial targets", Current Biology, 2023), we analysed electrophysiological responses of target-selective descending neurons (TSDNs) in the hoverfly and demonstrated how the responses to simulated target pursuits could be partially predicted from preceding analytical measurements of the neurons' receptive fields and direction- and speed-selectivity. Following a successful visit to Flinders, we have worked closely with Karin Nordstrom's team, performing a virtuous loop of computational modelling and further experiments to unpick the upstream connectivity of TSDNs (PLOS Comp Biol, in preparation).

b. In computational studies of the Drosophila mushroom body, we found that memory performance could be enhanced through homeostatic mechanisms, while another study showed that experimental findings could be explained by a reward prediction error hypothesis.

c. Collaborations with Macquarie have deepened our understanding of bee cognition. For instance, we found that non-numerical strategies can be used by bees to solve numerical cognition tasks, and that honeybees can solve a multi-comparison ranking task by probability matching. We have also developed collaborative funding bids on bee predictive coding.

d. We have mapped the entire foraging history of individual desert ants, which has established how quickly they learn routes, raised new questions about how they learn about their environment, and identified physical movement patterns that can underpin their visual search strategies. The dataset has been made available to the community as a resource for active learning (https://cater.cvmls.org). The data are also published with a 2D environmental reconstruction, from which behaviour can be reanalysed with reference to the environment and to other animals, which is not possible with standard data-logging techniques.

e. In collaborations with scientists in the UK (early experiments were re-focussed due to Covid), we found that hymenopteran visual learning involves multimodal interactions and aversive traces, and that desert ants even take account of their own size.


2. Efficient and powerful bio-inspired neural models: We have made significant advances in bio-inspired neural models in two key areas:

a. Multiple high-profile papers have shown that multi-scale echo-state models can produce complex behaviour replicating neural dynamics. These models have been applied to visual navigation, a foundational problem for both insects and robots, demonstrating that temporal information can be exploited for this problem, making the approach suitable for robots and hinting at the information ants may use for navigation.

b. GeNN, our toolbox for running spiking neural network models on GPUs, has been demonstrated in workshops both to our collaborators and to other groups, and we have used it to embody models of insect visual learning on-board small robots, demonstrating GeNN's utility for testing hypotheses on insect learning. In this vein, we are exploring biomimetic algorithms for small target detection in close collaboration with Flinders, where our collaborators test hypotheses in virtual reality experiments on hoverflies.
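As a purely illustrative aside, the sketch below shows the kind of leaky integrate-and-fire dynamics that spiking neural network models of this sort are built from. It is written in plain NumPy, does not use the GeNN API, and all population sizes and parameter values are arbitrary choices for the example.

import numpy as np

# Minimal leaky integrate-and-fire (LIF) sketch (illustrative only; real
# models of insect circuits in GeNN are far larger and run on GPUs).
rng = np.random.default_rng(1)

n_in, n_out = 100, 10          # input and output population sizes
dt = 1.0                        # time step (ms)
tau_m = 20.0                    # membrane time constant (ms)
v_thresh, v_reset = 1.0, 0.0    # spike threshold and reset value
w = rng.uniform(0.0, 0.1, size=(n_in, n_out))  # random feed-forward weights

v = np.zeros(n_out)             # membrane potentials
spike_count = np.zeros(n_out)

for t in range(500):            # 500 ms of simulated time
    # Poisson input spikes at roughly 20 Hz per input neuron
    in_spikes = rng.random(n_in) < 20.0 * dt * 1e-3
    # Synaptic input to each output neuron
    i_syn = in_spikes @ w
    # Leaky integration of the membrane potential
    v += dt / tau_m * (-v) + i_syn
    # Threshold crossing -> spike, then reset
    fired = v >= v_thresh
    spike_count += fired
    v[fired] = v_reset

print("output firing rates (Hz):", spike_count / 0.5)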


3. Open-source tools: We have developed open-source tools for tracking insects in the field, reconstructing their visual input and passing it through an arbitrary eye model:

a. CATER (Combined Animal Tracking and Environment Reconstruction) is a general software tool that allows moving animals to be tracked in video data even when they are small or occluded, which is not possible with other methods (https://cater.cvmls.org). CATER has been used at Sussex to map ant trajectories and is currently being used by researchers at Macquarie (Dr Cody Freas) to capture Australian ant data. We organised a workshop to demonstrate this tool and to bring in others working on similar problems.

b. The CompoundRay rendering pipeline was developed to model insect vision in high fidelity and at high frame rate. The tool has been made open-source and shown to be capable of reconstructing realistic 3D eye structures and rendering outputs at 5,000 fps, enabling a new generation of data-driven investigations not possible with hardware or slower models (https://github.com/BrainsOnBoard/compound-ray). This work has led to collaboration with researchers at Lund and Stockholm Universities to replicate real insect eye data. Researchers at Sussex and Sheffield are working to integrate CompoundRay into emerging industry-standard pipelines.
Exploitation Route Neuroscientists and others will use the results on insect learning to better understand small-brained cognition. Likewise, the bio-inspired AI algorithms will be useful to engineers studying learning in problems with a time-varying component or where computing resources are limited. The latter findings will be of use in industrial contexts, particularly in navigation through unstructured or dynamic environments (e.g. in AgriTech, space, security, search and rescue) and for use on edge devices.

The most notable outlet for industrial impact is via Opteran. Opteran is a spin-out of the University of Sheffield using insect brain-derived algorithms to solve challenging problems in the control of robots. The company has already raised ~£12M in private investment, employs around 50 people between its offices in Sheffield and London, and continues to grow. The company is an ideal vehicle for many of the early insights developed in this research to inspire new approaches that can be deployed in commercial robots. Moreover, the company presents a destination for staff looking to move into industry, with examples including James Marshall, the Sheffield PI, who is a founder of Opteran and seconded there full time; Blayze Millward, who worked part-time for 6 months at Opteran during his PhD studies; and Dr Mike Mangan, who has successfully received a £1.3M Future Leaders Fellowship which will be hosted in the company. Future avenues of collaboration being explored include Opteran supporting research grants and PhD CASE studentships.
Sectors Aerospace, Defence and Marine; Agriculture, Food and Drink; Digital/Communication/Information Technologies (including Software); Transport; Other

URL https://cater.cvmls.org
 
Description Work from the grant has led to a successful FLF bid led by M. Mangan and hosted by Opteran Technologies; through this grant Opteran will commercialise brain-inspired models for robotics. The research team has seen two senior moves to Opteran Technologies Limited. Philippides has also collaborated with Opteran and performed biorobotics consultancy with them and others. The work also led to £500k of industrial funding to Sussex (under NDA). In addition, we note that four of the five PDRAs on the grant have gone on to research positions: two as permanent lecturers, one as a postdoc and one in industry. The remaining PDRA is still employed at Sussex on other research projects. This is excellent impact.
First Year Of Impact 2022
Sector Digital/Communication/Information Technologies (including Software), Electronics, Other
Impact Types Economic

 
Description 3B: brains beat brawn
Amount £1,287,730 (GBP)
Funding ID 900305 
Organisation United Kingdom Research and Innovation 
Sector Public
Country United Kingdom
Start 03/2024 
End 02/2028
 
Description Efficient spike-based machine learning on existing HPC hardware
Amount £17,215 (GBP)
Funding ID CPQ-2417168 
Organisation Oracle Corporation 
Sector Private
Country United States
Start 03/2022 
End 04/2023
 
Description Emergent embodied cognition in shallow, biological and artificial, neural networks
Amount £200,036 (GBP)
Funding ID BB/X01343X/1 
Organisation Biotechnology and Biological Sciences Research Council (BBSRC) 
Sector Public
Country United Kingdom
Start 02/2023 
End 08/2024
 
Description Leverhulme Doctoral Scholarships in "be.AI - biomimetic embodied Artificial Intelligence"
Amount £1,350,000 (GBP)
Funding ID DS-2020-065 
Organisation The Leverhulme Trust 
Sector Charity/Non Profit
Country United Kingdom
Start 08/2021 
End 08/2027
 
Description Training efficient rSNNs for Loihi using Eventprop
Amount $77,075 (USD)
Organisation Intel Corporation 
Sector Private
Country United States
Start 06/2023 
End 06/2024
 
Description Using Data Driven Artificial Intelligence to Reveal Pesticide Induced Changes in Pollinator Behaviour
Amount £311,449 (GBP)
Organisation University of Sheffield 
Sector Academic/University
Country United Kingdom
Start 02/2024 
End 12/2026
 
Title CATER: Combined Animal Tracking and Environment Reconstruction 
Description Video analysis software for tracking animals and reconstructing their environment, allowing high-precision and high-temporal-resolution analysis. 
Type Of Material Improvements to research infrastructure 
Year Produced 2023 
Provided To Others? Yes  
Impact The method has already been adopted by a number of other research labs, and featured in at least 1 other paper at the time of writing. 
URL https://www.science.org/doi/10.1126/sciadv.adg2094
 
Title New tool to rapidly and accurately reconstruct compound vision systems 
Description A new tool to rapidly and accurately reconstruct compound vision systems. The tool uses modern ray tracing graphics technologies to produce entirely new levels of accuracy, and is open-sourced via GitHub. 
Type Of Material Technology assay or reagent 
Year Produced 2022 
Provided To Others? Yes  
Impact tba 
URL https://github.com/BrainsOnBoard/compound-ray
 
Title Data for 'Estimating orientation in Natural scenes: A Spiking Neural Network Model of the Insect Central Complex' (2024) 
Description Data for paper published in PLOS Computational Biology (Aug 2024).
Abstract: The central complex of insects contains cells, organised as a ring attractor, that encode head direction. The 'bump' of activity in the ring can be updated by idiothetic cues and external sensory information. Plasticity at the synapses between these cells and the ring neurons, which are responsible for bringing sensory information into the central complex, has been proposed to form a mapping between visual cues and the heading estimate, allowing more accurate tracking of the current heading than if only idiothetic information were used. In Drosophila, ring neurons have well characterised non-linear receptive fields. In this work we produce synthetic versions of these visual receptive fields using a combination of excitatory inputs and mutual inhibition between ring neurons. We use these receptive fields to bring visual information into a spiking neural network model of the insect central complex based on the recently published Drosophila connectome. Previous modelling work has focused on how this circuit functions as a ring attractor using the same type of simple visual cues commonly used experimentally. While we initially test the model on these simple stimuli, we then go on to apply the model to complex natural scenes containing multiple conflicting cues. We show that this simple visual filtering provided by the ring neurons is sufficient to form a mapping between heading and visual features and to maintain the heading estimate in the absence of angular velocity input. The network is successful at tracking heading even when presented with videos of natural scenes containing conflicting information from environmental changes and translation of the camera.
All code used for this project has been made publicly available on GitHub: https://github.com/stenti/stentiford_cx_ra. Data required to run the code can be found here:
.mp4 files: all raw videos used as input to the model. '3rev_static' indicates videos recorded with the camera rotating for 3 revolutions in a stationary position; 'circling' indicates videos recorded using the spidercam robot either rotating on the spot ('static') or moving in a circle ('super'). (To be loaded by cx_ra_rn.py.)
.pkl files: simple stimuli input that does not require preprocessing (to be loaded by cx_ra_rn.py).
.npy files: weight matrices between different populations of cells (to be loaded by cx_ra_rn.py). 
Type Of Material Database/Collection of data 
Year Produced 2024 
Provided To Others? Yes  
URL https://sussex.figshare.com/articles/dataset/Data_for_Estimating_orientation_in_Natural_scenes_A_Spi...
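As a conceptual illustration of the heading-tracking idea summarised in the description above (and not the paper's spiking, connectome-based model), the following NumPy sketch keeps a 'bump' of activity over a ring of heading-tuned neurons, rotates it with an idiothetic angular-velocity cue and occasionally corrects it with a visual cue. The neuron count, tuning width and update rule are arbitrary simplifications for the example.

import numpy as np

# Toy rate-based "bump" of heading activity on a ring of N neurons.
N = 36
pref = np.linspace(0.0, 2.0 * np.pi, N, endpoint=False)  # preferred headings

def bump(heading, kappa=4.0):
    """Von Mises-shaped activity centred on `heading`."""
    return np.exp(kappa * np.cos(pref - heading))

def decode(activity):
    """Population-vector estimate of the heading encoded by the bump."""
    return np.arctan2(np.sum(activity * np.sin(pref)),
                      np.sum(activity * np.cos(pref)))

true_heading = 0.0
activity = bump(true_heading)

for step in range(200):
    omega = 0.05                                  # angular velocity (rad/step)
    true_heading = (true_heading + omega) % (2.0 * np.pi)
    # Path-integration update: rotate the bump by the angular-velocity cue
    activity = bump(decode(activity) + omega)
    # Occasional visual correction nudges the bump towards the true heading
    if step % 50 == 0:
        activity = 0.5 * activity + 0.5 * bump(true_heading)

print("true heading:", true_heading,
      "estimated:", decode(activity) % (2.0 * np.pi))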
 
Title Data for paper: Wood ants learn the magnetic direction of a route but express uncertainty because of competing directional cues 
Description Data for paper published in Journal of Experimental Biology July 2022 The data for each Ant in the experiments described in all but Figure 7 is held in matlab files with the name as follows: AntU_LN22WESTtest_1522_31072019_Published.mat The data for the experiments with two triangles is held in the zip file TrianglesData.zip which has individual files in the same format as above There is a detailed description of the variables in the file ants_magnets_philippides_dataset_description.pdf Abstract Wood ants were trained indoors to follow a magnetically specified route that went from the centre of an arena to a drop of sucrose at the edge. The arena, placed in a white cylinder, was in the centre of a 3D coil system generating an inclined Earth-strength magnetic field in any horizontal direction. The specified direction was rotated between each trial. The ants' knowledge of the route was tested in trials without food. Tests given early in the day, before any training, show that ants remember the magnetic route direction overnight. During the first 2 seconds of a test, ants mostly faced in the specified direction, but thereafter were often misdirected, with a tendency to face briefly in the opposite direction. Uncertainty about the correct path to take may stem in part from competing directional cues linked to the room. In addition to facing along the route, there is evidence that ants develop magnetically directed home and food vectors dependent upon path integration. A second experiment asked whether ants can use magnetic information contextually. In contrast to honeybees given a similar task, ants failed this test. Overall, we conclude that magnetic directional cues can be sufficient for route learning. 
Type Of Material Database/Collection of data 
Year Produced 2022 
Provided To Others? Yes  
Impact the data supported the results in the paper 
URL https://sussex.figshare.com/articles/dataset/Data_for_paper_Wood_ants_learn_the_magnetic_direction_o...
 
Title Dataset for paper "mlGeNN: Accelerating SNN inference using GPU-Enabled Neural Networks" 
Description Dataset for paper accepted in IOP Neuromorphic Computing and Engineering, March 2022. Dataset contains trained weights from TensorFlow 2.4.0 for the following models:
- vgg16_imagenet_tf_weights.h5 - VGG-16 model trained on ImageNet ILSVRC dataset
- vgg16_tf_weights.h5 - VGG-16 model trained on CIFAR-10 dataset
- resnet20_cifar10_tf_weights.h5 - ResNet-20 model trained on CIFAR-10 dataset
- resnet34_imagenet_tf_weights.h5 - ResNet-34 model trained on ImageNet ILSVRC
Abstract: "In this paper we present mlGeNN - a Python library for the conversion of artificial neural networks (ANNs) specified in Keras to spiking neural networks (SNNs). SNNs are simulated using GeNN with extensions to efficiently support convolutional connectivity and batching. We evaluate converted SNNs on CIFAR-10 and ImageNet classification tasks and compare the performance to both the original ANNs and other SNN simulators. We find that performing inference using a VGG-16 model, trained on the CIFAR-10 dataset, is 2.5x faster than BindsNet and, when using a ResNet-20 model trained on CIFAR-10 with FewSpike ANN to SNN conversion, mlGeNN is only a little over 2x slower than TensorFlow."
Funding: Brains on Board grant number EP/P006094/1; ActiveAI grant number EP/S030964/1; Unlocking spiking neural networks for machine learning research grant number EP/V052241/1; European Union's Horizon 2020 research and innovation program under Grant Agreement 945539 
Type Of Material Database/Collection of data 
Year Produced 2022 
Provided To Others? Yes  
Impact the data supported the results in the paper 
URL https://sussex.figshare.com/articles/dataset/Dataset_for_paper_mlGeNN_Accelerating_SNN_inference_usi...
 
Title Desert Ant Ontogeny Dataset 
Description The entire foraging life of a desert ant documented in a series of videos, provided with the tracking software, the tracks, and an environment reconstruction. 
Type Of Material Database/Collection of data 
Year Produced 2023 
Provided To Others? Yes  
Impact New insights reported in the publication associated with the dataset are driving new research questions. 
URL https://cater.cvmls.org/
 
Title EvDownsampling dataset 
Description This dataset is used in the publication "EvDownsampling: A Robust Method For Downsampling Event Camera Data", ECCV Workshop on Neuromorphic Vision: Advantages and Applications of Event Cameras [29/09/2024]. This dataset contains event streams of highly dynamic real-world scenes collected using two DVS cameras of different spatial resolutions - a DVXplorer (640×480 px) and a Davis346 (346×260 px). Both cameras simultaneously recorded each scene with negligible parallax error. The dataset is provided to test event-based spatio-temporal downsampling techniques through comparing downsampled higher-resolution recordings with matching lower-resolution recordings, as explained in our publication above.
There are four classes {class_folder} of scenes:
Traffic: natural lighting. Bus and car moving across camera visual field with several pedestrians. 6 seconds long.
HandGestures: fluorescent lighting. Person either waving their hand, waving their arms or doing jumping jacks. 12-15 seconds long.
Corridor: fluorescent lighting. Moving through corridors. One corridor scene (Pevensey) has a carpet which provides texture, while the other scene (Arundel) does not have a carpet. 18-24 seconds long.
Cars: natural lighting. Car moving across camera visual field with few pedestrians. 3-5 seconds long.
Each dataset/{class_folder} contains two folders consisting of:
Videos of the scene recordings captured by both DVS cameras placed side-by-side (.mp4)
Raw event data information in the form of (x, y, timestamp, polarity) in AEDAT 4 format (.aedat4).
The script dualCam_dvRead.py can be used to convert the .aedat4 files into a NumPy format and to generate frame reconstructions. The syntax to call the script from the command line is:
python3 dualCam_dvRead.py --data_folder {class_folder} --input {scene_recording} --publisher_rate {publisher_rate}
class_folder is the class of the scene recording, e.g. corridor; scene_recording is the specific recording in that class, e.g. Pevensey; publisher_rate determines the frame rate of images published (in fps), e.g. 1000.
More information is available at: https://github.com/anindyaghosh/EvDownsampling.
The conference website is: https://sites.google.com/view/nevi2024/home-page. 
Type Of Material Database/Collection of data 
Year Produced 2024 
Provided To Others? Yes  
URL https://sussex.figshare.com/articles/dataset/EvDownsampling_dataset/26528146
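For context on what downsampling event-camera data involves, here is a naive spatial-binning sketch over (x, y, timestamp, polarity) event tuples in NumPy. It is not the EvDownsampling method described in the paper, and the synthetic event array is an assumption made purely for illustration.

import numpy as np

# Naive spatial downsampling of event-camera data (illustration only; the
# EvDownsampling paper proposes a more robust spatio-temporal method).
# Events are assumed to be rows of (x, y, timestamp_us, polarity).
rng = np.random.default_rng(0)
events = np.column_stack([
    rng.integers(0, 640, 10_000),                  # x on a 640 x 480 sensor
    rng.integers(0, 480, 10_000),                  # y
    np.sort(rng.integers(0, 1_000_000, 10_000)),   # timestamps (us)
    rng.integers(0, 2, 10_000),                    # polarity (0 = OFF, 1 = ON)
])

def downsample_spatial(ev, factor=2):
    """Map pixel coordinates onto a coarser grid by integer division."""
    out = ev.copy()
    out[:, 0] //= factor
    out[:, 1] //= factor
    return out

coarse = downsample_spatial(events, factor=2)      # -> 320 x 240 grid
print("original grid   :", events[:, 0].max() + 1, "x", events[:, 1].max() + 1)
print("downsampled grid:", coarse[:, 0].max() + 1, "x", coarse[:, 1].max() + 1)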
 
Title Hoverfly (Eristalis tenax) descending neurons respond to pursuits of artificial targets 
Description Many animals use motion vision information to control dynamic behaviors. Predatory animals, for example, show an exquisite ability to detect rapidly moving prey followed by pursuit and capture. Such target detection is not only used by predators but can also play an important role in conspecific interactions. Male hoverflies (Eristalis tenax), for example, vigorously defend their territories against conspecific intruders. Visual target detection is believed to be subserved by specialized target-tuned neurons that are found in a range of species, including vertebrates and arthropods. However, how these target-tuned neurons respond to actual pursuit trajectories is currently not well understood. To redress this, we recorded extracellularly from target selective descending neurons (TSDNs) in male Eristalis tenax hoverflies. We show that the neurons have dorso-frontal receptive fields, with a preferred direction up and away from the visual midline, with a clear division into a TSDNLeft and a TSDNRight cluster. We next reconstructed visual flow-fields as experienced during pursuits of artificial targets (black beads). We recorded TSDN responses to six reconstructed pursuits and found that each neuron responded consistently at remarkably specific time points, but that these time points differed between neurons. We found that the observed spike probability was correlated with the spike probability predicted from each neuron's receptive field and size tuning. Interestingly, however, the overall response rate was low, with individual neurons responding to only a small part of each reconstructed pursuit. In contrast, the TSDNLeft and TSDNRight populations responded to substantially larger proportions of the pursuits, but with lower probability. This large variation between neurons could be useful if different neurons control different parts of the behavioral output. 
Type Of Material Database/Collection of data 
Year Produced 2023 
Provided To Others? Yes  
Impact Current Biology paper 
URL https://datadryad.org/stash/dataset/doi:10.5061/dryad.tdz08kq4d
 
Title Neuromorphic sequence learning with an event camera on routes through vegetation 
Description Code and dataset for the paper 'Neuromorphic sequence learning with an event camera on routes through vegetation'. 
Type Of Material Database/Collection of data 
Year Produced 2023 
Provided To Others? Yes  
Impact Publication in Science Advances, garnering good press coverage and follow-on work 
URL https://zenodo.org/record/8289546
 
Title Research data for paper "Geosmin suppresses defensive behaviour and elicits unusual neural responses in honey bees" 
Description Research data for paper published in Scientific Reports, March 2023. This data explores the effects of geosmin on honeybees and has three separate parts: 1. Behavioural data on the stinging response of bees towards a dummy in the presence of the alarm pheromone and geosmin; 2. Electro-antennogram (EAG) data of the response to geosmin on bees' antennae; 3. Calcium imaging data illustrating the neural response to geosmin in the honey bee antennal lobe.
Data & File Overview. File List: Bee_Aggression_Behaviour.csv: bees' stinging responses towards a rotating dummy; Bee_AL_Calcium_imaging.csv: calcium imaging time traces for individual odour stimuli and glomeruli in all bees; Bee_EAG_Data.csv: electroantennography response amplitudes for individual odour stimuli in all bees.
Methodological Information: all methods described in: Scarano F, Deivarajan Suresh M, Tiraboschi E, Cabirol A, Nouvian M, Nowotny T, Haase A. Geosmin suppresses defensive behaviour and elicits unusual neural responses in honey bees. Sci Rep 13:3851 (2023).
Data-specific information for 'Bee_Aggression_Behaviour.csv': The data file contains observational data from an aggression assay, where 325 bee dyads were inserted into an arena with a rotating dummy. Data indicates whether a behaviour was observed or not from one or both bees. Further details in the Methods section of Scarano et al. 2021. Number of variables/columns: 17. Number of cases/rows: 324. Variable list: A) Day: date on which bees were tested, DD/MM/YYYY; B) Weather: describes the weather on the trial date, Sunny / Cloudy; C) Hive: hive colour recognised by the paint colour from which the bees have been acquired, Orange / White / Green / Yellow; D) Dummy: which of two arenas used in that behavioural trial, A / B; E) Group: represents the odours - VOCs and its mixtures with the respective concentrations, IAA / Geosmin6 / Geosmin3 / IAAGeo3 / IAAGeo6 / Control, Concentrations: 3 (10^-3) & 6 (10^-6); F) Sting.Bee1: whether bee one exhibits stinging behaviour, 0 (no) / 1 (yes), binary; G) Sting.Bee2: whether bee two exhibits stinging behaviour, 0 (no) / 1 (yes), binary; H) Recruit.Bee1: whether bee one exhibits recruiting behaviour, 0 (no) / 1 (yes), binary; I) Recruit.Bee2: whether bee two exhibits recruiting behaviour, 0 (no) / 1 (yes), binary; J) Grooming/Calm: whether the bees exhibit grooming or calm behaviour, 0 (none) / 1 / 2, number of bees; K) Batch: the time batch in which the trials were conducted, 1 (Morning) / 2 (Afternoon), numerical - time period; L) number.sting: number of bees stinging in a trial, 0/1/2, number of bees; M) number.recruit: number of bees recruiting in a trial, 0/1/2, number of bees; N) sting.first: whether stinging behaviour was exhibited first in a trial, 0 (no) / 1 (yes), binary; O) recruit.first: whether recruiting behaviour was exhibited first in a trial, 0 (no) / 1 (yes), binary; P) b.sting: whether there was any stinging behaviour exhibited by the two bees during that trial, 0 (no) / 1 (yes), binary; Q) b.recruit: whether there was any recruiting behaviour exhibited by the two bees during that trial, 0 (no) / 1 (yes), binary.
Data-specific information for 'Bee_EAG_Data.csv': Data are electroantennography responses for 24 antennae exposed to different odour stimuli. Data points are voltage change amplitudes with respect to the baseline, averaged over the 1s stimulus duration. Further details in the Methods section of Scarano et al. 2021. Number of variables/columns: 16. Number of cases/rows: 24. Variable list: A) Bee: subject numbers; B-P) responses to different odours and concentrations. Values are average potential change (over 10 repetitions) in response to the presented odour stimuli, in units of Volt. Odours presented: Control = pure mineral oil (solvent); Geo = geosmin in mineral oil at the concentration indicated in brackets; IAA = isoamyl acetate in mineral oil at the concentration indicated in brackets. B) Control mineral oil; C) Geo (10^-6); D) Geo (10^-5); E) Geo (10^-4); F) Geo (10^-3); G) IAA (10^-3); H) IAA (10^-3) + Geo (10^-6); I) IAA (10^-3) + Geo (10^-5); J) IAA (10^-3) + Geo (10^-4); K) IAA (10^-3) + Geo (10^-3); L) IAA (10^-1); M) IAA (10^-1) + Geo (10^-6); N) IAA (10^-1) + Geo (10^-5); O) IAA (10^-1) + Geo (10^-4); P) IAA (10^-1) + Geo (10^-3).
Data-specific information for 'Bee_AL_Calcium_imaging.csv': The data file contains response curves from up to 19 glomeruli in the antennal lobe of 14 bees; data are relative changes in fluorescence averaged over the glomerular area. The fluorescence changes stem from projection neurons that were stained by backfill injection with the calcium-sensitive dye fura-dextrane. Further details in the Methods section of Scarano et al. 2021. Number of variables/columns: 95. Number of cases/rows: 1720. Variable list: A) Bee-id: subject number; B) Glo_id: glomerulus number following bee antennal lobe standard atlas nomenclature for tract T1; C) Odour: stimulus odour type: Geo6 = geosmin at concentration 10^-6 in mineral oil; Geo3 = geosmin at concentration 10^-3 in mineral oil; IAA = isoamyl acetate at concentration 10^-1 in mineral oil; 3Hex = 3-hexanol at concentration 5x10^-3 in mineral oil; acetoph = acetophenone at concentration 5x10^-3 in mineral oil; non = 1-nonanol at concentration 5x10^-3 in mineral oil; IAA-Glo are mixtures of both odours; D) Response: automatized response classification (1 activated, 2 background activity, 2 inhibited); E-CQ) Frame_1-91: glomerular response curves, 91 frames, 10.033 frames/s. Curves cover a 1s prestimulus interval (frames 1-30), 1s stimulus (frames 31-60), and 1s post stimulus (frames 61-91). Values are fluorescence changes in percent, averaged over the glomerular area, background subtracted and normalized with respect to the background.
Article Abstract: Geosmin is an odorant produced by bacteria in moist soil. It has been found to be extraordinarily relevant to some insects, but the reasons for this are not yet fully understood. Here we report the first tests of the effect of geosmin on honey bees. A stinging assay showed that the defensive behaviour elicited by the bee's alarm pheromone component isoamyl acetate (IAA) is strongly suppressed by geosmin. Surprisingly, the suppression is, however, only present at very low geosmin concentrations, and disappears at higher concentrations. We investigated the underlying mechanisms at the level of the olfactory receptor neurons by means of electroantennography, finding the responses to mixtures of geosmin and IAA to be lower than to pure IAA, suggesting an interaction of both compounds at the olfactory receptor level. Calcium imaging of the antennal lobe (AL) revealed that neuronal responses to geosmin decreased with increasing concentration, correlating well with the observed behaviour. Computational modelling of odour transduction and coding in the AL suggests that a broader activation of olfactory receptor types by geosmin in combination with lateral inhibition could lead to the observed non-monotonic increasing-decreasing responses to geosmin and thus underlie the specificity of the behavioural response to low geosmin concentrations.
Links to other resources relating to the data: computational modelling software that aims at reproducing the calcium and EAG data included here is available at https://github.com/tnowotny/bee_al_2021 
Type Of Material Database/Collection of data 
Year Produced 2023 
Provided To Others? Yes  
Impact the data supported the results in the paper 
URL https://sussex.figshare.com/articles/dataset/Research_data_for_paper_Geosmin_suppresses_defensive_be...
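As a usage illustration only, and assuming the column names documented in the description above (Group, b.sting, number.sting), the behavioural file could be summarised with pandas along these lines:

import pandas as pd

# Illustrative sketch: summarise stinging behaviour per odour group using the
# column names documented in the dataset description above.
df = pd.read_csv("Bee_Aggression_Behaviour.csv")

summary = (
    df.groupby("Group")
      .agg(trials=("b.sting", "size"),
           any_sting_rate=("b.sting", "mean"),
           mean_bees_stinging=("number.sting", "mean"))
      .sort_values("any_sting_rate", ascending=False)
)
print(summary)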
 
Title Research data for paper "Loss shaping enhances exact gradient learning with Eventprop in Spiking Neural Networks" 
Description The data in this repository was generated in the context of training spiking neural networks for keyword recognition using the Eventprop algorithm. It accompanies the paper 'Loss shaping enhances exact gradient learning with Eventprop in Spiking Neural Networks', Neuromorphic Computing and Engineering (9 January 2025). The data relates to two benchmarks:
- Spiking Heidelberg Digits (SHD) (Cramer et al. 2022)
- Spiking Speech Commands (SSC), derived from Google Speech Commands (Warden et al. 2018)
The data was generated and analysed with the code available on GitHub at https://github.com/tnowotny/genn_eventprop. The data is organised into 6 zip volumes, each of which corresponds to a parameter scan of networks trained on the SHD data set (4 scans) or the SSC data set (2 scans).

scan_SHD_base_xval.zip
Results from leave-one-speaker-out cross-validation runs on the "base SHD models", i.e. networks trained with Eventprop, including regularisation but no augmentations and only one hidden layer. There were 160 parameter combinations:
- LOSS_TYPE: sum, sum_weigh_exp, first_spike_exp, max; each with individual best settings for HIDDEN_OUTPUT_MEAN, HIDDEN_OUTPUT_STD, LBD_UPPER, ETA
- scaling of LBD_UPPER from its base value by 0.1, 0.5, 1.0, 5.0, 10.0
- RECURRENT: False, True
- TAU_MEM: 20, 40
- TAU_SYN: 5, 10
For each of the combinations, there are two files:
- SHD_xval_xxxx.json: a JSON file containing the parameter settings used.
- SHD_xval_xxxx_results.txt: an ASCII file containing, in each row, the metrics after each training epoch, separated by blanks: epoch; training accuracy; training loss; validation accuracy; validation loss; mean number of spikes in the hidden layer; standard deviation of the number of spikes in the hidden layer; minimum number of spikes in the hidden layer; maximum number of spikes in the hidden layer; mean number of spikes per neuron per trial across a mini-batch; standard deviation of the number of spikes per neuron per trial across a mini-batch; minimum number of spikes per neuron per trial across a mini-batch; maximum number of spikes per neuron per trial across a mini-batch; number of silent neurons; time (s) since training start.

scan_SHD_base_traintest.zip
Results from training the base models on the SHD training set, interleaved with testing on the test set. This uses the 8 parameter combinations from scan_SHD_base_xval.zip given by the four LOSS_TYPE choices and RECURRENT False or True. For each of these 8 cases, the other parameters were taken from the scan_SHD_base_xval.zip run with the best mean cross-validation score. Each of the 8 runs was repeated 8 times with different random seeds. The files included are:
- SHD_tt_xxxx.json: parameter settings as above.
- SHD_tt_xxxx_results.txt: results file with columns as above, except that columns 4 and 5 now contain test accuracy and test loss respectively.
- SHD_tt_xxxx_best.txt: the best result across epochs (same data format as SHD_tt_xxxx_results.txt).
- SHD_tt_xxxx_w_input_hidden_best.npy: the weight matrix of input-to-hidden connections at the epoch where the best training accuracy was achieved (early stopping on training accuracy). The weights are arranged in "pre-major" order, i.e. entries 1 to n_hidden are the weights from input neuron 0 to all hidden neurons, followed by the weights from input neuron 1, and so on. All weight matrices are stored in this way.
- SHD_tt_xxxx_w_hidden_output_best.npy: the weight matrix of hidden-to-output connections at the best epoch.
If the network is recurrent, there is also
- SHD_tt_xxxx_w_hidden0_hidden0_best.npy: the recurrent weight matrix from the hidden layer to itself.

scan_SHD_final_xval.zip
Results from the ablation experiments on the full SHD models. Leave-one-speaker-out cross-validation runs were performed to determine the best regularisation strength LBD_UPPER for each of the following parameter combinations (512 combinations):
- DT_MS: 1, 2, 5, 10, 20
- NUM_HIDDEN: 64, 128, 256, 512, 1024 (for DT_MS = 1 or 2); 256, 1024 (for other DT_MS)
- N_INPUT_DELAY: 0, 10
- AUGMENTATION: None; blend: [0.5, 0.5]; random_shift: 40.0; blend & shift
- HIDDEN_NEURON_TYPE: LIF, hetLIF
- TRAIN_TAU: False, True
5 different LBD_UPPER values were tested, with 2 repeats using different random seeds each (5120 runs in total). The files included are:
- SHD_xval_xxxx.json: parameter settings as above.
- SHD_xval_results.txt: results file with columns as described for scan_SHD_base_xval above.

scan_SHD_final_traintest.zip
Results from training on the SHD training set, interleaved with testing on the test set. This was done for 320 different parameter settings, corresponding to DT_MS = 1, 2 only and choosing the best LBD_UPPER as determined by the run in scan_SHD_final_xval where the average validation error, taken at the epoch of best training error in each fold, was lowest. For each of the 320 combinations, 8 independent runs with different random seeds were executed (2560 runs in total). For each run, there are 3 files:
- SHD_tt_xxxx.json: a JSON file with the parameter settings used.
- SHD_tt_xxxx_results.txt: the results file with columns as described before; columns 4 and 5 relate to the accuracy and loss on the test set.
- SHD_tt_xxxx_best.txt: the values from the epoch when the test accuracy was best; same columns as SHD_tt_xxxx_results.txt.
In addition, for the runs that had the best test results (within the 8 repeats), we also include
- SHD_tt_0004_w_input_hidden_best.npy: the weights from the input to the hidden layer.
- SHD_tt_0004_w_hidden_output_best.npy: the weights from the hidden to the output layer.
- SHD_tt_0004_w_hidden0_hidden0_best.npy: the recurrent weights from the hidden layer to itself (in this scan all networks are recurrent).

scan_SSC_final.zip
Results from the ablation experiments on SSC. We ran the same parameter combinations as for scan_SHD_final_xval but, as SSC has a dedicated validation set, the runs were performed as training epochs interleaved with testing on the validation set (5120 runs). The provided files are
- SSC_xxxx.json: the parameter values used.
- SSC_xxxx_results.txt: the results of the training/validation run.
- SSC_5118_best.txt: the row from SSC_xxxx_results.txt with the best validation error.
We subsequently ran testing on the trained network from the epoch where the validation error was best. From these runs we have
- SSC_5118_test.json: the parameter settings of the test run.
- SSC_5118_test_results.txt: the results of the test run. This has the same columns as the training runs, except that columns 2 and 3 carry no meaning.
For the runs of a given parameter setting that were best across LBD_UPPER and random seed values, we also provide
- SSC_xxxx_w_input_hidden_best.npy: the weights from the input to the hidden layer for the epoch where the validation error was best. These are the connection weights used for the testing run.
- SSC_xxxx_w_hidden_output_best.npy: the corresponding weights from the hidden to the output layer.
- SSC_xxxx_w_hidden0_hidden0_best.npy: the corresponding recurrent weights from the hidden layer to itself (in this scan all networks are recurrent).

scan_SSC_final_repeats.zip
In this scan we made 6 more repeated runs for all parameter combinations from scan_SSC_final with the best performing LBD_UPPER values (1920 runs). The files provided are exactly as for scan_SSC_final.

Relationship to the publication
Figure 2 of the publication is based on scan_SHD_base_xval and scan_SHD_base_traintest; the panels of the figure can be generated using the scripts plot_SHD_base_curves.py and plot_SHD_base_summary.py.
Figure 3 of the publication is based on the data in scan_SHD_final_traintest; its panels can be generated with the script plot_final_ablation.py with the argument "SHD".
Figure 4 of the publication is based on the data in scan_SSC_final and scan_SSC_final_repeats and can be generated with the script plot_final_ablation.py with the argument "SSC". 
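As a point of orientation only, the following minimal Python sketch shows how one might read a results file and a weight matrix laid out as described above; the file names, the run index and the layer sizes are illustrative assumptions, not a prescription from the repository.

```python
# Illustrative sketch only: file names, run index and layer sizes are assumptions.
import numpy as np

# Each row of a *_results.txt file holds 15 blank-separated columns:
# epoch, train acc, train loss, val/test acc, val/test loss, hidden-layer spike
# statistics, per-neuron spike statistics, silent-neuron count, elapsed time (s).
results = np.loadtxt("SHD_tt_0004_results.txt")
best = results[np.argmax(results[:, 3])]          # row with the best column-4 accuracy
print("best epoch:", int(best[0]), "accuracy:", best[3])

# Weight matrices are stored in "pre-major" order, so a flat input-to-hidden
# array reshapes to (n_input, n_hidden) with input neuron i's weights in row i.
n_input, n_hidden = 700, 256                      # assumed sizes (SHD has 700 input channels)
w_ih = np.load("SHD_tt_0004_w_input_hidden_best.npy").reshape(n_input, n_hidden)
print("input-to-hidden weights:", w_ih.shape)
```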
Type Of Material Database/Collection of data 
Year Produced 2025 
Provided To Others? Yes  
URL https://sussex.figshare.com/articles/dataset/Research_data_for_paper_Loss_shaping_enhances_exact_gra...
 
Title Stanmer Park outdoor navigational data 
Description This dataset contains omnidirectional 1440×1440 resolution images taken using a Kodak Pixpro SP360 camera, paired with RTK GPS information obtained using a simpleRTK2B - 4G NTRIP kit and fused yaw, pitch and roll data recorded from a BNO055 IMU. The data was collected using a 4-wheel ground robot that was manually controlled by a human operator. The robot was driven 15 times along a route at Stanmer Park (shown in map.png). The route consists mostly of open fields and a narrow path through a forest and is approximately 700 m long. The recordings took place on various days and times starting in March 2021, with the date and time indicated by the filename. For example, '20210420_135721.zip' corresponds to a route driven on 20/04/2021 starting at 13:57:21 GMT. During the recordings the weather varied from clear skies and sunny days to overcast and low-light conditions. Each recording consists of an mp4 video of the camera footage for the route and a database_entries.csv file with the following columns:
- Timestamp of video frame (in ms)
- X, Y and Z coordinates (in mm) and zone representing the location in UTM coordinates from GPS
- Heading, pitch and roll (in degrees) from the IMU; in some early routes the IMU failed, and where this occurs these values are recorded as "NaN"
- Speed and steering angle commands being sent to the robot at that time
- GPS quality (1 = GPS, 2 = DGNSS, 4 = RTK Fixed, 5 = RTK Float)
- X, Y and Z coordinates (in mm) fitted to a degree-one polynomial to smooth out GPS noise
- Heading (in degrees) derived from the smoothed GPS coordinates
- IMU heading (in degrees) with discontinuities resulting from IMU issues fixed
For completeness, each folder also contains a database_entries_original.csv containing the data before pre-processing. The pre-processing is documented in more detail in pre_processing_notes.pdf. 
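For illustration, a recording's CSV can be loaded and filtered along the lines of the sketch below; the column names used here are assumptions inferred from the description and should be checked against the actual header of database_entries.csv.

```python
# Illustrative sketch; column names are assumed, not taken from the dataset.
import pandas as pd

df = pd.read_csv("20210420_135721/database_entries.csv")

# Keep only frames with an RTK Fixed GPS solution (quality code 4).
fixed = df[df["gps_quality"] == 4]                # assumed column name

# Smoothed UTM coordinates are stored in mm; convert to metres for analysis.
x_m = fixed["x_smoothed"] / 1000.0                # assumed column name
y_m = fixed["y_smoothed"] / 1000.0                # assumed column name
print(f"{len(fixed)} of {len(df)} frames have RTK Fixed quality")
```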
Type Of Material Database/Collection of data 
Year Produced 2024 
Provided To Others? Yes  
Impact Conference paper in preparation 
URL https://sussex.figshare.com/articles/dataset/Stanmer_Park_outdoor_navigational_data/25118383
 
Title UoS campus and Stanmer park outdoor navigational data 
Description This dataset contains omnidirectional 1440×1440 resolution images taken using a Kodak Pixpro SP360 camera, paired with RTK GPS information obtained using a simpleRTK2B - 4G NTRIP kit and fused yaw, pitch and roll data recorded from a BNO055 IMU. The data was collected using a 4-wheel ground robot (SuperDroid IG42-SB4-T) that was manually controlled by a human operator. The robot was driven 10 times along a route on the University of Sussex campus (shown in campus.png) and 10 times at the adjacent Stanmer Park (shown in stanmer.png). The first route is a mix of urban structures (university buildings), small patches of trees and paths populated by people and is approximately 700 m long. The second route consists mostly of open fields and a narrow path through a forest and is approximately 600 m long. The recordings took place on various days and times starting in May 2023, with the date and time indicated by the filename. For example, 'campus_route5_2023_11_22_102925.zip' corresponds to the 5th route recorded on the Sussex campus on 22/11/2023 starting at 10:29:25 GMT. During the recordings the weather varied from clear skies and sunny days to overcast and low-light conditions. Each recording consists of the .jpg files that make up the route and a .csv file with the following columns:
- X, Y and Z coordinates (in mm) and zone representing the location in UTM coordinates from GPS
- Heading, pitch and roll (in degrees) from the IMU; in some early routes the IMU failed, and where this occurs these values are recorded as "NaN"
- Filename of the corresponding camera image
- Latitude and Longitude (in decimal degrees) and Altitude (in m) from GPS
- GPS quality (1 = GPS, 2 = DGNSS, 4 = RTK Fixed, 5 = RTK Float) and horizontal dilution (in mm)
- Timestamp (in ms) 
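As a further hedged sketch for this dataset, the example below pairs each image with its GPS position and drops frames where the IMU failed; the CSV file name and column names are assumptions to be checked against the actual files.

```python
# Illustrative sketch; the CSV file name and column names are assumptions.
import pandas as pd
from pathlib import Path

route = Path("campus_route5_2023_11_22_102925")
df = pd.read_csv(route / "database_entries.csv")     # assumed CSV name

# Drop frames where the IMU failed (heading/pitch/roll recorded as NaN).
df = df.dropna(subset=["heading", "pitch", "roll"])  # assumed column names

# Pair each camera image with its UTM position, converting mm to metres.
samples = [(route / row["filename"], row["x"] / 1000.0, row["y"] / 1000.0)
           for _, row in df.iterrows()]              # assumed column names
print(len(samples), "usable image/position pairs")
```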
Type Of Material Database/Collection of data 
Year Produced 2023 
Provided To Others? Yes  
Impact Conference paper in preparation 
URL https://sussex.figshare.com/articles/dataset/UoS_campus_and_Stanmer_park_outdoor_navigational_data/2...
 
Title Validation data for paper "SSSort 2.0: A semi-automated spike detection and sorting system for single sensillum recordings" 
Description Research data for the paper published in Journal of Neuroscience Methods, Volume 415, March 2025. This data was used to validate the sorting accuracy of SSSort 2.0 and of the other spike sorting methods used for comparison in the paper. It includes the SSR trace recordings as well as the 'ground truth' spike times used in this analysis. Ground truth data sets were generated using the Spike2 data acquisition and analysis package from Cambridge Electronic Design, Ltd. (https://ced.co.uk/). In order to improve the usefulness of this data, we have included the original Spike2 file formats (smrx and smr), the complete datasets in a widely compatible format (txt), and the individual test files in a Python-readable format (dll).

Data & File Overview
File List:
Synthetic 'ground truth' data in 64-bit Spike2 format (smrx):
· SSSort_doubleAB.smrx
· SSSort_singleA.smrx
· SSSort_singleB.smrx
Synthetic 'ground truth' data in spreadsheet text format (txt):
· SSSort_doubleAB.txt
· SSSort_singleA.txt
· SSSort_singleB.txt
Merged data files in 32-bit Spike2 format (smr):
· SSSort_doubleAB_asymXX.smr
· SSSort_singleA_asymXX.smr
· SSSort_singleB_asymXX.smr
Merged data files in Python-readable format (dll):
· SSSort_doubleAB_asymXX.dll
· SSSort_singleA_asymXX.dll
· SSSort_singleB_asymXX.dll

Data-specific Information for 'SSSort_doubleAB.smrx', 'SSSort_singleA.smrx' & 'SSSort_singleB.smrx':
These data files each contain seven Waveform channels and two Event+ channels:
· Channel 1: Waveform of A spiking data
· Channel 2: Waveform of B spiking data
· Channel 3: Event+ of A spike events
· Channel 4: Event+ of B spike events
· Channels 5-9: Waveforms of merged synthetic SSR data with 0.3 to 0.7 B:A asymmetries

Data-specific Information for 'SSSort_doubleAB.txt', 'SSSort_singleA.txt' & 'SSSort_singleB.txt':
These data files each contain the same data as the above '.smrx' files, but in a more openly readable format.

Data-specific Information for 'SSSort_doubleAB_asymXX.smr', 'SSSort_singleA_asymXX.smr' & 'SSSort_singleB_asymXX.smr':
These data files each contain a single Waveform channel of merged synthetic SSR data at XX B:A asymmetry. This format can be converted to dll with the 'smr2dill.py' script provided on the SSSort GitHub repository.

Data-specific Information for 'SSSort_doubleAB_asymXX.dll', 'SSSort_singleA_asymXX.dll' & 'SSSort_singleB_asymXX.dll':
These data files each contain the same single Waveform channel data as the above '.smr' files and are suitable for direct analysis in SSSort 2.0.

Abstract
Single-sensillum recordings are a valuable tool for sensory research which, by their nature, access extra-cellular signals typically reflecting the combined activity of several co-housed sensory neurons. However, isolating the contribution of an individual neuron through spike sorting has remained a major challenge due to firing-rate-dependent changes in spike shape and the overlap of co-occurring spikes from several neurons. These challenges have so far made it close to impossible to investigate the responses to more complex, mixed odour stimuli. Here we present SSSort 2.0, a method and software addressing both problems through automated and semi-automated signal processing. We have also developed a method for more objective validation of spike sorting methods based on generating surrogate ground truth data, and we have tested the practical effectiveness of our software in a user study. We find that SSSort 2.0 typically matches or exceeds the performance of expert manual spike sorting. 
We further demonstrate that, for novices, accuracy is much better with SSSort 2.0 under most conditions. Overall, we have demonstrated that spike-sorting with SSSort 2.0 software can automate data processing of SSRs with accuracy levels comparable to, or above, expert manual performance. 
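Neither tool is prescribed by the dataset, but as a hedged example the 32-bit .smr files can be read with the Neo library, and the Python-readable files can be deserialised with the dill package (suggested by the smr2dill.py converter mentioned above); the internal layout of the deserialised object is not documented here, so inspect it before relying on any structure.

```python
# Illustrative loading sketch; Neo and dill are assumptions, not requirements of the dataset.
import dill
from neo.io import Spike2IO

# Read one merged 32-bit Spike2 file (replace XX with an asymmetry value from the file list).
block = Spike2IO(filename="SSSort_singleA_asymXX.smr").read_block()
trace = block.segments[0].analogsignals[0]         # the single merged waveform channel
print(trace.sampling_rate, trace.shape)

# Deserialise the corresponding Python-readable file and inspect its structure before use.
with open("SSSort_singleA_asymXX.dll", "rb") as f:
    data = dill.load(f)
print(type(data))
```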
Type Of Material Database/Collection of data 
Year Produced 2025 
Provided To Others? Yes  
URL https://sussex.figshare.com/articles/dataset/Validation_data_for_paper_SSSort_2_0_A_semi-automated_s...
 
Description Bruno van Swinderen 
Organisation University of Queensland
Country Australia 
Sector Academic/University 
PI Contribution They are project partners on the ActiveAI grant. We will have reciprocal visits of PIs and postdocs when travel restrictions allow. We are helping them with modelling and computational analysis of data.
Collaborator Contribution They are project partners on the ActiveAI grant. We will have reciprocal visits of PIs and postdocs when travel restrictions allow. They are providing us with results from insect experiments which will inform our robotic models.
Impact The collaboration is multi-disciplinary: we contribute robotics and computational modelling, they contribute insect neuroscience.
Start Year 2019
 
Description Collaboration project with Dr Cwyn Solvi (Macquarie University) to investigate the impact of brain size on learning speed in temporal learning tasks. 
Organisation Macquarie University
Country Australia 
Sector Academic/University 
PI Contribution AO, EV, MM and LM developed computational models that supported the analysis of data captured by the collaborators from bumblebees and hummingbirds. All contributed to a paper submitted to Science in 2021.
Collaborator Contribution The collaborators provided behavioural data related to learning speeds in two species of bee and in hummingbirds, and co-wrote the collaborative paper.
Impact Redrafting the paper following feedback from Science. Multidisciplinary research, with Sheffield providing the machine learning / computational neuroscience expertise and our Australian partners the behavioural and neuroscience expertise.
Start Year 2021
 
Description Collaboration with Dr Nicholas Szczecinski (West Virginia University) 
Organisation West Virginia University
Country United States 
Sector Academic/University 
PI Contribution I collaborated with Dr Szczecinski to propose and run a workshop at the 10th Anniversary Living Machines Conference. INVERTEBRATE ROBOTICS, AKA "NO BACKBONE, NO PROBLEM" - 29 JULY 2021. We then further collaborated on a review paper that summarised the key themes of the workshop.
Collaborator Contribution I collaborated with Dr Szczecinski to propose and run a workshop at the 10th Anniversary Living Machines Conference. INVERTEBRATE ROBOTICS, AKA "NO BACKBONE, NO PROBLEM" - 29 JULY 2021. We then further collaborated on a review paper that summarised the key themes of the workshop.
Impact The workshop that we ran was a great success at the conference which was held during COVID lockdown. The summary outcomes were presented in a review paper published in Bioinspiration & Biomimetics in 2023.
Start Year 2021
 
Description Collaboration with Macquarie University 
Organisation Macquarie University
Country Australia 
Sector Academic/University 
PI Contribution They are project partners on the ActiveAI grant. We will have reciprocal visits of PIs and postdocs when travel restrictions allow. We are helping them with modelling and computational analysis of data.
Collaborator Contribution They are project partners on the ActiveAI grant. We will have reciprocal visits of PIs and postdocs when travel restrictions allow. They are providing us with results from insect experiments which will inform our robotic models.
Impact The collaboration is multi-disciplinary: we contribute robotics and computational modelling, they contribute insect neuroscience.
Start Year 2019
 
Description JM collaboration with Andy Barron at Macquarie 
Organisation Macquarie University
Country Australia 
Sector Academic/University 
PI Contribution Reciprocal research visits between the academics. JM gave a talk in Australia. Publications on the mismatch between current AI and brains
Collaborator Contribution Reciprocal research visits between the academics. JM gave a talk in Australia. Publications on the mismatch between current AI and brains
Impact JM gave a talk at Macquarie in 2024. Upcoming paper
Start Year 2022
 
Description Karin Nordstrom 
Organisation Flinders University
Country Australia 
Sector Academic/University 
PI Contribution They are project partners on the ActiveAI grant. We will have reciprocal visits of PIs and postdocs when travel restrictions allow. We are helping them with modelling and computational analysis of data.
Collaborator Contribution They are project partners on the ActiveAI grant. We will have reciprocal visits of PIs and postdocs when travel restrictions allow. They are providing us with results from insect experiments which will inform our robotic models.
Impact The collaboration is multi-disciplinary: we contribute robotics and computational modelling, they contribute insect neuroscience.
Start Year 2019
 
Description MM and AP new collaboration on novel bee tracking methods with Dr Mike Smith (University of Sheffield, UK) 
Organisation University of Sheffield
Department Department of Computer Science
Country United Kingdom 
Sector Academic/University 
PI Contribution Aiding design of new tracking methods, experimental design, establishing animal tracking network, co-supervision of PhDs
Collaborator Contribution Aiding design of new tracking methods, experimental design, establishing animal tracking network, co-supervision of PhDs
Impact TBA
Start Year 2023
 
Title CATER: combined animal tracking and environment reconstruction 
Description Method to extract animal positions from video data and to embed those tracks in a reconstructed background, allowing high-spatiotemporal-resolution analysis of animal behaviour in the wild and with reference to the habitat. 
Type Of Technology New/Improved Technique/Technology 
Year Produced 2024 
Open Source License? Yes  
Impact Insights raised from the analysis are linked to the methods paper; multiple lab groups are now using the tool; new studies have been inspired by these outcomes. 
URL https://www.science.org/doi/10.1126/sciadv.adg2094
 
Title Code related to paper - EchoVPR: Echo State Networks for Visual Place Recognition 
Description Code to train and apply Echo-State-Networks to the problem of visual place recognition. 
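For context, an echo state network keeps a fixed random recurrent reservoir and trains only a linear readout; the sketch below is a generic leaky-integrator reservoir update, not the EchoVPR implementation, with sizes and constants chosen purely for illustration.

```python
# Generic echo state network reservoir update (illustrative; not the EchoVPR code).
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, leak = 4096, 500, 0.3               # e.g. an image descriptor driving the reservoir

W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))     # fixed random input weights
W = rng.uniform(-0.5, 0.5, (n_res, n_res))       # fixed random recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))  # scale spectral radius below 1

def step(x, u):
    """Leaky-integrator state update; only a linear readout on x would be trained."""
    return (1.0 - leak) * x + leak * np.tanh(W_in @ u + W @ x)

x = np.zeros(n_res)
for u in rng.standard_normal((10, n_in)):        # stand-in for a sequence of image features
    x = step(x, u)
print(x.shape)
```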
Type Of Technology New/Improved Technique/Technology 
Year Produced 2022 
Open Source License? Yes  
Impact Acceptance to leading robotics conference ICRA 2022, and publication in leading robotics journal Robotics and Automation Letters. 
URL https://anilozdemir.github.io/EchoVPR/
 
Title CompoundRay: An open-source tool for high-speed and high-fidelity rendering of compound eyes 
Description CompoundRay is a new open-source renderer that accurately renders the visual perspective of insect eyes at over 5,000 frames per second in a 3D mapped natural environment. It supports ommatidial arrangements at arbitrary positions with per-ommatidial heterogeneity. 
Type Of Technology New/Improved Technique/Technology 
Year Produced 2021 
Impact The basis for an investigation of the effect of insect eye shape on visual homing tasks (Blayze Millward); a new collaboration with researchers at Flinders University (Dr Karin Nordstrom); a new collaboration with researchers at DeepMind (Dr Chrisantha Fernando). 
URL https://www.biorxiv.org/content/10.1101/2021.09.20.461066v1
 
Title genn-team/genn: GeNN 4.8.0 
Description Release Notes for GeNN 4.8.0. This release adds a number of significant new features to GeNN as well as a number of bug fixes that have been identified since the 4.7.1 release.
User Side Changes
- Custom updates extended to work on SynapseMatrixWeight::KERNEL weight update model variables (#524).
- Custom updates extended to perform reduction operations across neurons as well as batches (#539).
- PyGeNN can now automatically find Visual Studio build tools using functionality in setuptools.msvc.msvc14_get_vc_env (#471).
- GeNN now comes with a fully-functional Docker image and releases will be distributed via Dockerhub as well as existing channels. Special thanks to @Stevinson, @jamesturner246 and @bdevans for their help on this (see the README for more information) (#548 and #550).
Bug fixes
- Fixed bug relating to merging of synapse groups which perform presynaptic "revInSyn" updates (#520).
- Added missing parameter to the PyGeNN pygenn.genn_model.create_custom_postsynaptic_class function so postsynaptic models with extra global parameters can be created (#522).
- Correctly substitute 0 for $(batch) when using the single-threaded CPU backend (#523).
- Fixed issues building PyGeNN with Visual Studio 2017 (#533).
- Fixed bug where a model might not be rebuilt if the sparse connectivity initialisation snippet was changed (#547).
- Fixed longstanding bug in the gen_input_structured tool -- used by some userprojects -- where data was written outside of array bounds (#551).
- Fixed issue with debug mode of genn-buildmodel.bat when used with the single-threaded CPU backend (#551).
- Fixed issue where, if custom update models were the only part of a model that required an RNG for initialisation, one might not be instantiated (#540). 
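For readers unfamiliar with the library, the sketch below shows the general shape of a PyGeNN (GeNN 4.x) model definition and simulation loop; it uses only long-standing API calls with illustrative parameter values and does not exercise the 4.8.0-specific features listed above.

```python
# Minimal PyGeNN (GeNN 4.x) sketch with illustrative parameter values;
# it does not use the 4.8.0-specific features described above.
from pygenn.genn_model import GeNNModel

model = GeNNModel("float", "toy_model")
model.dT = 1.0  # simulation timestep in ms

lif_params = {"C": 1.0, "TauM": 20.0, "Vrest": -65.0, "Vreset": -65.0,
              "Vthresh": -50.0, "Ioffset": 0.0, "TauRefrac": 2.0}
lif_init = {"V": -65.0, "RefracTime": 0.0}
model.add_neuron_population("Pop", 100, "LIF", lif_params, lif_init)

model.build()   # generate and compile the simulation code
model.load()    # allocate memory and initialise the model
for _ in range(1000):
    model.step_time()
```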
Type Of Technology Software 
Year Produced 2022 
Open Source License? Yes  
Impact As well as the ongoing impact of helping the user community, specific advances were used in both an accepted conference paper at NICE 2023 and the preprint arXiv:2212.01232v1. 
URL https://zenodo.org/record/7267620
 
Title genn-team/genn: GeNN 5.0.0 
Description Release Notes for GeNN 5.0.0. This is a very large update to GeNN that fixes a large number of longstanding bugs and, we hope, will make GeNN easier to use and enable various exciting new features in the near future. The licence has also been switched from GPL to LGPL, making it mildly more liberal by allowing PyGeNN to be used as a component in closed-source systems. This release breaks backward compatibility, so all models are likely to require updating, but the documentation has also been completely redone and the pre-release version is at https://genn-team.github.io/genn/documentation/5/. This includes a guide to updating existing models.
New features
- GeNN has a whole new code generator. This gives much better quality error messages to the user about syntax/typing errors in code strings and will enable us to do smarter optimisations in future, but it does restrict user code to a well-defined subset of C99 (https://github.com/genn-team/genn/pull/595).
- As well as simulation kernels, GeNN 4.x generated large amounts of boilerplate for allocating memory and copying from device to host. This resulted in very long compile times with large models. In GeNN 5 we have replaced this with a new runtime which reduces compilation time by around 10x on very large models (https://github.com/genn-team/genn/pull/602).
- In GeNN 4.x, parameters were always of "scalar" type. This resulted in poor code generation when they were used to store integers. Parameters now have types and can also be made dynamic, allowing them to be changed at runtime (https://github.com/genn-team/genn/pull/607).
- Weight update models now have postsynaptic spike-like events, allowing a wider class of learning rules to be implemented (https://github.com/genn-team/genn/pull/609).
Bug fixes
- PyGeNN only really works with precision set to float (#289)
- Refine global-register-global transfers (#55)
- Avoiding creating unused variables (#47)
- PyGeNN doesn't correctly handle neuron variables with delay slots (#393)
- assign_external_pointer overrides should use explicitly sized integer types (#288)
- Repeat of spike-like-event conditions in synapse code flawed (#379)
- Dangerous conflict potential of user and system code (#385)
- Accessing queued pre- and postsynaptic weight update model variables (#402)
- Linker-imposed model complexity limit on Windows (#408)
- 'error: duplicate parameter name' when running ./generate_run test in userproject/Izh_sparse_project (#416)
- Issues with merging synapse groups where pre- or postsynaptic neuron parameters are referenced (#566)
- Presynaptic synapse variable undefined in event threshold condition (#594) 
Type Of Technology Software 
Year Produced 2024 
Open Source License? Yes  
Impact No direct impacts yet, but the work contained in this release is going to be vital for subsequent work packages of my fellowship. 
URL https://zenodo.org/doi/10.5281/zenodo.11032927
 
Title genn-team/genn: GeNN 5.1.0 
Description Release Notes for GeNN 5.1.0. This release adds a number of significant new features to GeNN as well as a number of bug fixes that have been identified since the 5.0.0 release.
User Side Changes
- Updated CUDA block size optimiser to support SM9.0 (#627)
- Access to postsynaptic variables with heterogeneous delay (#629)
- Special variable references for zeroing internals (#634)
- Stop Windows CUDA compilation relying on the correct order of CUDA + Visual Studio installation (#639)
Bug fixes
- Fixed issues building GeNN on Mac OS/Clang (#623)
- Fixed bug when using dendritic delays in batched models (#630)
- Fixed issues with setuptools version 74 and newer (#636, #640)
- Fixed bug with merging/fusing of neuron groups with multiple spike-like event conditions (#638) 
Type Of Technology Software 
Year Produced 2024 
Open Source License? Yes  
Impact The updates in this release were key to our recent preprint 10.48550/arXiv.2501.07331 
URL https://zenodo.org/doi/10.5281/zenodo.14051978
 
Title genn-team/genn: GeNN v4.7.0 
Description Release Notes for GeNN v4.7.0. This release adds a number of significant new features to GeNN as well as a number of bug fixes that have been identified since the 4.6.0 release.
User Side Changes
- While a wide range of convolutional-type connectivity can be implemented using SynapseMatrixConnectivity::PROCEDURAL, the performance is often worse than sparse connectivity. SynapseMatrixConnectivity::TOEPLITZ provides a more efficient solution, with InitToeplitzConnectivitySnippet::Conv2D and InitToeplitzConnectivitySnippet::AvgPoolConv2D implementing some typical connectivity patterns (#484).
- Shared weight kernels previously had to be provided as extra global parameters via the InitVarSnippet::Kernel variable initialisation snippet. This meant kernels had to be manually allocated to the correct size and couldn't be initialised using standard functionality. SynapseMatrixWeight::KERNEL allows kernels to be treated as standard state variables (#478).
- Some presynaptic updates need to update the state of presynaptic as well as postsynaptic neurons. These updates can now be made using the $(addToPre,...) function from presynaptic update code, and the destination additional input variable can be specified using SynapseGroup::setPreTargetVar (#479).
- On Windows, all models in the same directory would build their generated code into DLLs with the same name, which prevented the caching system introduced in v4.5.0 from working properly. CodeGenerator::PreferencesBase::includeModelNameInDLL includes the name of the model in the DLL filename, resolving this problem. This is now the default behaviour in PyGeNN but, when using GeNN from C++, the flag must be set manually and MSBuild projects updated to link to the correct DLL (#476).
- Neuron code can now sample the binomial distribution using $(gennrand_binomial), and this can be used to initialise variables with InitVarSnippet::Binomial (#498).
- In the latest version of Windows Subsystem for Linux, CUDA is supported but libcuda is mounted in a non-standard location. GeNN's CUDA backend now adds this location to the linker paths (#500).
Bug fixes
- Fixed issues with some configurations of InitSparseConnectivitySnippet::Conv2D when stride > 1, which caused incorrect connectivity to be instantiated as well as crashes when this snippet was used to generate sparse connectivity (#489, #491).
- Fixed issue where, if $(addToInSynDelay) was used in spike-like event code, it was not detected and dendritic delay structures were not correctly created (#494).
- Fixed issue where precision wasn't being correctly applied to neuron additional input variables and sparse connectivity row-build state variable initialisation, meaning double-precision code could unintentionally be generated (#489). 
Type Of Technology Software 
Year Produced 2022 
Open Source License? Yes  
Impact As well as the ongoing impact of helping the user community, specific advances were key to the results in 10.1088/2634-4386/ac5ac5 and to initial work towards the preprint arXiv:2212.01232v1. 
URL https://zenodo.org/record/6047460
 
Description Andy Philippides was interviewed by BBC South East about a 'robotic' grape harvester 
Form Of Engagement Activity A press release, press conference or response to a media enquiry/interview
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact TV interview with local media to discuss a 'robotic' grape harvester
Year(s) Of Engagement Activity 2020
 
Description Article in The Times following AAAS 2020 
Form Of Engagement Activity A press release, press conference or response to a media enquiry/interview
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact The Times published an article on the project, 'Bees help drones to find their bearings', on 17/02/2020, following on from us exhibiting at AAAS 2020 with UKRI on 15/02/2020.
Year(s) Of Engagement Activity 2020
URL https://www.thetimes.co.uk/article/bees-help-drones-to-find-their-bearings-jfnfgs8x2
 
Description Article published by The Telegraph following on from AAAS 2020 
Form Of Engagement Activity A press release, press conference or response to a media enquiry/interview
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact The Telegraph published an article on the project, 'Bees are being mapped to help develop driverless cars and drones by scientists glueing tiny antennas to their heads', on 17/02/2020, following on from us exhibiting at AAAS 2020 with UKRI on 15/02/2020.
Year(s) Of Engagement Activity 2020
URL https://www.telegraph.co.uk/news/2020/02/17/bees-mapped-help-develop-driverless-cars-drones-scientis...
 
Description Catalan government AI funding report 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Policymakers/politicians
Results and Impact I contributed to a report on AI funding for digital innovation hubs presented to the Barcelona Chamber of Commerce and the Catalan Regional Government.
Year(s) Of Engagement Activity 2021
 
Description Exhibit at AAAS 2020 International Reception hosted by UKRI 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact The project was invited by UKRI to showcase at their VIP International Reception, hosted as part of AAAS 2020 in Seattle, February 2020. James Marshall, Alex Cope, Jamie Knight and Joe Woodgate exhibited the project, providing an overview of the research and technology emerging from the project, and demonstrations of drone and ground-based robotics. This resulted in significant international media coverage of the project.
Year(s) Of Engagement Activity 2020
URL https://www.ukri.org/aaas/
 
Description Financial Times article following AAAS 2020 
Form Of Engagement Activity A press release, press conference or response to a media enquiry/interview
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact The Financial Times published an article on the project, 'Scientists look to bees to develop drone technology', on 17/02/2020, following on from us exhibiting at AAAS 2020 with UKRI on 15/02/2020.
Year(s) Of Engagement Activity 2020
URL https://www.ft.com/content/bf3c83fe-5081-11ea-8841-482eed0038b1
 
Description Interview with Sky News following AAAS 2020 
Form Of Engagement Activity A press release, press conference or response to a media enquiry/interview
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact James Marshall was interviewed about the project by Sky News on 18/02/2020, following on from the project exhibiting with UKRI at AAAS 2020 on 15/02/2020.
Year(s) Of Engagement Activity 2020
 
Description Invited speaker at IROS2023 workshop: Closing the Loop on Localization: What Are We Localizing For, and How Does That Shape Everything We Should Do 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Workshop at a top robotics conference (audience 100+) that included academic and industrial researchers, robotics companies, funders, etc. There was significant interest in the alternative brain-based approach to autonomy that was proposed, and follow-up commercial opportunities were initiated.
Year(s) Of Engagement Activity 2023
URL https://oravus.github.io/vpr-workshop/
 
Description MM gave keynote talk at UKRAS 2024 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Presented the natural intelligence approach as an alternative to AI to 300+ robotics researchers, industry, funders, policymakers etc. at the IEEE UKRAS conference 2024.
Year(s) Of Engagement Activity 2024
URL https://www.sheffield.ac.uk/sheffieldrobotics/7th-ieee-uk-ireland-ras-conference-ras-2024#:~:text=Ta...
 
Description MM gave talk at European Robotics Forum - Bioinspired Robotics Topic Group 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Gave a presentation about the natural intelligence approach at the European Robotics Forum topic group session on bioinspired robotics.
Year(s) Of Engagement Activity 2023
URL https://erf2023.sdu.dk/
 
Description MM interviewed for French magazine - La Vie 
Form Of Engagement Activity A magazine, newsletter or online publication
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact MM gave an interview about insect inspired intelligence to French magazine La Vie
Year(s) Of Engagement Activity 2024
 
Description MM organised 1st "Small Animal Tracking" workshop, Univ of Sheffield, Jan 2024 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact MM organised the 1st Small Animal Tracking Workshop that was held in Sheffield in Jan, 2024. Attendees from across Europe.
Year(s) Of Engagement Activity 2024
 
Description MM organised 1st workshop on small animal tracking. AP and JK gave talks 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Workshop bringing together researchers, students, research software engineers and businesses from across the EU interested in small animal tracking. Multiple follow-on funding and outreach activities were discussed and are to follow.
Year(s) Of Engagement Activity 2024
 
Description NATO Autonomy Workshop 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Policymakers/politicians
Results and Impact Four members of the project team (James Marshall, Alex Cope, Joe Woodgate & Jamie Knight) attended as invited speakers and panellists at a workshop run by NATO.
Year(s) Of Engagement Activity 2020
 
Description New Scientist Comment by Prof. James Marshall 
Form Of Engagement Activity A magazine, newsletter or online publication
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact Comment piece in the 'Life' section of the publication by Prof. James Marshall.
Year(s) Of Engagement Activity 2021
URL https://www.newscientist.com/article/mg24933220-100-insect-brains-will-teach-us-how-to-make-truly-in...
 
Description Organised workshop at Living Machines Conference: INVERTEBRATE ROBOTICS, AKA "NO BACKBONE, NO PROBLEM" 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Workshop on INVERTEBRATE ROBOTICS, AKA "NO BACKBONE, NO PROBLEM" as part of the 10th anniversary Living Machines conference series. Mainly to other academics, inspired a lively discussion and a follow on review paper.
Year(s) Of Engagement Activity 2021
URL https://livingmachinesconference.eu/2021/conference/invertebrate-robotics/
 
Description Organiser and Chair of INVITED SYMPOSIUM - NEW TOOLS TO STUDY BEHAVIOUR IN THE FIELD: INSIGHTS FROM INSECT NAVIGATION at ICN 2022 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Proposed, organised and chaired the workshop on NEW TOOLS TO STUDY BEHAVIOUR IN THE FIELD: INSIGHTS FROM INSECT NAVIGATION at the International Congress of Neuroethology, 2022. The workshop was very well attended, with lively debate and follow-up discussions on how biologists and engineers could collaborate more on future research projects. ECRs were also selected to speak, providing career opportunities.
Year(s) Of Engagement Activity 2022
URL https://www.neuroethology.org/Meetings
 
Description PIP talk on AI 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach Local
Primary Audience Public/other audiences
Results and Impact I gave a public engagement talk on the impact of AI to a (mainly local) group
Year(s) Of Engagement Activity 2021
 
Description RSS2023 Workshop on Rapid and Robust Robotic Active Learning (R3AL): AP/DS organisers, MM invited speaker 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact International researchers attended workshop on rapid learning in robots, with panel discussion and extended debate.
Year(s) Of Engagement Activity 2023
URL https://r3al.sdu.dk/category/uncategorized/
 
Description Speaker at 60 years of Sussex Research Partnership Conference 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Other audiences
Results and Impact I was selected to present the research of Brains on Board and ActiveAI as part of the 60 years of Sussex Research Partnership Conference, highlighting academic partnerships
Year(s) Of Engagement Activity 2022
URL https://www.sussex.ac.uk/about/60-years-of-sussex/news-and-events?id=57433
 
Description UKRI-BBSRC Expert Working Group on the Use of Models in research 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Policymakers/politicians
Results and Impact I was a member of the UKRI-BBSRC Expert Working Group on the Use of Models in research. We debated the subject, which was turned into a report on which we provided feedback.
Year(s) Of Engagement Activity 2021
 
Description Virtual Insect Navigation Workshop Aug 4th-6th 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Presentation at the Virtual Insect Navigation Workshop Aug 4th-6th
Year(s) Of Engagement Activity 2020
 
Description Work featured on YouTube on SciShow 
Form Of Engagement Activity A broadcast e.g. TV/radio/film/podcast (other than news/press)
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact My bumblebee research was featured in an episode of SciShow on YouTube
Year(s) Of Engagement Activity 2021
URL https://www.youtube.com/watch?v=qqIPe3Ya8y0