ONR-15-FOA-0011 MURI Topic #3 - Closed-Loop Multisensory Brain-Computer Interface for Enhanced Decision Accuracy

Lead Research Organisation: University of Essex
Department Name: Computer Science and Electronic Engineering

Abstract

The goals of our interdisciplinary effort are to develop new methodologies for modeling multimodal neural activity underlying multisensory processing and decision making, and to use those methodologies to design closed-loop adaptive algorithms for optimized exploitation of multisensory data for brain-computer communication. We are motivated by the observation that a dismounted soldier or a tank driver routinely makes decisions in time-pressured and stressful conditions based on a multiplicity of multisensory information presented in cluttered and distracting environments. We envision a closed-loop brain-computer interface (BCI) architecture for enhancing decision accuracy. The architecture will collect multimodal neural, physiological, and behavioral data, decode mental states such as attention orientation and situational awareness, and use the decoded states as feedback to adaptively change the multisensory cues provided to the subject, thus closing the loop. To realize such an architecture we will make fundamental advances on four fronts, constituting our research Thrusts: (1) modeling multisensory integration, attention, and decision making, and the associated neural mechanisms; (2) machine-learning algorithms for high-dimensional multimodal data fusion; (3) adaptive tracking of the neural and behavioral models during online operation of the BCI; and (4) adaptive BCI control of multisensory cues for optimized performance. We have assembled a multidisciplinary team with expertise spanning engineering, computer science, and neuroscience. We will take a fully integrated approach to address these challenges by combining rare state-of-the-art experimental capabilities with novel computational modeling. Complementary experiments in rodents, monkeys, and humans will collect multimodal data to study and model multisensory integration, attention, and decision making, and to prototype a BCI for enhanced decision accuracy. Our modeling efforts will span Bayesian inference, stochastic control, adaptive signal processing, and machine learning to develop: novel Bayesian and control-theoretic models of the brain mechanisms; new stochastic models of multimodal data and adaptive inference algorithms for this data; and novel adaptive stochastic controllers of multisensory cues based on the feedback of users' cognitive state.

Planned Impact

DoD/MoD

A soldier makes decisions in time-pressured and stressful conditions, based on a cluttered visual and auditory scene containing moving objects, flashes, varying lighting conditions, and diverse auditory cues. By discovering the fundamental mechanisms underlying multisensory integration, attention, and decision making, and by developing new machine learning and control-theoretic methods to model, decode, and control the neural mechanisms underlying key mental states, we will develop a closed-loop BCI to enhance decision accuracy under such adverse scenarios. This will significantly advance DoD/MoD efforts to increase situational awareness and national security.

Civilian Impacts

The research will also serve civilian and commercial needs, such as pilot, vehicular, and command-and-control interfaces. Given the truly interdisciplinary nature of this work, which spans neuroscience, engineering, and computer science, this proposal will generate new programs of study to train graduate students in these emerging interdisciplinary areas. We will train 14 graduate students and postdocs per year. These highly qualified trainees will have a positive impact on the US/UK economy, science, and engineering.

Publications


Amadori P (2022) HammerDrive: A Task-Aware Driving Visual Attention Model in IEEE Transactions on Intelligent Transportation Systems

Amadori P (2022) Predicting Secondary Task Performance: A Directly Actionable Metric for Cognitive Overload Detection in IEEE Transactions on Cognitive and Developmental Systems

Bhattacharyya S (2019) Collaborative Brain-Computer Interfaces to Enhance Group Decisions in an Outpost Surveillance Task. in Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)

 
Description This is a joint project between three UK sites (Essex, coordinating node, EP/P009204/1; Imperial College, EP/P008461/1; UCL, EP/P009069/1) and several US sites (University of Southern California, University of California Berkeley, UCLA, Harvard University and New York University). Below we report the results obtained at the three UK sites:

Imperial College

(a) We developed deep learning models that predict the focus of attention of a driver from scene information and telemetry data. Our studies have shown that manoeuvre awareness is greatly beneficial for predicting human visual attention and that it can be leveraged via telemetry data to achieve robust and reliable predictions. Being able to anticipate a driver's focus of attention can greatly benefit human safety, for example through inattention prevention. We have also designed a sequential model that uses physiological information, i.e., gaze and head pose, in a sequence-to-sequence learning paradigm to infer the likelihood of imminent mistakes by a driver on a cognitive task. Our work has shown that gaze patterns and head movements correlate with decision-making mistakes in highly cluttered and dynamic driving environments, and that sequential deep learning models can exploit this correlation to anticipate the likelihood of secondary-task mistakes up to 2 seconds in advance. Finally, we have demonstrated that physiological and behavioural data from human drivers can be used to develop personalised models for cognitive workload prediction.
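For illustration, below is a minimal sketch of the kind of sequential model described above: a recurrent network that maps a short window of gaze and head-pose features to the probability of an imminent secondary-task mistake. The architecture, feature set and hyperparameters are assumptions made for illustration only; this is not the published attention or overload model.

```python
# Minimal sketch (assumption-laden): a GRU that maps a sequence of gaze/head-pose
# features to the probability of an upcoming secondary-task mistake.
import torch
import torch.nn as nn

class MistakeAnticipator(nn.Module):
    def __init__(self, n_features: int = 6, hidden: int = 64):
        super().__init__()
        # n_features: e.g. 2D gaze position, 3D head rotation, pupil size (assumed)
        self.encoder = nn.GRU(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, n_features), e.g. a 2-second window of samples
        _, h = self.encoder(x)                   # final hidden state: (1, batch, hidden)
        return torch.sigmoid(self.head(h[-1]))   # probability of a mistake

# Toy usage: 8 windows of 120 time steps (e.g. 60 Hz x 2 s) with 6 features each
model = MistakeAnticipator()
windows = torch.randn(8, 120, 6)
p_mistake = model(windows)                       # shape (8, 1), values in (0, 1)
print(p_mistake.squeeze(-1))
```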

(b) We have established that SpO2 levels can be reliably monitored with our in-ear-canal sensor. The in-ear-canal reading of SpO2 detects decreases in whole-body SpO2 faster (12.4 s faster on average) than conventional wrist/hand measures. The sensor is designed to be worn for long periods, in uncontrolled, real-world environments, without causing discomfort to the user or significantly impeding activity. Because of the ear canal's proximity to the major blood vessels supplying the brain, the in-ear-canal SpO2 reading is highly sensitive to brain metabolism. Our study showed that the in-ear-canal SpO2 sensor can reliably identify varying levels of cognitive workload (higher cognitive workload = higher oxygen consumption = lower in-ear-canal SpO2).
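For context, pulse oximetry commonly estimates SpO2 from the "ratio of ratios" of the pulsatile (AC) and baseline (DC) components of the red and infrared photoplethysmography signals. The sketch below uses a textbook linear approximation (SpO2 ≈ 110 − 25R); it is an illustration only and not the calibration used in the in-ear sensor described above.

```python
# Illustrative "ratio of ratios" SpO2 estimate from red/infrared PPG windows.
import numpy as np

def spo2_ratio_of_ratios(red: np.ndarray, ir: np.ndarray) -> float:
    """Textbook SpO2 approximation (110 - 25*R); not the in-ear sensor's calibration."""
    def ac_dc_ratio(x):
        dc = np.mean(x)     # baseline (DC) component
        ac = np.ptp(x)      # peak-to-peak pulsatile (AC) component
        return ac / dc

    r = ac_dc_ratio(red) / ac_dc_ratio(ir)   # the "ratio of ratios"
    return float(np.clip(110.0 - 25.0 * r, 0.0, 100.0))
```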


UCL

UCL is working on machine learning theory, in particular methods for meta-learning and learning to learn. Our work has been presented at the NeurIPS, ICML, and UAI conferences and is receiving increasing attention from the research community. The work has led to principled algorithms for learning estimators that work well on a class of supervised learning tasks. More recently, we have begun to explore applications to user-modelling problems in the context of the project.
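As a flavour of the learning-to-learn setting, the sketch below implements one classical idea: ridge regression shrunk towards a meta-learned bias vector shared across related tasks. It is an illustration under simplified assumptions, not the algorithms published at the venues above.

```python
# Minimal learning-to-learn sketch: ridge regression biased towards a meta-learned
# mean weight vector shared across tasks (illustrative, not the published methods).
import numpy as np

def ridge_with_bias(X, y, w0, lam=1.0):
    """Solve min_w ||Xw - y||^2 + lam * ||w - w0||^2 (ridge shrunk towards w0)."""
    d = X.shape[1]
    A = X.T @ X + lam * np.eye(d)
    return np.linalg.solve(A, X.T @ y + lam * w0)

def meta_learn_bias(tasks, lam=1.0, n_rounds=20):
    """Estimate a shared bias w0 as the average of the per-task ridge solutions."""
    d = tasks[0][0].shape[1]
    w0 = np.zeros(d)
    for _ in range(n_rounds):
        per_task = [ridge_with_bias(X, y, w0, lam) for X, y in tasks]
        w0 = np.mean(per_task, axis=0)   # move the shared bias towards the task solutions
    return w0

# Toy usage: tasks share a common underlying weight vector plus task-specific noise
rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
tasks = []
for _ in range(10):
    X = rng.normal(size=(30, 5))
    w_t = w_true + 0.1 * rng.normal(size=5)
    tasks.append((X, X @ w_t + 0.05 * rng.normal(size=30)))
w0 = meta_learn_bias(tasks)
print(np.round(w0 - w_true, 2))          # should be close to zero
```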

Essex

Brain signals vary greatly from person to person, from day to day, and from mental task to mental task. So, for a brain-computer interface (BCI) to perform well, one normally needs to retrain it, at least to some degree, before every use to adapt it to the user and task. In the project we found that confidence (a person's own evaluation of the probability of their decision being correct) is highly correlated with actual decision-making performance, and we developed techniques that make it possible to estimate the decision confidence of users performing perceptual decision-making tasks, on a decision-by-decision basis, without the need to train the BCI for the specific person or task. Our solution uses an artificial neural network to predict the confidence in each decision from EEG data and response times; the network is trained with data acquired from other participants performing a variety of decision-making tasks. In this work we also unveiled why this is possible: confidence and its degree are encoded in a similar manner in the brain across individuals and decision-making tasks.
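A minimal sketch of the cross-participant scheme described above follows: a small neural-network regressor is trained on EEG features and response times pooled from other participants (leave-one-subject-out) and then applied, without retraining, to a new user. The feature layout, dimensions and architecture are illustrative assumptions, not the project's actual pipeline.

```python
# Sketch of cross-participant confidence prediction (assumed features/architecture).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def train_confidence_model(features_by_subject, confidence_by_subject, held_out):
    """Train on every participant except `held_out` (leave-one-subject-out)."""
    X = np.vstack([f for s, f in features_by_subject.items() if s != held_out])
    y = np.concatenate([c for s, c in confidence_by_subject.items() if s != held_out])
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(32, 16), max_iter=2000,
                                       random_state=0))
    return model.fit(X, y)

# Toy usage: per-subject matrices of [EEG features ..., response time] per decision
rng = np.random.default_rng(1)
feats = {s: rng.normal(size=(100, 9)) for s in range(5)}    # 8 EEG features + RT (assumed)
conf = {s: rng.uniform(0, 1, size=100) for s in range(5)}   # per-decision confidence
model = train_confidence_model(feats, conf, held_out=0)
predicted = model.predict(feats[0])      # confidence estimates for the unseen subject
```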

Furthermore, we found that the decision-making accuracy of users engaged in the discrimination of difficult realistic targets can be improved by using audio-visual cues preceding the target appearance, provided that their timing is right. Also, through an event-related potential (ERP) analysis of EEG we were able to consistently identify crossmodal correspondences, i.e., associations between features of different sensory modalities whereby what is experienced in one modality is affected by perceptual experiences in another; this approach proved more sensitive than traditional methods for identifying crossmodal correspondences.
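For readers unfamiliar with the technique, an event-related potential is the average of EEG epochs time-locked to a stimulus. The sketch below shows the basic epoching, baseline-correction and averaging steps in generic NumPy; it is not the project's analysis pipeline, and the parameters are illustrative.

```python
# Generic ERP computation sketch: epoch the continuous EEG around each stimulus,
# baseline-correct, and average.
import numpy as np

def compute_erp(eeg, onsets, fs, tmin=-0.2, tmax=0.8):
    """eeg: (n_channels, n_samples); onsets: stimulus sample indices; fs: sampling rate."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for o in onsets:
        if o - pre < 0 or o + post > eeg.shape[1]:
            continue                                        # skip epochs outside the recording
        ep = eeg[:, o - pre:o + post]
        ep = ep - ep[:, :pre].mean(axis=1, keepdims=True)   # baseline correction
        epochs.append(ep)
    return np.mean(epochs, axis=0)                          # (n_channels, n_timepoints)

# A crossmodal-correspondence contrast then amounts to comparing condition ERPs, e.g.:
# diff_wave = compute_erp(eeg, congruent_onsets, fs) - compute_erp(eeg, incongruent_onsets, fs)
```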

Also, in collaboration with UCL, we investigated whether confidence estimation in decision making can be improved using meta-learning methods, combining their expertise in meta-learning with our expertise in BCIs. We found that meta-learning techniques significantly improve the prediction of confidence based on data from other subjects.

Finally, in collaboration with New York University, we have been exploring the neural basis of decision making. We created a model that can predict what a subject intends to respond even before they do so. Until now, such results were obtainable only with invasive recording technologies; we achieved them using only EEG.
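A minimal sketch of the general evidence-accumulation idea follows (illustrative assumptions throughout; these are not the collaboration's models): a log-likelihood ratio is accumulated over an EEG-derived feature, and a choice is predicted as soon as the accumulated evidence crosses a bound, typically before the overt response.

```python
# Sketch of log-likelihood evidence accumulation for intention prediction.
import numpy as np
from scipy.stats import norm

def predict_choice(feature_stream, mu_a=0.1, mu_b=-0.1, sigma=1.0, bound=3.0):
    """Accumulate log p(x | choice A) - log p(x | choice B) over time.

    feature_stream: 1-D array of a decision-related EEG feature over time (assumed).
    Returns (predicted_choice, sample index at which the bound was crossed).
    """
    llr = 0.0
    for t, x in enumerate(feature_stream):
        llr += norm.logpdf(x, mu_a, sigma) - norm.logpdf(x, mu_b, sigma)
        if llr >= bound:
            return "A", t          # evidence for A crossed the bound
        if llr <= -bound:
            return "B", t
    return ("A", len(feature_stream)) if llr >= 0 else ("B", len(feature_stream))

# Toy usage: a noisy feature whose mean slightly favours choice A
rng = np.random.default_rng(2)
stream = rng.normal(0.1, 1.0, size=500)
choice, t_cross = predict_choice(stream)
print(choice, t_cross)    # the prediction is typically available well before the trial ends
```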
Exploitation Route Decision making is pervasive in today's world, with literally billions of decisions being taken every day. This project is developing technologies (e.g., brain-computer interfaces) that have the potential to significantly improve decision making in a variety of domains.

Work in this project (including its outpost and urban exploration experiments) has contributed to a prestigious $5M project entitled "Adaptive joint cognitive systems for complex and strategic decision making: building trust in human-machine teams through brain-computer-interface augmentation, social interaction and mutual learning" under the US Department of Defence's inaugural Bilateral Academic Research Initiative (BARI). The project was a partnership between Essex (UK and overall lead), USC (US lead), UCB, Harvard Medical School, UMAS and Oxford. The UK MoD funded the UK partners.
Sectors Aerospace, Defence and Marine, Digital/Communication/Information Technologies (including Software), Security and Diplomacy

 
Description This EPSRC project is a component of a prestigious 5-year $11.5M project funded under the US Department of Defence Multidisciplinary University Research Initiative (MURI) within the theme "Modeling and Analysis of Multisensory Neural Information Processing for Direct Brain-Computer Communications". The project was carried out in partnership with the University of Southern California, the University of California Berkeley, Harvard University, New York University, Cold Spring Harbor Laboratory, Essex (via EP/P009204/1), Imperial College London (via EP/P008461/1) and University College London (via EP/P009069/1). The US partners were funded by the US Department of Defence, while the UK partners were funded by the UK Ministry of Defence through EPSRC.

The MoD was involved in this project through the Defence Science and Technology Laboratory (Dstl), which assigned technical partners who closely followed the project's developments. The benefit for Dstl/MoD (and for the US DoD) in funding this work lay in understanding the current limits and future possibilities of human augmentation via neurotechnologies (in the form of brain-computer interfaces). For this reason the experimental scenarios and decision-making tasks chosen at Essex were of military interest, involving the rapid recognition of military vs non-military targets in both poorly lit urban environments and military outposts at night (with night-vision equipment). At Imperial College London the experimental scenarios involved the use of novel wearable sensors (in-ear EEG) and eye tracking to determine cognitive workload in tasks that put users under pressure (simulated racing while performing secondary tasks), simulating defence scenarios involving the use of complex machinery under distraction. UCL developed advanced machine learning algorithms for these and other applications. Additional information can be found in the ResearchFish entries for the Imperial College London and UCL subprojects. The UK team participated in a number of stakeholder events to communicate the benefits of the new technology to military audiences.

In 2018 the positive experience gained during this project led to a second US DoD/UK MoD funded project on human-machine teaming mediated by BCIs. This was a prestigious $5M project under the US Department of Defence's inaugural Bilateral Academic Research Initiative (BARI). The project was a partnership between Essex (UK and overall lead), USC (US lead), UCB, Harvard Medical School, UMAS and Oxford, and was entitled "Adaptive joint cognitive systems for complex and strategic decision making: building trust in human-machine teams through brain-computer-interface augmentation, social interaction and mutual learning". The UK partners were funded by the UK Ministry of Defence; the Essex share of the funding was approximately £930,000.
First Year Of Impact 2018
Sector Aerospace, Defence and Marine, Security and Diplomacy
 
Description Bilateral Academic Research Initiative programme
Amount £3,700,000 (GBP)
Funding ID DSTLX1000128890 
Organisation Defence Science & Technology Laboratory (DSTL) 
Sector Public
Country United Kingdom
Start 11/2018 
End 10/2021
 
Description Bioelectric Signals for Warfighter Lethality
Amount £1,174,655 (GBP)
Organisation US Army 
Sector Public
Country United States
Start 10/2020 
End 09/2023
 
Description UKRI Trustworthy Autonomous Systems Node in Trust
Amount £3,056,751 (GBP)
Funding ID EP/V026682/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 11/2020 
End 04/2024
 
Description Bioelectric Signals for Warfighter Lethality 
Organisation US Army
Country United States 
Sector Public 
PI Contribution The project evaluates hearables in a realistic, militarily relevant environment. We provide the sensors and algorithms.
Collaborator Contribution The collaborators test them in auditory paradigms, on a treadmill, and while moving.
Impact Collaborative design of new hardware for the latest hearable device prototype.
Start Year 2020
 
Description Collaboration between Essex and UCL on improving decision confidence estimates via BCIs and meta-learning 
Organisation University College London
Department Department of Computer Science
Country United Kingdom 
Sector Academic/University 
PI Contribution The Essex team has provided EEG and other physiological data acquired in decision-making experiments, and all the required signal processing and feature extraction techniques.
Collaborator Contribution Dr Dimitri Stamos and Prof Massimiliano Pontil (UCL) have provided their state-of-the-art meta-learning algorithms.
Impact A manuscript is in preparation.
Start Year 2020
 
Description Collaboration between Essex and the Pesaran Lab (NYU Center for Neural Science) on the prediction of decision intentions from EEG using log-likelihood evidence-accumulation models 
Organisation New York University
Country United States 
Sector Academic/University 
PI Contribution In this collaboration we are exploring, for the first time, to what degree log-likelihood evidence-accumulation models can be used to predict decisions in humans based on brain-activity data acquired non-invasively through EEG. Essex has decades of experience in building EEG-based brain-computer interfaces and so has all the necessary knowledge of EEG processing, as well as the experimental data, to carry out this investigation.
Collaborator Contribution Prof Pesaran's lab is a leader in applying log-likelihood evidence-accumulation models to brain signals acquired invasively from monkeys. They bring their algorithms and experience to the collaboration.
Impact A joint journal article is at an advanced stage of preparation.
Start Year 2021
 
Description Collaboration on BCI with Prof Shanechi's team at University of Southern California 
Organisation University of Southern California
Country United States 
Sector Academic/University 
PI Contribution Prof Shanechi is a collaborator in this US-UK MURI project. She and her team had not previously used EEG equipment, being more focused on invasive BCIs. The Essex team helped her set up such a facility, including acquiring the same type of system we use and making our software available to them. We later visited the lab to train her team in the use of the equipment, collaborated in the acquisition of joint data (with some participants tested at USC and some at Essex), and carried out joint analysis and publication.
Collaborator Contribution Prof Shanechi acquired a higher resolution system (256 channels) than the ones we have at Essex, making it possible to do finer analyses. In addition she has developed complementary analysis techniques and she has access to electrocorticography data.
Impact Fernandez-Vargas, J., Valeriani, D., Cinel, C., Sadras, N., Ahmadipour, P., Shanechi, M. M., Citi, L. and Poli, R. (2020) Confidence Prediction from EEG Recordings in a Multisensory Environment. In: ICBET 2020: 10th International Conference on Biomedical Engineering and Technology, 2020-09, Tokyo. All of our research (including this) is multidisciplinary, spanning engineering, computer science, neuroscience, psychology, biomedical engineering, and ergonomics.
Start Year 2018
 
Description Collaboration on Machine Learning with Prof Michael Jordan at University of California Berkeley 
Organisation University of California, Berkeley
Department Department of Statistics
Country United States 
Sector Academic/University 
PI Contribution Dr Citi (Essex) started a collaboration with Prof Michael Jordan at UC Berkeley, with a visit to Berkeley from 27 June to 24 September 2019. In complex scenarios, decision-makers face situations in which their decisions depend on a balance between their own strategy and the behaviour of other elements in the system. An extensive literature is devoted to zero-sum games of resource allocation in "defense and attack models". Of particular interest in this context are situations where each player's strategy is a best response to the strategies of the opponents: a Nash equilibrium is an assignment of strategies to the players with the property that no player can gain by unilaterally changing their own strategy. Standard gradient-based algorithms cannot guarantee convergence to local Nash equilibria due to the existence of non-Nash stationary points (a toy illustration of plain gradient descent-ascent is sketched after this entry). During his visit, Dr Citi developed with Prof Jordan a new gradient-based algorithm for finding the local Nash equilibria of two-player zero-sum games and proved that the only stationary points to which the algorithm can converge are local Nash equilibria. The new approach improves over the state of the art because it requires only gradient information, i.e. it does not require second-order derivatives, which may be difficult or computationally expensive to obtain.
Collaborator Contribution See above. The work was collaborative.
Impact Publications in preparation.
Start Year 2019
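For illustration, the snippet below runs plain simultaneous gradient descent-ascent on a toy two-player zero-sum quadratic game. This is the standard baseline whose limitations (possible convergence to non-Nash stationary points in general games) motivated the new algorithm; it is not the algorithm developed in the collaboration, and the game is chosen so that the baseline happens to behave well.

```python
# Plain simultaneous gradient descent-ascent (GDA) on the toy zero-sum game
# f(x, y) = x*y + 0.1*x**2 - 0.1*y**2, whose local Nash equilibrium is (0, 0).
# This is the standard baseline discussed above, NOT the new algorithm.

def f_grad(x, y):
    dfdx = y + 0.2 * x          # gradient for player 1 (the minimiser)
    dfdy = x - 0.2 * y          # gradient for player 2 (the maximiser)
    return dfdx, dfdy

x, y, lr = 1.0, -1.0, 0.05
for _ in range(2000):
    gx, gy = f_grad(x, y)
    x, y = x - lr * gx, y + lr * gy   # descend in x, ascend in y, simultaneously
print(round(x, 4), round(y, 4))       # spirals in towards the Nash equilibrium (0, 0)
```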
 
Description Collaboration with Imperial Dementia Research Centre 
Organisation UK Dementia Research Institute
Country United Kingdom 
Sector Charity/Non Profit 
PI Contribution We are part of the new Imperial Dementia Research Centre, a £20M centre funded by the DRI Care & Technology Programme from 2019 onwards
Collaborator Contribution Our in-ear EEG system will be further developed and used to detect and predict the progression of dementia on a 24/7 basis
Impact No outputs yet, just started
Start Year 2019
 
Description Collaboration with SONY company, Japan 
Organisation SONY
Country Japan 
Sector Private 
PI Contribution This is funded work on the analysis of ear-EEG data for entertainment systems
Collaborator Contribution SONY provided funding for research into artefact removal from ear-EEG
Impact N/A
Start Year 2020
 
Description Collaboration with SONY: Motion artefact removal and artefact classification in ear-EEG 
Organisation SONY
Country Japan 
Sector Private 
PI Contribution Characterised three key sources of artefact in daily activity, as measured by our multi-modal artefact-removal sensor: head movement, facial expressions, and walking/chewing/speaking. Successfully demonstrated the feasibility of removing artefacts from ear-EEG using the multi-modal sensor in conjunction with LMS and RLS adaptive filtering methods (a minimal sketch of the LMS approach follows this entry).
Collaborator Contribution SONY collaborated on design of the experimental protocol, and helped guide the method of evaluation for the proposed motion artefact removal system.
Impact The first stage of the project commenced in July 2020 and finished in September 2020. The aim of this stage was to establish the feasibility of the proposed motion artefact removal system. Following its success, we are now collaborating on the next stage, which will aim to develop more advanced methods of artefact removal in conjunction with the multi-modal sensor, as well as to collect more data. We have yet to publish results from this project.
Start Year 2020
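Below is a minimal sketch of least-mean-squares (LMS) adaptive noise cancellation, in which a motion reference channel is used to estimate and subtract movement artefact from a contaminated ear-EEG channel. The filter length, step size and synthetic signals are illustrative assumptions, not the project's implementation.

```python
# LMS adaptive noise cancellation sketch: use a motion reference channel to estimate
# and subtract movement artefact from a contaminated ear-EEG channel.
import numpy as np

def lms_cancel(primary, reference, n_taps=8, mu=0.01):
    """primary: ear-EEG + artefact; reference: motion signal correlated with the artefact."""
    w = np.zeros(n_taps)
    cleaned = np.zeros_like(primary)
    for n in range(n_taps, len(primary)):
        x = reference[n - n_taps:n][::-1]       # most recent reference samples
        artefact_estimate = w @ x
        e = primary[n] - artefact_estimate      # error = estimate of the clean EEG sample
        w += mu * e * x                         # LMS weight update
        cleaned[n] = e
    return cleaned

# Toy usage: synthetic 10 Hz EEG rhythm (250 Hz sampling) contaminated by a filtered
# version of the motion signal
rng = np.random.default_rng(3)
motion = rng.normal(size=5000)
eeg = np.sin(2 * np.pi * 10 * np.arange(5000) / 250)
contaminated = eeg + np.convolve(motion, [0.5, 0.3, 0.2], mode="same")
cleaned = lms_cancel(contaminated, motion)
```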
 
Title BIOSENSING ELECTRODES 
Description A dual modality sensor comprises a tissue-contact electrode having a first surface configured for receiving an electrical signal from a user's tissue when attached thereto; and a mechanical sensor overlying the cutaneous electrode and configured to sense a mechanical displacement of the first surface through the electrode. The electrode and the mechanical sensor thereby provide electrical and mechanical signals which originate from precisely the same tissue location. 
IP Reference EP3094235 
Protection Patent granted
Year Protection Granted 2016
Licensed No
Impact A new collaboration with SONY corporation
 
Title BIOSENSING ELECTRODES 
Description A dual modality sensor comprises a tissue-contact electrode having a first surface configured for receiving an electrical signal from a user's tissue when attached thereto; and a mechanical sensor overlying the cutaneous electrode and configured to sense a mechanical displacement of the first surface through the electrode. The electrode and the mechanical sensor thereby provide electrical and mechanical signals which originate from precisely the same tissue location. 
IP Reference US2016331328 
Protection Patent granted
Year Protection Granted 2016
Licensed No
Impact We have established a new collaboration with Sony
 
Title HOTTBOX: an open-source Python library for the manipulation of multi-dimensional (tensor) data 
Description HOTTBOX is a Python library for exploratory analysis and visualisation of multi-dimensional arrays of data, also known as tensors. It comprises methods ranging from standard multi-way operations through to multi-linear algebra based tensor decompositions and sophisticated algorithms for generalised multi-linear classification and data fusion such as Support Tensor Machine (STM) and Tensor Ensemble Learning (TEL). For user convenience, HOTTBOX offers a unifying API which establishes a self-sufficient ecosystem for various forms of efficient representation of multi-way data and corresponding decomposition and association algorithms. Particular emphasis is placed on scalability and interactive visualisation, to support multidisciplinary data analysis communities working on big data and tensors. HOTTBOX also provides means for integration with other popular data science libraries for visualisation and data manipulation. The source code, examples and documentation can be found at https://github.com/hottbox/hottbox (a generic illustration of the kind of multi-way operation HOTTBOX provides is sketched after this entry). 
Type Of Technology Software 
Year Produced 2021 
Open Source License? Yes  
Impact It has been downloaded and used more than 1000 times 
URL https://github.com/hottbox/hottbox
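For orientation, the generic NumPy sketch below illustrates the kind of multi-way operation that HOTTBOX wraps (mode-n unfolding and a rank-1 CP term). It deliberately does not use the HOTTBOX API itself; the repository above should be consulted for the library's own Tensor and decomposition classes.

```python
# Generic NumPy illustration of multi-way operations (mode-n unfolding and a rank-1
# CP term); not the HOTTBOX API.
import numpy as np

def unfold(tensor, mode):
    """Mode-n unfolding: arrange the mode-`mode` fibres as columns of a matrix."""
    return np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)

def rank1_term(a, b, c):
    """Outer product of three factor vectors: one term of a CP decomposition."""
    return np.einsum('i,j,k->ijk', a, b, c)

X = rank1_term(np.arange(2.0), np.arange(3.0), np.arange(4.0))     # a 2x3x4 tensor
print(unfold(X, 0).shape, unfold(X, 1).shape, unfold(X, 2).shape)  # (2, 12) (3, 8) (4, 6)
```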
 
Description DSTL MURI Show and Tell 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Other audiences
Results and Impact Dstl and EPSRC are jointly funding 4 collaborative basic research projects with the US DOD under their Multidisciplinary University Research Initiative (MURI). This supports 'world leading' scientists and engineers, from relevant and complementary disciplines, to perform the research, in order to increase understanding and to stimulate the emergence of new technology in multi-disciplinary areas of value to defence and security.

A 'show and tell' event was arranged for each of the projects on 28 June 2018 at Porton Down. The aim was for each team to provide a detailed technical overview of its research to date, to present poster stands depicting elements of the work with researchers ready to field queries, and to participate in more focused technical discussions in groups in the afternoon to explore specific research interests of Dstl staff.
Year(s) Of Engagement Activity 2018
 
Description DSTL Neuroadaptive Systems Workshop 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Industry/Business
Results and Impact Presentation to a DSTL-organised workshop on Neuroadaptive systems, with practitioners from governmental organisations
Year(s) Of Engagement Activity 2019
 
Description Dstl/MoD Future Workforce and Human Performance Programme Showcase Event (Defence Academy of the UK, Shrivenham, Swindon) 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Other audiences
Results and Impact Prof Poli gave a plenary presentation. The FWAHP Showcase Event reported on some of the key outputs of the Dstl Future Workforce and Human Performance (FWAHP) programme, within the MoD CSA research portfolio. Members of the Dstl FWAHP programme, alongside external partners (including universities such as Essex), presented, demonstrated and summarised key research activities over the last few years. The event also presented further opportunities to exploit the research and maximise its impact for Defence and Security. The meeting was attended by FWAHP Programme Board members, members of the Research Exploitation Working Groups, and other MoD & OGD stakeholders with an interest in the Future Workforce, Training & Education, Humans in Systems, Human Performance & Augmentation, and future opportunities & threats to human capability.
Year(s) Of Engagement Activity 2021
 
Description Show and Tell event organised by Dstl 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Other audiences
Results and Impact PIs from each US-UK MURI project gave a presentation on the progress so far. Prof Poli gave a presentation on this project and a presentation on the related BARI project.
Year(s) Of Engagement Activity 2021
 
Description Tutorial, "Recurrent Neural Networks: From universal function approximation to a Big Data tool" 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact The tutorial was part of the Winter School on Machine Learning (WISMAL) at the 3rd International Conference on Applications of Intelligent Systems (APPIS). The attendees were deeply interested and formed new professional relationships with the students in our own research group, leading to the sharing and development of ideas between the two groups.
Year(s) Of Engagement Activity 2020
URL http://appis.webhosting.rug.nl/2020/wismal-2020/