COG-MHEAR: Towards cognitively-inspired 5G-IoT enabled, multi-modal Hearing Aids

Lead Research Organisation: Edinburgh Napier University
Department Name: School of Computing

Abstract

Currently, only 40% of people who could benefit from Hearing Aids (HAs) have them, and most of those who do have HAs do not use them often enough. Visible HAs carry a social stigma ('fear of looking old'), they demand considerable conscious effort to concentrate on different sounds and speakers, and they make only limited use of speech enhancement - processing that makes the spoken words (often the aspect of hearing people care most about) easier to distinguish. It is not enough just to make everything louder!

To transform hearing care by 2050, we aim to completely re-think the way HAs are designed. Our transformative approach - for the first time - draws on the cognitive principles of normal hearing. Listeners naturally combine information from both their ears and eyes: we use our eyes to help us hear. We will create "multi-modal" aids which not only amplify sounds but contextually use simultaneously collected information from a range of sensors to improve speech intelligibility. For example, a large amount of information about the words a person says is conveyed visually - in the movements of the speaker's lips, their hand gestures, and so on. Current commercial HAs ignore this information, yet it could be fed into the speech enhancement process. We can also use wearable sensors (embedded within the HA itself) to estimate listening effort and its impact on the wearer, and use this to tell whether the speech enhancement process is actually helping.

Creating these multi-modal "audio-visual" HAs raises formidable technical challenges that need to be tackled holistically. Making use of lip movements traditionally requires a video camera filming the speaker, which raises privacy concerns. We can address some of these concerns by encrypting the data as soon as it is collected, and we will pioneer new approaches for processing and understanding the video data while it stays encrypted. We aim never to access the raw video data, but still to use it as a valuable source of information. To complement this, we will also investigate methods for remote lip reading that do not use a video feed at all, exploring instead the use of radio signals for remote monitoring.
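
To make the encrypted-domain processing idea concrete, the sketch below applies a simple linear model to homomorphically encrypted visual features. This is a minimal illustration only, assuming the open-source TenSEAL library (CKKS scheme) as a stand-in: COG-MHEAR does not prescribe this tooling, and all feature and weight values are hypothetical.

```python
# Minimal sketch, assuming the open-source TenSEAL library (CKKS scheme):
# a linear model is applied to ENCRYPTED lip-region features, so the
# processing side never sees the raw visual data. All values are hypothetical.
import tenseal as ts

# Wearer's device: create a CKKS context (holds the secret key) and encrypt
# a small vector of visual features extracted locally.
context = ts.context(ts.SCHEME_TYPE.CKKS,
                     poly_modulus_degree=8192,
                     coeff_mod_bit_sizes=[60, 40, 40, 60])
context.global_scale = 2 ** 40
context.generate_galois_keys()

lip_features = [0.12, -0.53, 0.77, 0.05]        # hypothetical feature values
enc_features = ts.ckks_vector(context, lip_features)

# Processing side: apply plaintext model weights directly to the ciphertext.
weights = [0.4, -0.1, 0.9, 0.3]                 # hypothetical model weights
enc_score = enc_features.dot(weights)

# Only the key holder can decrypt the resulting score.
print(f"speech-activity score: {enc_score.decrypt()[0]:.3f}")
```

The property illustrated is the essential one: the processing side operates only on ciphertexts, and decryption requires the wearer's secret key.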

Adding these new sensors, and the processing required to make sense of the data they produce, will place a significant additional power and miniaturisation burden on the HA device. Our sophisticated visual and sound processing algorithms will need to operate with minimal power and minimal delay, which we will achieve through dedicated hardware implementations that accelerate the key processing steps. In the long term, we aim for all processing to be done in the HA itself - keeping data local to the person for privacy. In the shorter term, some processing will need to be done in the cloud (as it is too power intensive), so we will create new very low latency (<10ms) interfaces to cloud infrastructure to avoid perceptible delays between when a word is "seen" being spoken and when it is heard. We also plan to exploit advances in flexible electronics (e-skin) and antenna design to make the overall unit as small, discreet and usable as possible.
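
To see why a <10ms budget is demanding for cloud offloading, the following back-of-envelope calculation breaks the round trip into stages; every stage figure except the 10ms target itself is an illustrative assumption, not a COG-MHEAR measurement.

```python
# Back-of-envelope check of the <10 ms round-trip target for cloud-assisted
# processing. Every stage figure below is an illustrative assumption, not a
# measured COG-MHEAR result; only the 10 ms target comes from the text above.
frame_samples = 32            # hypothetical audio frame size
sample_rate_hz = 16_000       # typical wideband speech sampling rate

budget_ms = {
    "frame capture":            frame_samples / sample_rate_hz * 1_000,
    "uplink (radio + network)": 2.0,
    "cloud DNN inference":      3.0,
    "downlink":                 2.0,
    "playout buffering":        0.5,
}

for stage, ms in budget_ms.items():
    print(f"{stage:26s} {ms:4.1f} ms")
print(f"{'total':26s} {sum(budget_ms.values()):4.1f} ms (target < 10 ms)")
```

Even with optimistic stage figures, the sum sits close to the ceiling, which is why dedicated hardware acceleration and eventual on-device processing are central to the plan.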

Participatory design and co-production with HA manufacturers, clinicians and end-users will be central to all of the above, guiding every decision on design, prioritisation and form factor. Our strong User Group, which includes Sonova, Nokia/Bell Labs, Deaf Scotland and Action on Hearing Loss, will serve to maximise the impact of our ambitious research programme. The outcomes of our work will be fully integrated software and hardware prototypes that will be clinically evaluated using listening and intelligibility tests with hearing-impaired volunteers in a range of modern noisy, reverberant environments. The success of our ambitious vision will be measured by how far the fundamental advances delivered by our demonstrator programme reshape the HA landscape over the next decade.

Planned Impact

Significant impact beyond the academic environment is envisaged through this multi-disciplinary programme:

*Impact on people with hearing loss*
Over 10 million people in the UK (~350 million worldwide) currently suffer from debilitating hearing loss, at a cost of ~£450M/year to the NHS, and this figure is expected to rise to 14.5 million by 2031. People with serious hearing loss often find themselves socially isolated, with a range of adverse health consequences. Even a modest improvement in hearing, however, can have a significant impact on an individual's social and work life. Our proposed technologies will deliver transformative real-time, privacy-preserving and domain-independent learning capabilities, providing robust speech intelligibility enhancement and end-user cognitive load management in the hearing aids (HAs) of 2050. Our technical work programme is focused on this contribution and the wide range of societal and individual benefits that follow from it. For example, the data we can obtain from our pilot (on/off-chip) HA fitting and clinical validation, in smart assistive care homes and other real-life environments, could potentially enable remote fitting and usage training of HAs for end-users and audiologists - resulting in resource savings and particular relevance to developing countries. In care homes, where hearing loss affects ~90% of residents, a well-functioning communication channel (even a remote one) in which emotional state can be securely sensed and transported would be an ambitious, clinically relevant use case. This would also benefit the visually impaired, as it complements the visual processing in speech perception.

*Hearing aid industry*
Our proposed audio-visual (AV) HAs can have a considerable impact on the HA industry, as demand shifts rapidly from inferior audio-only devices to future AV aids. The UK's global reputation in hearing research could thus be transformed, stimulating major global HA manufacturing. There are clear precedents for hearing science rapidly transforming hearing technology: multiple-microphone processing and frequency compression, for example, have been commercialised to great effect. COG-MHEAR foresees AV processing as the next timely step forward, as previous barriers to AV processing are being overcome: wireless 5G and Internet of Things (IoT) technologies can free computation from having to be performed on the device itself, and wearable computing devices are becoming powerful enough to perform real-time face tracking and feature extraction. AV HAs will also impact industry standards for HA evaluation and clinical standards for hearing loss assessment. Plans for realising industrial impacts are detailed in the Pathways to Impact and Workplan.

*Applications beyond hearing aids*
We foresee impact in several areas (see Impact Pathways), including cochlear implant signal processing, automatic speech recognition systems, multisensory integration, general auditory systems engineering, and clinical, computational, cognitive and auditory neuroscience. Beyond HAs, novel multimodal ecological momentary assessment tools could be developed, transforming the existing sparse, unimodal commercial systems of our User Group members, e.g. Sonova. These could be exploited to personalise the design and usability of other medical instruments, enhancing personal product experience. Our proposed wireless emotion detection system could extend to emotion-sensitive robotic assistants/companions of interest to smart care homes. Beyond health, our research will deliver a step change in the critical mass of UK engineering and physical science skills available to tackle emerging challenges in signal processing. The potential of our disruptive technology can be exploited in teleconferencing and in extremely noisy environments, e.g. dynamic situations where ear defenders are worn, such as emergency and disaster response and battlefield environments.

Publications

Adeel A (2023) Unlocking the Potential of Two-Point Cells for Energy-Efficient and Resilient Training of Deep Nets in IEEE Transactions on Emerging Topics in Computational Intelligence

Adi SE (2021) Design and optimization of a TensorFlow Lite deep learning neural network for human activity recognition on a smartphone, in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)

Ali SM (2022) Low-profile Button Sensor Antenna Design for Wireless Medical Body Area Networks, in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)

Alkhamees M (2021) User trustworthiness in online social networks: A systematic review in Applied Soft Computing

Anwar U (2023) Design and Evaluation of Wearable Multimodal RF Sensing System for Vascular Dementia Detection in IEEE Transactions on Biomedical Circuits and Systems

Areeb Q (2023) Filter bubbles in recommender systems: Fact or fallacy - A systematic review, in WIREs Data Mining and Knowledge Discovery

Chouikhi N (2022) Novel single and multi-layer echo-state recurrent autoencoders for representation learning in Engineering Applications of Artificial Intelligence

Comminiello D (2023) A New Class of Efficient Adaptive Filters for Online Nonlinear Modeling in IEEE Transactions on Systems, Man, and Cybernetics: Systems

Gao F (2022) Ellipse Encoding for Arbitrary-Oriented SAR Ship Detection Based on Dynamic Key Points in IEEE Transactions on Geoscience and Remote Sensing

Garg N (2022) Generalized Superimposed Training Scheme in IRS-Assisted Cell-Free Massive MIMO Systems in IEEE Journal of Selected Topics in Signal Processing

Garg N (2022) Generalized Superimposed Training Scheme in Cell-Free Massive MIMO Systems in IEEE Transactions on Wireless Communications

Hameed H (2023) Recognizing British Sign Language Using Deep Learning: A Contactless and Privacy-Preserving Approach in IEEE Transactions on Computational Social Systems

Hameed H (2022) Privacy-Preserving British Sign Language Recognition Using Deep Learning, in Proceedings of the Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC)

He Y (2021) Learning Polar Encodings for Arbitrary-Oriented Ship Detection in SAR Images in IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing

Kaur I (2021) An Integrated Approach for Cancer Survival Prediction Using Data Mining Techniques, in Computational Intelligence and Neuroscience

 
Description Key findings and achievements of the COG-MHEAR Award to-date include:

(1) The design, development and ongoing evaluation of a first-of-its-kind real-time, multi-modal speech enhancement prototype that exploits lip reading cues to contextually enhance speech in real noisy environments. The prototype was showcased (to around 40 participants) at an international COG-MHEAR Workshop organised as part of the premier 2022 IEEE Engineering in Medicine and Biology Society Conference (EMBC) in Glasgow, 11-15 July 2022. A commercial participant expressed an interest in discussing external investment opportunities in COG-MHEAR innovation plans, leading to ongoing discussions through relevant institutional departments.

(2) Development of the world's first open web-based demonstrator tool, which shows how recordings of speech in noisy environments can be multi-modally processed to remove background noise and make the speech easier to hear. The groundbreaking demonstrator tool works for sound-only as well as video recordings, and can be utilised by researchers to develop innovative multi-modal assistive hearing and speech communication systems. Users can listen to sample recordings or upload their own (noisy) videos or audio files to hear the difference after audio-visual processing using a deep neural network model. No uploaded data is stored: user data is erased as soon as the web page is refreshed or closed. The open demonstrator tool is being used by clinicians, academic and industry researchers, innovators and end-users globally, including through user workshops and activities organised by our COG-MHEAR User Group.

(3) The ambitious development of a robust privacy-preserving radio frequency (RF) based lip-reading framework to address privacy concerns associated with use of conventional (e.g. video camera based) sensing approaches in future multi-modal hearing aids. The new framework has a unique ability to read lips under face masks for which it employs WiFi and radars as enablers of RF sensing technology. The pioneering study has been published in the prestigious Nature Communications journal (2022) and datasets have been made openly available.

(4) A novel privacy-preserving radio-frequency based approach has been developed for British Sign Language (BSL) detection, to aid communication of hearing-aid users who use sign language.

(5) COG-MHEAR has led to the development of reliable, cost-effective, broad-coverage and energy-efficient user and environmental context-aware solutions for future multi-modal hearing-aid users with off-the-shelf WiFi communication networks. Specifically, WiFi signals have been successfully leveraged for contextual activity monitoring and indoor localisation via intelligent wireless walls, with key findings published in the prestigious Nature Portfolio journal Light: Science & Applications. Associated datasets have been made openly available as new benchmarks for the global research community.

(6) Organisation of the world's first large-scale Audio-Visual Speech Enhancement Challenge (AVSEC), utilising real-world TED Talks, as part of the leading IEEE Spoken Language Technology (SLT) Workshop, Qatar, 9-12 Jan 2023. Our teams developed a new baseline pre-trained deep neural network model which was made openly available to participants, along with raw and pre-processed audio-visual datasets - derived from real-world TED talk videos - for training and development of new audio-visual models to perform speech enhancement and speaker separation at signal-to-noise ratio (SNR) levels significantly more challenging than those typically used in audio-only scenarios. The Challenge evaluation utilised established objective measures (such as STOI and PESQ, for which scripts were provided to participants) as well as a new audio-visual intelligibility evaluation method developed by the COG-MHEAR teams for subjective evaluation with human subjects. The new baseline model, real-world datasets and subjective audio-visual intelligibility testing method continue to be exploited by researchers in speech and natural language communication and hearing assistive technology applications. Our COG-MHEAR team based at Edinburgh Napier University won first place in the Challenge on the basis of an independent subjective audio-visual evaluation carried out by researchers at the University of Edinburgh, demonstrating the efficacy of our novel audio-visual speech enhancement models compared to global benchmark models.

(7) Organisation of the 2022 UK Speech Conference in Edinburgh, with 180+ participants. This showcased COG-MHEAR's world-leading research to the wider UK speech technology community and led to ongoing development of new collaborations and networks, including a new UK Special Interest Group on Speech-based Multi-Modal Processing (co-led by the COG-MHEAR PI).

(8) COG-MHEAR has led to more than 30 key research publications (including in several leading IEEE Transactions), 5 journal special issues, 2 special sessions (at 2022 IEEE WCCI and Interspeech 2023) and 10 further funding awards to-date.
Exploitation Route The hands-on demonstration of our real-time speech enhancement prototype at the 2022 IEEE EMBC workshop stimulated discussions on current trends, future research and innovation, clinical evaluation and commercialisation challenges and opportunities to transform the current commercial Hearing Aid landscape. We made our prototype demonstrator freely available as an open testbed and it is continuing to be used by the global community for further research, evaluation and benchmarking.

COG-MHEAR led to the organisation of the world's first large-scale audio-visual speech enhancement challenge (AVSEC), with new benchmark datasets (utilizing real-world TED talks), baseline speech enhancement models and a novel subjective audio-visual intelligibility evaluation method made openly available. These are continuing to be used by researchers world-wide.

The ongoing work has led to more than 30 key research publications to-date which are being cited by researchers globally, and also several further funding awards that are continuing to broaden the impact of COG-MHEAR in a number of interdisciplinary research and application areas. The latter include six grants awarded over the past year to COG-MHEAR investigators in collaboration with external partners, for complementary research and innovation activities as part of our sustainability strategy.

The COG-MHEAR user group ensures that end-users, clinicians, and Hearing Aid industry representatives are included in all stages of research, as part of our interdisciplinary programme of participatory co-design.

Highlights over the past year included:

(i) Continuing expansion of the user group to include clinicians, audiologists, and local hearing aid users.

(ii) A COG-MHEAR PhD student was selected by the Turing Institute for one of their award-winning Data Study Groups. He is working on a Challenge run by UCL on predicting the annoyance ratings of urban soundscapes. This primarily addresses a public health problem: noise pollution affects at least 80 million EU citizens, with substantial impacts that are not well addressed by conventional noise control methods.

(iii) Accessible COG-MHEAR updates are regularly shared with our user group members (through our website, social media, email and meetings) giving details of progress and to invite input and feedback. Institutional pump priming is leveraged where possible, for example, for funding summer placements for undergraduate and postgraduate students and visiting researchers.

(iv) Collaborative and interdisciplinary training opportunities for our PhD students and postdoctoral researchers are key to ongoing sustainability of research strands that are developing in the COG-MHEAR programme. Monthly networking and progress update Workshops for PhD and postdoctoral researchers provide additional training opportunities, including through talks by external guest speakers.

(v) The range of training opportunities are widened through interdisciplinary PhD co-supervision arrangements, with students benefitting through complementary input from several institutions in addition to the User Group.
Sectors Communities and Social Services/Policy; Digital/Communication/Information Technologies (including Software); Education; Electronics; Healthcare; Leisure Activities, including Sports, Recreation and Tourism; Transport; Other

URL https://cogmhear.org/
 
Description COG-MHEAR's overall impact strategy aims to shorten the time to translation for our pioneering research, ensuring the fundamental research performed is ultimately relevant to hearing health care and practice. Within this, we aim for broad engineering, AI, health and social care impact. Our research to-date has included work applicable to healthcare areas of economic and social importance, including computer vision, speech and natural language dialogue systems, embedded robots, neuroscience, flexible electronics, and wireless systems engineering. This has led to significant further funding awarded to COG-MHEAR investigators, broadening COG-MHEAR's national and global impact in a number of key research and innovation areas. For example, a new Defence and Security Accelerator funding award by the Defence Science and Technology Laboratory has led to ongoing development of ULTRA-Earswitch: innovative tactical in-ear ultrasound-driven headphones enabling communication, noise protection and hands-free control without reducing situational awareness. The COG-MHEAR award has also led to two patents for which licensing discussions are ongoing with companies.

COG-MHEAR has delivered the innovative design, development and ongoing evaluation of a first-of-its-kind real-time, multi-modal speech enhancement prototype that exploits lip reading cues to effectively enhance speech in real noisy environments. The prototype was showcased (to around 40 participants) at an international COG-MHEAR Workshop organised as part of the premier 2022 IEEE Engineering in Medicine and Biology Society Conference (EMBC) in Glasgow, 11-15 July 2022. A commercial participant expressed an interest in discussing external investment opportunities in COG-MHEAR innovation plans, leading to ongoing discussions through relevant institutional departments.

The COG-MHEAR teams organised the world's first large-scale Audio-Visual Speech Enhancement (AVSE) Challenge as part of the 2023 IEEE Spoken Language Technology (SLT) Workshop, Qatar, 9-12 January 2023. The Challenge brought together the wider computer vision, hearing and speech research communities from academia and industry to explore novel approaches to multimodal speech-in-noise processing. Our teams developed a new baseline pre-trained deep neural network model and made this openly available to participants, along with raw and pre-processed audio-visual datasets - derived from real-world TED talk videos - for training and development of new audio-visual models to perform speech enhancement and speaker separation at signal-to-noise ratio (SNR) levels significantly more challenging than those typically used in audio-only scenarios. The Challenge evaluation utilised established objective measures (such as STOI and PESQ, for which scripts were provided to participants) as well as a new audio-visual intelligibility testing method developed by the COG-MHEAR teams for subjective evaluation with human subjects. The new baseline model, real-world datasets and audio-visual intelligibility testing method are continuing to be exploited by researchers in multi-modal hearing assistive technology and speech communication applications.

The COG-MHEAR teams also organised the 2022 UK Speech Conference in Edinburgh, with 180+ participants.
This showcased COG-MHEAR's world-leading research to the wider UK speech technology community and led to ongoing development of new collaborations and networks, including a new UK Special Interest Group on Speech-based Multi-Modal Processing (co-led by the COG-MHEAR PI). Ongoing COG-MHEAR work is likely to generate impact in wider clinical, therapeutic and diagnostic system applications in several areas, e.g. cochlear-implant signal processing, human-robot interaction, 5G-IoT, artificial intelligence and augmented/virtual reality enabled remote clinical video-consultations and self-care; improved audio-visual speech recognition systems for people with disordered speech, and development of audio-visually enhanced speech-based communication aids. The latter could improve the quality of life of people with communicative disorders, enabling them to play a full part in society. We also anticipate impact on future generation lightweight security algorithms and semantic communication systems that will be based on privacy-preserving multi-modal approaches, enabling integration of meaning, context and communication.
First Year Of Impact 2023
Sector Digital/Communication/Information Technologies (including Software); Electronics; Healthcare; Other
Impact Types Societal, Economic

 
Description Artificial Intelligence (AI) - powered dashboard for Covid-19 related public sentiment and opinion mining in social media platforms
Amount £135,104 (GBP)
Funding ID COV/NAP/20/07 
Organisation Chief Scientist Office 
Sector Public
Country United Kingdom
Start 05/2020 
End 10/2020
 
Description Closed-loop Neural Interface Technologies (Close-NIT) Network Plus
Amount £1,106,216 (GBP)
Funding ID EP/W035081/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 08/2022 
End 07/2025
 
Description Empowering Practical Interfacing of Quantum Computing (EPIQC)
Amount £2,448,091 (GBP)
Funding ID EP/W032627/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 04/2022 
End 04/2026
 
Description Facilitating health and wellbeing by developing systems for early recognition of urinary tract infections - Feather
Amount £1,100,918 (GBP)
Funding ID EP/W031493/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 11/2022 
End 10/2025
 
Description Millimetre-wave and Terahertz On-chip Circuit Test Cluster for 6G Communications and Beyond (TIC6G)
Amount £2,629,606 (GBP)
Funding ID EP/W006448/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 01/2022 
End 12/2023
 
Description Natural Language Generation for Low-resource Domains
Amount £416,848 (GBP)
Funding ID EP/T024917/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 03/2021 
End 02/2024
 
Description SNOW: Wearable Nano-Opto-electro-mechanic Systems
Amount £246,178 (GBP)
Funding ID EP/X034690/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 11/2022 
End 10/2025
 
Description ULTRA-Earswitch: Tactical in-ear ultrasound-driven headphones - communication/biometrics/noise protection and hands-free control without reducing situational awareness. Awarded to Prof Mathini Sellathurai
Amount £60,000 (GBP)
Funding ID Contract Number: DSTLX1000169225 
Organisation Defence Science & Technology Laboratory (DSTL) 
Sector Public
Country United Kingdom
Start 03/2022 
End 08/2022
 
Description Unmute: Opening Spoken Language Interaction to the Currently Unheard
Amount £970,668 (GBP)
Funding ID EP/T024976/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 12/2020 
End 11/2023
 
Description Unpacking the black box of interventions such as peer support designed to optimize mental health outcomes of family caregivers
Amount £484,380 (GBP)
Funding ID EP/X000788/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 09/2022 
End 08/2024
 
Title Multi-modal Speech Enhancement Demonstrator Tool 
Description We developed the world's first open web-based demonstrator tool, which shows how recordings of speech in noisy environments can be multi-modally processed to remove background noise and make the speech easier to hear. The demonstrator tool works for sound-only as well as video recordings, and enables researchers to develop innovative multi-modal speech and natural language communication applications. Users can listen to sample recordings or upload their own (noisy) videos or audio files to hear the difference after audio-visual processing using a deep neural network model. No uploaded data is stored: user data is erased as soon as the web page is refreshed or closed. 
Type Of Material Improvements to research infrastructure 
Year Produced 2023 
Provided To Others? Yes  
Impact This innovative demonstrator tool was showcased at an international workshop organised as part of the 2022 IEEE Engineering in Medicine and Biology Society Conference (EMBC) in Glasgow, 11-15 July. Around 40 Workshop participants (including clinical, academic and industry researchers) were provided with an interactive hands-on demonstration of the audio-visual speech enhancement tool. The tool demonstrated, for the first time, the technical feasibility of developing audio-visual algorithms that can enhance speech quality and intelligibility, with the aid of video input and low-latency combination of audio and visual speech information. This served to educate participants and demonstrated the potential of such transformative tools to extract salient information from the pattern of the speaker's lip movements and to contextually employ this information as an additional input to speech enhancement algorithms, in future multi-modal communications and hearing assistive technology applications. 
URL https://demo.cogmhear.org/
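
For readers unfamiliar with this style of processing, the sketch below shows its general shape: a time-frequency mask applied to the noisy short-time spectrum. A crude spectral-floor estimate stands in for the mask that the tool's audio-visual deep network predicts; function names and parameters here are illustrative assumptions, not the deployed system.

```python
# Minimal sketch of mask-based enhancement, the general shape of what the
# tool does. A crude spectral-floor estimate stands in for the audio-visual
# deep network's predicted mask; names and parameters are illustrative.
import numpy as np
from scipy.signal import stft, istft

def enhance(noisy: np.ndarray, fs: int = 16_000) -> np.ndarray:
    """Apply a time-frequency mask to a 1-D float waveform."""
    _, _, spec = stft(noisy, fs=fs, nperseg=512)          # analysis
    mag, phase = np.abs(spec), np.angle(spec)
    # Stand-in mask: suppress bins close to a per-frequency noise floor.
    # In the real tool this mask is predicted by a DNN from the noisy audio
    # plus lip-region video.
    floor = np.percentile(mag, 20, axis=1, keepdims=True)
    mask = np.clip(1.0 - floor / (mag + 1e-8), 0.0, 1.0)
    _, clean = istft(mag * mask * np.exp(1j * phase), fs=fs)  # synthesis
    return clean

# Usage: enhanced = enhance(noisy_waveform)  # noisy_waveform: 1-D float array
```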
 
Title World's first large-scale Audio-Visual Speech Enhancement Challenge (AVSEC): New baseline Deep Neural Network Model, Real-world Datasets and Audio-visual Intelligibility Testing Method 
Description We developed and made openly available a new benchmark pre-trained deep neural network model, real-world (TED video) datasets and a novel subjective audio-visual intelligibility evaluation method as part of the world's first large-scale Audio-Visual Speech Enhancement Challenge. Details of the benchmark model, datasets and intelligibility testing method were published in the peer-reviewed proceedings of the 2023 IEEE Spoken Language Technology (SLT) Workshop (https://ieeexplore.ieee.org/abstract/document/10023284). 
Type Of Material Improvements to research infrastructure 
Year Produced 2022 
Provided To Others? Yes  
Impact The new benchmark pre-trained model code and training and evaluation datasets were made openly available as part of the world's first large-scale Audio-Visual Speech Enhancement (AVSE) Challenge organised by our COG-MHEAR teams as part of the 2023 IEEE Spoken Language Technology (SLT) Workshop, Qatar, 9-12 January 2023. The Challenge brought together the wider computer vision, hearing and speech research communities from academia and industry to explore novel approaches to multimodal speech-in-noise processing. Our teams developed a new baseline pre-trained deep neural network model and made this openly available to participants, along with raw and pre-processed audio-visual datasets - derived from real-world TED talk videos - for training and development of new audio-visual models to perform speech enhancement and speaker separation at signal-to-noise ratio (SNR) levels significantly more challenging than those typically used in audio-only scenarios. The Challenge evaluation utilised established objective measures (such as STOI and PESQ, for which scripts were provided to participants) as well as a new audio-visual intelligibility testing method developed by the COG-MHEAR teams for subjective evaluation with human subjects. The new baseline model, real-world datasets and subjective audio-visual intelligibility testing method are continuing to be exploited by researchers in speech and natural language communication and hearing assistive technology applications. 
URL https://challenge.cogmhear.org/#/download
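
As an illustration of the objective half of the Challenge evaluation, the snippet below scores an enhanced signal against its clean reference using the open-source pystoi and pesq packages; these are assumed stand-ins for the scripts distributed to participants, and the signals are synthetic placeholders.

```python
# Sketch of the objective scoring step, assuming the open-source pystoi and
# pesq packages as stand-ins for the scripts given to participants. The
# signals below are synthetic placeholders for a clean reference and an
# enhanced system output (normally loaded from 16 kHz wav files).
import numpy as np
from pystoi import stoi
from pesq import pesq

fs = 16_000
rng = np.random.default_rng(0)
clean = rng.normal(size=fs * 3)                   # 3 s stand-in reference
enhanced = clean + 0.1 * rng.normal(size=fs * 3)  # stand-in system output

print(f"STOI: {stoi(clean, enhanced, fs, extended=False):.3f}")
print(f"PESQ (wideband): {pesq(fs, clean, enhanced, 'wb'):.2f}")
```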
 
Title 5G-Enabled Contactless Multi-User Presence and Activity Detection for Independent Assisted Living 
Description The dataset comprises a set of activities captured through wireless channel state information, using two USRP X300/X310 devices, to support a system designed to detect presence and activities among multiple subjects. The dataset is divided into 16 classes, each representing a particular number of subjects and activities. More details can be found in the readme file. 
Type Of Material Database/Collection of data 
Year Produced 2021 
Provided To Others? Yes  
Impact This benchmark dataset has enabled the COG-MHEAR team to develop and evaluate a new-generation contactless 5G-based radio-frequency (RF) sensing system to detect the presence and activities of multiple persons. The developed system operates in the 5G frequency band (3.75 GHz) and has demonstrated significant potential to estimate the environmental context. This complements audio-visual (AV) speech enhancement research being conducted in COG-MHEAR by enabling environmental context estimation in a privacy-preserving manner. 
URL http://researchdata.gla.ac.uk/id/eprint/1151
 
Title COG-MHEAR IoT (Internet-of-Things) Transceiver Demo 
Description A video demonstrating a first-of-its-kind prototype of a 5G Internet of Things (IoT) enabled hearing-aid. In the demo, the universal software radio peripheral (USRP) on the left acts as an IoT device (hearing aid), and the USRP on the right acts as an access point/base station and server for Cloud-based implementation of machine learning algorithms. The channel from the IoT device to the access point is termed the uplink channel, and the channel from the access point to the IoT device is termed the downlink channel. To enable real-time communication of audio-visual (AV) information from the IoT device to the access point, the uplink channel supports varying data-rates and hence utilises a long-term evolution (LTE) based modified frame structure developed for uplink data transmission. This supports 1.4 MHz and 3 MHz bandwidths with different modulation schemes and code-rates for error correction. The access point, by contrast, transmits only audio information to the IoT device and hence supports a fixed data rate, utilising an LTE-based modified frame structure with 1.4 MHz bandwidth. 
Type Of Material Computer model/algorithm 
Year Produced 2022 
Provided To Others? Yes  
Impact Collaborative work between COG-MHEAR partners has led to successful integration and evaluation of our audio-based Minimal Viable Product (MVP) model with the 5G-IoT Transceiver prototype, as part of an initial real-time Cloud-based AV speech enhancement framework. The IoT transceiver has also been effectively integrated with novel chaos-based lightweight encryption schemes, further demonstrating its potential for implementing future privacy-preserving multi-modal hearing aids. 
URL https://vimeo.com/675527544
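
For orientation, the helper below estimates the raw data rates such an LTE-based frame structure can carry at the two bandwidths mentioned (1.4 MHz and 3 MHz), using standard LTE numerology; the modulation order and code rate are illustrative assumptions, and control/reference overhead is ignored.

```python
# Rough raw-rate estimate for an LTE-based frame structure at the two
# bandwidths named above. Standard LTE numerology: 1.4 MHz -> 6 resource
# blocks, 3 MHz -> 15; 12 subcarriers per block; 14 OFDM symbols per 1 ms
# subframe (normal cyclic prefix). Modulation order and code rate are
# illustrative assumptions, and control/reference overhead is ignored.
def lte_like_rate_mbps(bandwidth_mhz: float,
                       bits_per_symbol: int,
                       code_rate: float) -> float:
    resource_blocks = {1.4: 6, 3.0: 15}[bandwidth_mhz]
    subcarriers = resource_blocks * 12
    symbols_per_second = 14 * 1_000
    return subcarriers * symbols_per_second * bits_per_symbol * code_rate / 1e6

# e.g. a 3 MHz uplink with 16-QAM (4 bits/symbol) and rate-1/2 coding:
print(f"{lte_like_rate_mbps(3.0, 4, 0.5):.1f} Mbit/s")   # ~5.0 Mbit/s
```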
 
Title Intelligent Wireless Walls for Contactless In-Home Monitoring 
Description The dataset concerns monitoring human activities in complex non-line-of-sight (NLOS) environments. Radio frequency (RF) sensing was employed to collect the unique channel fluctuations induced by multiple activities. The data collection hardware consists of two USRP devices, one used as a transmitter (Tx) and one as a receiver (Rx), placed so that the Tx and Rx were not in line of sight. Two scenarios were considered: a corner scenario and a multi-floor scenario. In the corner scenario, the Tx was in one corridor while the Rx was in another, with a reconfigurable intelligent surface (RIS) placed at the corner to steer the beam towards the subject; the activities were performed between the Tx and the RIS. In the multi-floor scenario, the Tx was on the 5th floor and the Rx on the 3rd floor along with the RIS; activities were performed between the RIS and the Rx. Two subjects participated in the experiments, with each activity performed for 6 seconds. The considered activities were sitting, standing, walking and empty. 
Type Of Material Database/Collection of data 
Year Produced 2022 
Provided To Others? Yes  
Impact An important development in monitoring of human activity using RF. 
URL http://researchdata.gla.ac.uk/id/eprint/1281
 
Title Intelligibility-Oriented Audio-Visual Speech Enhancement model 
Description A first-of-its-kind intelligibility-oriented deep neural network-based model has been developed for audio-visual (AV) speech enhancement. Model codes and datasets have been made available via the COG-MHEAR website to serve as a benchmark resource for the research community. 
Type Of Material Computer model/algorithm 
Year Produced 2021 
Provided To Others? Yes  
Impact Our innovative AV speech enhancement model and dataset have been made publicly available for benchmark evaluation by the research community. The model was presented at the EPSRC Clarity Workshop (16-17 Sep 2021) and also disseminated via YouTube: https://www.youtube.com/watch?v=2XU-OpfIlxY&list=PLNqx4n2qXsY_22KVZFoy9LxT6_ssxfSAS?dex=16 The interactive Workshop presentation stimulated lively discussions afterwards, with some participants requesting more information and others expressing an interest in exploiting our innovative intelligibility-oriented AV processing approach in their respective research and industry-led projects and activities. Plans for new collaborations were also discussed with some participants. 
URL https://github.com/cogmhear
 
Title Interactive COG-MHEAR AV (Audio-Visual) MVP (Minimum Viable Product) Demonstrator 
Description Demonstration of an initial laptop-based minimum viable product (MVP) of our multi-modal speech enhancement technology being developed in the COG-MHEAR research programme. This first-of-its-kind interactive prototype operates in real-time in constrained web-based video conferencing environments, using both audio-only and audio-visual (lip-reading) modalities as part of a low-latency, context-aware AV speech enhancement framework. This can generalise to a range of visual and acoustic noises by addressing the practical issues of visual imperfections in real environments. 
Type Of Material Computer model/algorithm 
Year Produced 2021 
Provided To Others? Yes  
Impact The developed real-time audio-based and audio-visual (AV) MVP demonstrators were showcased at our industry and user-focused workshops organised in 2021, and also at our first annual multi-stakeholder workshop organised in Feb 2022. The workshops provided a forum to showcase the groundbreaking work conducted in our COG-MHEAR project, and attracted multi-disciplinary audiences including national and international academics, clinicians, hearing-aid users, industry experts and end-user organisations. The 'live' MVP demonstrations stimulated lively discussions afterwards, with some participants requesting more information. Others expressed an interest in exploiting our context-aware multi-modal processing approaches in their respective academic and clinical research and industry-led projects and activities. Plans for new collaborations were discussed with some participants. The interactions between multi-disciplinary Workshop participants stimulated fresh ideas for new and complementary research directions in multi-modal hearing assistive technology. These included exploiting our wireless radio frequency (RF) and machine learning based privacy-preserving technology for British Sign Language detection and lip-reading in the presence of face masks. The MVPs have been made openly available to the research and end-user community via the COG-MHEAR website, to solicit further feedback from end-users for continuing development, evaluation and optimisation. 
URL http://demo.cogmhear.org
 
Title Non-invasive Localization using Software-Defined Radios 
Description The dataset concerns locating human activities in an office environment. Radio frequency (RF) sensing was employed to collect the unique channel fluctuations induced by multiple activities. The data collection hardware consists of two USRP devices that communicate with each other while activity takes place inside their coverage region. The USRPs are based on the National Instruments (NI) X310/X300 models, connected to two PCs by 1G Ethernet connections, with extended-bandwidth daughterboard slots that cover DC-6 GHz and up to 120 MHz of baseband bandwidth. The two PCs were equipped with Intel Core i7-7700 3.60 GHz processors, 16 GB RAM, and the Ubuntu 16.04 operating system running in a virtual machine. For wireless communication, the USRPs were equipped with VERT2450 omnidirectional antennae. One participant performed activities in a room environment for the duration of the experiment, yielding 4300 samples for seven different activities across three zones and locations. 
Type Of Material Database/Collection of data 
Year Produced 2022 
Provided To Others? Yes  
Impact An important development in use of RF for location of human activities in an office environment. 
URL https://researchdata.gla.ac.uk/1283/
 
Title Pushing the Limits of Remote RF Sensing: Reading Lips Under Face Mask 
Description The dataset concerns reading lips in a privacy-preserving manner. In particular, radio frequency (RF) sensing was used to capture the unique channel variations due to lip movements. A USRP X300 was utilised, equipped with a VERT2450 omnidirectional antenna for reception and a HyperLOG 7040 X antenna for transmission. The same experiment was repeated with a Xethru UWB radar, capturing the Doppler frequency shifts due to lip movements. We consider six classes of lip movement: five vowels (a, e, i, o, u) and one empty class in which the lips were not moving. Lips can be read even under face masks. Three subjects (one male, two female) participated in the experiments. 
Type Of Material Database/Collection of data 
Year Produced 2022 
Provided To Others? Yes  
Impact This is an important step in the ability to lip read whilst maintaining privacy and hygiene using RF. This data relates to a paper published in Nature Communications: Hameed, H., Usman, M., Tahir, A. et al. Pushing the limits of remote RF sensing by reading lips under the face mask. Nat Commun 13, 5168 (2022). https://doi.org/10.1038/s41467-022-32231-1 
URL https://researchdata.gla.ac.uk/1282/
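
The sketch below illustrates the classification task this dataset supports - mapping RF measurements to the six lip-movement classes described above - using stand-in features and an off-the-shelf scikit-learn classifier. The feature extraction and model are illustrative assumptions, not the pipeline published in the Nature Communications paper.

```python
# Minimal sketch of the six-class task this dataset supports. The random
# stand-in features and the off-the-shelf classifier are illustrative
# assumptions, not the published pipeline; in practice X would hold, e.g.,
# flattened Doppler spectrogram patches or USRP channel measurements.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

CLASSES = ["a", "e", "i", "o", "u", "empty"]

rng = np.random.default_rng(0)
X = rng.normal(size=(600, 128))                    # stand-in feature vectors
y = rng.integers(0, len(CLASSES), size=600)        # stand-in labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print(f"accuracy on stand-in data: {clf.score(X_te, y_te):.2f}")
```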
 
Description COG-MHEAR industry partnership with Sonova 
Organisation Sonova
Country United States 
Sector Private 
PI Contribution Proposed new directions in hearing assistive technology, including the ambitious development of truly cognitively-inspired, multimodal Hearing Aids. These will autonomously adapt to the nature and quality of their visual and acoustic environmental inputs, leading to enhanced intelligibility in noise with potentially reduced listening effort. Our overall goal is to collaboratively develop, test and clinically evaluate real-time, personalised, privacy-preserving audio-visual hearing-aid prototypes, including hardware and software implementations.
Collaborator Contribution Providing access to industry experts, end-users and focus groups, and advising on the commercial relevance and feasibility of innovative hearing technology being developed in the COG-MHEAR research programme. Also contributing to each work package, and providing a route to impact by benchmarking our multi-modal prototypes against commercial hearing-aid functionality throughout all design, development and evaluation stages.
Impact Three industry-led workshops were organised in 2021. Two of these were attended by multiple stakeholders from the COG-MHEAR user group, comprising hearing-aid manufacturers, clinicians and end-user organisations, in addition to the COG-MHEAR research team of computer scientists, wireless and communications engineers, and speech processing and hearing science researchers. The multi-disciplinary collaborations have enabled COG-MHEAR researchers to learn from industry experts and work closely with end-users to holistically address a full range of technical, privacy and usability challenges related to user-led co-design, evaluation and commercialisation of future multi-modal hearing technology. The engagements have further led to identification of applications beyond hearing-aids that could benefit from related COG-MHEAR technology, such as novel multimodal ecological momentary assessment tools to transform existing sparse, unimodal commercial systems used by Sonova. These could enable the personalisation of design and usability of other medical instruments to enhance personal product experience. Further workshops in 2022 involved representatives from Sonova in user group discussions, as well as key advice from Sonova about development of the COG-MHEAR minimum viable product. This included a workshop on 5 August 2022 (2-4pm) held with Dr Peter Derleth of Sonova, for expert feedback on scaling up our real-time Minimum Viable Product demonstrator. A visit to the Sonova labs is also planned to connect the ENU team with Sonova's AI team to pursue collaborative research discussions.
Start Year 2021
 
Description Dr Cosimo Ieracitano and Prof Carlo Morabito 
Organisation University of Reggio Calabria
Country Italy 
Sector Academic/University 
PI Contribution Collaboratively explored new explainable deep neural network (DNN) based approaches to inform audio-visual speech enhancement research as part of workpackage (WP) 1.
Collaborator Contribution Complementary expertise in development of explainable multi-modal DNNs.
Impact One jointly-authored research paper has resulted from the collaboration to-date, as part of Workpackage (WP) 1 of our COG-MHEAR research programme: Ieracitano, C., Mammone, N., Hussain, A. Morabito F.C,. A novel explainable machine learning approach for EEG-based brain-computer interface systems. Neural Comput & Applic (2021). https://doi.org/10.1007/s00521-020-05624-w
Start Year 2021
 
Description Dr Faiyaz Doctor 
Organisation University of Essex
Country United Kingdom 
Sector Academic/University 
PI Contribution Hosted a talk by Dr Faiyaz Doctor, giving details of his work on Fuzzy Systems.
Collaborator Contribution Dr Faiyaz Doctor, School of Computer Science and Electronic Engineering at the University of Essex, gave a talk about fuzzy systems to the COG-MHEAR teams. This led to ongoing collaborations with the COG-MHEAR team which is exploiting fuzzy logic to learn the environment and user context and enhance interpretability of deep neural network based audio-visual speech enhancement models.
Impact Ongoing
Start Year 2022
 
Description Dr Robert Adam, Heriot-Watt University 
Organisation Heriot-Watt University
Country United Kingdom 
Sector Academic/University 
PI Contribution Dr Robert Adam, Associate Professor in Linguistics, Interpreting, BSL and Deaf Studies at Heriot Watt University, gave an invited talk at the COG-MHEAR Workshop about deaf culture and history. This led to follow-on collaborative research discussions on deaf people's interactions through and with technologies, including new networking opportunities with the Signs Group at HWU, which includes deaf and hearing researchers from the UK, Belgium, Denmark, Finland, India, Norway, Australia and the US.
Collaborator Contribution Dr Robert Adam, Lecturer in Linguistics, Interpreting, BSL and Deaf Studies at Heriot Watt University, gave a talk about deaf culture and history. His experience and insights as a deaf academic, particularly in the area of technology use, are helpful to the COG-MHEAR research programme.
Impact Ongoing.
Start Year 2023
 
Description Dr Simone Scardapane 
Organisation Sapienza University of Rome
Country Italy 
Sector Academic/University 
PI Contribution Collaboratively explored the challenge of addressing fairness with graph representation learning, as part of workpackage (WP) 1 of our COG-MHEAR research programme.
Collaborator Contribution Complementary expertise in fair and interpretable artificial intelligence (AI) models, led to the collaborative development of a novel approach to distributed "fairer" models, in the form of a biased data augmentation technique that modifies the training data to reduce the predictability of its sensitive attributes.
Impact One jointly-authored research paper has resulted from the collaboration to-date, as part of WP 1 of our COG-MHEAR research programme: I. Spinelli, S. Scardapane, A. Hussain and A. Uncini, "FairDrop: Biased Edge Dropout for Enhancing Fairness in Graph Representation Learning," in IEEE Transactions on Artificial Intelligence, doi: 10.1109/TAI.2021.3133818 (2021)
Start Year 2021
 
Description Earswitch 
Organisation Earswitch Ltd
Country United Kingdom 
Sector Private 
PI Contribution Ongoing discussions about the ways in which the Earswitch technology could be used in multi-modal hearing technology.
Collaborator Contribution A talk about his work, and ongoing discussions about further collaboration on use of the technology.
Impact Ongoing.
Start Year 2022
 
Description Enduser organisations 
Organisation Action on Hearing Loss
Country United Kingdom 
Sector Charity/Non Profit 
PI Contribution Discussions on transformative hearing-assistive technology demonstrators being developed in COG-MHEAR. Participatory co-design with end-users is a central philosophy of our work programme, with the COG-MHEAR User Group continuously recruited to represent different stakeholder perspectives. This is essential for maximising usability and uptake prospects, and for understanding end-user privacy issues.
Collaborator Contribution Proactive engagement and participation of end-users to shape the design, delivery, dissemination, implementation and impact of our research. End-user feedback on acceptability has affected key design and technology choices - ranging from usability choices to the privacy-preserving properties of algorithms deployed in our minimum-viable prototype demonstrator. This has enabled the consortium to identify and address potential usability barriers, for increased uptake of our envisaged technology.
Impact Two user-led workshops were organised over the past year where our technology prototypes were also showcased. The interactive demonstrations and networking discussions enabled end-users to be involved in all stages of our research in a programme of participatory design in order to help meet end-user expectations and design specifications. Our clinical research partners provided advice on the experimental design and analysis aspects of prototype evaluation and our industrial partners provided technical support for benchmarking our new multi-modal prototypes with commercial hearing-aid functionality, throughout the development and validation stages. The user-led Workshops also suggested new practical applications and use cases to evaluate our developed prototypes for wireless sensing and multi-modal speech enhancement, including automatic speech recognition, privacy-preserved British Sign Language detection, lip-reading in the presence of face-masks, human health and activity monitoring, and therapeutic and diagnostic systems. These have led to successful pilot studies with findings being reported in research papers currently in preparation.
Start Year 2021
 
Description IEEE UKRI Industry Applications Society (IAS) Chapter 
Organisation IEEE Industry Applications Society
Country United States 
Sector Charity/Non Profit 
PI Contribution Prof Amir Hussain is the chair of the IEEE UKRI Industry Applications Society (IAS) Chapter
Collaborator Contribution Sponsorship of the monthly COG-MHEAR workshops, plus provision of speakers and partners from industry.
Impact -
Start Year 2022
 
Description Institute for Integrated Micro and Nano Systems 
Organisation University of Edinburgh
Department Institute for Integrated Micro and Nano Systems
Country United Kingdom 
Sector Academic/University 
PI Contribution Ongoing discussions about the use of the Institute's technology and facilities.
Collaborator Contribution Prof Adam Stokes, of the Institute for Integrated Micro and Nano Systems, University of Edinburgh, gave a talk detailing the range of robots and associated technologies being developed. The Institute is also now a research partner.
Impact Ongoing.
Start Year 2022
 
Description Prof Ashiq Anjum and Prof Huiyu Zhou 
Organisation University of Leicester
Country United Kingdom 
Sector Academic/University 
PI Contribution Collaborative development of a real-time Edge-based Artificial Intelligence (AI) platform as part of workpackage (WP) 2 of our COG-MHEAR research programme.
Collaborator Contribution Collaborative PhD project has led to the development of an innovative Cloud-based video analytics system using orientation fusion and convolutional neural networks for scalable object recognition. This has demonstrated significantly improved visual recognition accuracy under challenging conditions. This has informed ongoing work in WP2 of our research programme.
Impact One jointly-authored research paper has resulted from the collaboration to-date, as part of WP 2 of our COG-MHEAR research programme: Yaseen M.U, Anjum A, Fortino G, Liotta A, Hussain A, Cloud based scalable object recognition from video streams using orientation fusion and convolutional neural networks, Pattern Recognition, Volume 121, Jan 2022, https://doi.org/10.1016/j.patcog.2021.108207.
Start Year 2021
 
Description Prof Bin Luo 
Organisation Anhui University
Country China 
Sector Academic/University 
PI Contribution Collaborative work exploring computer vision approaches to more effectively extract and track visual features as part of WorkPackage (WP) 1 of our research programme.
Collaborator Contribution Complementary expertise in attribute-guided deep neural architectures and enhanced deep neural networks for visual tracking.
Impact Two jointly-authored research papers have resulted from the collaboration to-date, complementing Workpackage (WP) 1 of our COG-MHEAR research programme. Specifically: (i) Collaborative formulation of an attribute-guided deep neural architecture to address object re-identification challenges arising from large intra-class variation caused by view variations and illumination changes, and from inter-class similarity [1]. [1] Li H, Lin X, Zheng A, Li C, Luo B, He R, Hussain A, "Attributes Guided Feature Learning for Vehicle Re-Identification," IEEE Transactions on Emerging Topics in Computational Intelligence (2021), doi: 10.1109/TETCI.2021.3127906. (ii) Collaborative development of an enhanced deep neural network (DNN) approach for visual tracking, termed the domain activation mapping guided network [2], which addresses the problem that conventional DNN-based visual trackers are easily influenced by imbalanced background and foreground information in limited training samples. This informed audio-visual speech enhancement work in WP1. [2] Tu Z, Zhou A, Gan C, Jiang B, Hussain A, Luo B, A novel domain activation mapping-guided network (DA-GNT) for visual tracking, Neurocomputing, Volume 449, 2021, Pages 443-454, https://doi.org/10.1016/j.neucom.2021.03.05
Start Year 2021
 
Description Prof Hui Yu 
Organisation University of Portsmouth
Country United Kingdom 
Sector Academic/University 
PI Contribution Follow-on discussions, impacted/fed into the continuing review/development of our Minimum Viable Product Roadmap
Collaborator Contribution A talk about relevant aspects of Prof Hui Yu's research given to the COG-MHEAR teams, explaining advances in immersive and augmented reality, especially emotion sensing and portrayal.
Impact Ongoing
Start Year 2022
 
Description Prof João Paulo Papa 
Organisation Sao Paulo State University
Country Brazil 
Sector Academic/University 
PI Contribution Collaborative pioneering work on development of low-energy cortical graph neural-networks for multi-modal speech enhancement.
Collaborator Contribution Provided complementary expertise to evaluate the robustness of novel low-energy cortical graphical models with potential for on-chip hearing-aid implementation.
Impact Jointly-authored research paper submitted to a peer-reviewed journal (pre-print available at: https://arxiv.org/pdf/2202.04528.pdf)
Start Year 2021
 
Description Prof Kaizhu Huang 
Organisation Duke Kunshan University
Country China 
Sector Academic/University 
PI Contribution Collaboratively explored enhancements to address training, optimisation and generalisation capabilities of deep neural network (DNN) models to inform the development of an innovative real-time audio-visual (AV) speech enhancement framework as part of workpackage (WP) 1 of our COG-MHEAR research programme.
Collaborator Contribution Complementary expertise in utilising latent distributions to enhance generative adversarial networks (GANs) and formulation of generalised zero-shot learning methods for low-latency audio-visual (AV) speech enhancement. This informed the development of our real-time audio-visual (AV) minimum viable product (MVP) demonstrator as part of WP1.
Impact Three jointly-authored research papers have resulted from the collaboration to-date, complementing Workpackage (WP) 1 of our COG-MHEAR research programme. Specifically: (i) Simple latent distributions to enhance GANs [1]. [1] Zhang, S., Huang, K., Qian, Z., Hussain, A. Improving generative adversarial networks with simple latent distributions. Neural Comput & Applic 33, 13193-13203 (2021). https://doi.org/10.1007/s00521-021-05946-3 (ii) An artificial immune networks-based approach to optimise shallow machine learning (ML) classifiers [2]. [2] Kanwal S, Hussain A, Huang K, Novel Artificial Immune Networks-based optimization of shallow machine learning (ML) classifiers, Expert Systems with Applications, Volume 165, 2021. https://doi.org/10.1016/j.eswa.2020.113834 (iii) To address the practical challenge of limited availability of labelled image samples, as well as overfitting issues with conventional zero-shot learning (ZSL) methods, a coarse-grained generalised ZSL method was developed with a self-focus mechanism - specifically, a focus-ratio that introduces the importance of each dimension into the model optimisation process [3]. [3] Yang G, Huang K, Zhang R, Goulermas J.Y, Hussain A, Coarse-grained generalised zero-shot learning with efficient self-focus mechanism, Neurocomputing, Volume 463, 2021, Pages 400-410, https://doi.org/10.1016/j.neucom.2021.08.027
Start Year 2021
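The focus-ratio idea referenced in [3] above can be illustrated with a short, hedged sketch: a learnable per-dimension weight vector, normalised to sum to one, that lets optimisation emphasise informative embedding dimensions when comparing samples to class prototypes. The class and parameter names below are illustrative assumptions, not the published architecture.

import torch
import torch.nn as nn

class SelfFocusDistance(nn.Module):
    # Hypothetical module: weighted squared distance between sample
    # embeddings and class prototypes, with a learnable "focus-ratio".
    def __init__(self, dim: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(dim))  # one score per dimension

    def forward(self, x: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
        # Softmax turns the scores into a focus-ratio summing to one, so
        # some dimensions count more in the distance than others.
        focus = torch.softmax(self.logits, dim=0)        # (dim,)
        diff = x.unsqueeze(1) - prototypes.unsqueeze(0)  # (n, k, dim)
        return (focus * diff ** 2).sum(dim=-1)           # (n, k) distances

# Usage: distances from 8 samples to 5 class prototypes in a 64-d space.
d = SelfFocusDistance(64)(torch.randn(8, 64), torch.randn(5, 64))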
 
Description Prof Yu Tsao 
Organisation Academia Sinica
Country Taiwan, Province of China 
Sector Academic/University 
PI Contribution Collaboratively explored ideas for the development of a multi-lingual, real-time framework based on deep neural network models for audio-visual speech enhancement, as part of Workpackage (WP) 1 of our COG-MHEAR research programme.
Collaborator Contribution Complementary expertise in multi-modal speech processing led to the collaborative development of a novel deep neural network model integrating local and global attention, with promising audio-based speech enhancement results.
Impact One jointly-authored research paper has been submitted to a peer-reviewed journal, as part of WP1 of our COG-MHEAR research programme: Hussain T., Wang W. C., Gogate M., Dashtipour K., Tsao Y., Lu X., Adeel A., and Hussain A., "A novel Temporal Attentive Pooling based Convolutional Recurrent Neural Network for Acoustic Signal Enhancement," revision submitted to IEEE Transactions on Artificial Intelligence, 2021 (pre-print available at: https://arxiv.org/abs/2201.09913). A minimal illustrative sketch of the temporal attentive pooling idea follows this entry.
Start Year 2021
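As a reader aid, temporal attentive pooling can be sketched as attention-weighted pooling over time frames: each frame keeps its local representation while a softmax over frame scores supplies global, utterance-level context. This is a minimal, assumed illustration; the layer sizes and names are hypothetical and this is not the published CRN architecture.

import torch
import torch.nn as nn

class TemporalAttentivePooling(nn.Module):
    # Hypothetical layer: scores each time frame, softmaxes the scores over
    # time, and pools the frames into a single vector using those weights.
    def __init__(self, feat_dim: int):
        super().__init__()
        self.score = nn.Linear(feat_dim, 1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, time, feat_dim), e.g. recurrent-layer outputs.
        weights = torch.softmax(self.score(frames), dim=1)  # (batch, time, 1)
        # The weighted sum emphasises the most informative frames.
        return (weights * frames).sum(dim=1)  # (batch, feat_dim)

# Usage: pool 100 frames of 256-dim features into one utterance vector.
pooled = TemporalAttentivePooling(256)(torch.randn(4, 100, 256))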
 
Description Signs at Heriot-Watt University 
Organisation Heriot-Watt University
Country United Kingdom 
Sector Academic/University 
PI Contribution Invited Dr Robert Adam of Heriot-Watt University to give a talk about deaf culture and history at the COG-MHEAR Workshop.
Collaborator Contribution The invited talk by Dr Robert Adam led to proposed interaction on the Signs@HWU project, 'Deaf people's interactions through and with technologies' (2023-2027). This has opened new networking opportunities with the Signs Group, which includes deaf and hearing researchers from the UK, Belgium, Denmark, Finland, India, Norway, Australia and the US.
Impact Project due to start in May 2023
Start Year 2023
 
Description The Collaborative Research Centre (CRC) Hearing Acoustics 
Organisation Carl von Ossietzky University of Oldenburg
Country Germany 
Sector Academic/University 
PI Contribution COG-MHEAR researchers have visited Prof Volker Hohmann and colleagues at the Collaborative Research Centre (CRC) Hearing Acoustics at the University of Oldenburg, Germany, to exchange ideas about the creation of ecologically-valid virtual conversational scenarios for use in designing and testing audio-visual speech enhancement models for future multi-modal hearing technology.
Collaborator Contribution A talk by Prof Hohmann to the COG-MHEAR teams about the lab's unique setup and function, plus an invitation to visit the lab in person.
Impact Ongoing.
Start Year 2022
 
Description The EPSRC Clarity project organising audio-based Hearing Aid Challenges 
Organisation University of Nottingham
Country United Kingdom 
Sector Academic/University 
PI Contribution We collaborated with the EPSRC Clarity Challenge through Prof M. Akeroyd, who is a Co-Investigator (CI) on both COG-MHEAR and the Clarity project.
Collaborator Contribution Attendance and presentations at Workshop meetings. At the 2021 Clarity Challenge, we demonstrated a promising alternative to conventional (e.g. mean squared error based) cost functions: a first-of-its-kind 'intelligibility-oriented' deep neural network-based audio-visual (AV) speech enhancement model (a minimal illustrative sketch of this loss formulation follows this entry). This has underpinned our real-time AV minimum-viable product (MVP) demonstrator.
Impact Conference paper: Hussain, T., Gogate, M., Dashtipour, K. and Hussain, A., 2021. Towards Intelligibility-Oriented Audio-Visual Speech Enhancement. in: The Clarity Workshop on Machine Learning Challenges for Hearing Aids (Clarity-2021) https://claritychallenge.github.io/clarity2021-workshop/papers/Clarity_2021_paper_hussain.pdf
Start Year 2021
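To make the contrast with conventional cost functions concrete, the sketch below compares a standard mean-squared-error objective with a crude 'intelligibility-oriented' surrogate: the normalised cross-correlation between short-time segments of clean and enhanced speech, which is the core quantity behind STOI-style intelligibility measures. This is an assumed, simplified illustration, not the loss used in the cited paper.

import torch

def mse_loss(enhanced: torch.Tensor, clean: torch.Tensor) -> torch.Tensor:
    # Conventional objective: sample-wise error, which does not
    # necessarily track how intelligible the enhanced speech is.
    return torch.mean((enhanced - clean) ** 2)

def intelligibility_loss(enhanced: torch.Tensor, clean: torch.Tensor,
                         seg_len: int = 512) -> torch.Tensor:
    # Split waveforms of shape (batch, samples) into short-time segments.
    n_seg = enhanced.shape[-1] // seg_len
    e = enhanced[..., :n_seg * seg_len].reshape(-1, n_seg, seg_len)
    c = clean[..., :n_seg * seg_len].reshape(-1, n_seg, seg_len)
    # Zero-mean each segment, then take the normalised cross-correlation,
    # a crude stand-in for the per-band correlations used by STOI.
    e = e - e.mean(dim=-1, keepdim=True)
    c = c - c.mean(dim=-1, keepdim=True)
    corr = (e * c).sum(dim=-1) / (e.norm(dim=-1) * c.norm(dim=-1) + 1e-8)
    # Training maximises the mean correlation, so the loss is its negation.
    return -corr.mean()

# Usage: both losses on a batch of 4 one-second (16 kHz) waveforms.
clean = torch.randn(4, 16000)
enhanced = clean + 0.1 * torch.randn(4, 16000)
print(mse_loss(enhanced, clean), intelligibility_loss(enhanced, clean))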
 
Description The Microelectronics lab (meLAB) 
Organisation University of Glasgow
Country United Kingdom 
Sector Academic/University 
PI Contribution Knowledge of the meLAB facilities and research can assist in the development of COG-MHEAR research, especially wearable and implantable technology.
Collaborator Contribution Prof Hadi Heidari, Professor of Nanoelectronics and founder of the Microelectronics Lab (meLAB) at the University of Glasgow, described his group's work, which includes surgical and wearable applications.
Impact Ongoing
Start Year 2022
 
Description 13 June 2022: Mini Audio-Visual Hearing Aid workshop 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact An audiologist who uses hearing aids took part in an online discussion with COG-MHEAR researchers. This gave key information on the practicalities of introducing new technology to hearing-aid users, and indications of how hardware should be developed to ensure maximum uptake.
Year(s) Of Engagement Activity 2022
URL https://cogmhear.org/index.html
 
Description 7 December 2022: Pilot in-person hearing aid user workshop 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Study participants or study members
Results and Impact The COG-MHEAR teams showed their hardware and software developments to 3 hearing-aid users who attended in person. The users gave their views about the main problems with their existing hearing technology, which helped to ensure that the COG-MHEAR research is developed in useful directions. They also gave feedback on the developing technology demonstrations. This was very useful in showing the researchers that various ways of interacting with the demonstrations are needed, depending on the preferences of each individual hearing-aid user.
Year(s) Of Engagement Activity 2022
URL https://blog.cogmhear.org/
 
Description Annual reviews 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Other audiences
Results and Impact Annual review of COG-MHEAR research to gain feedback from stakeholders and the International Advisory Board. This ensures that experts in the field check the research progress and outcomes, and comment on the research direction for the coming year.
Year(s) Of Engagement Activity 2022,2023
URL https://blog.cogmhear.org/first-year-of-cogmhear-research
 
Description Blog posts 
Form Of Engagement Activity Engagement focused website, blog or social media channel
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Patients, carers and/or patient groups
Results and Impact A series of blogs were published to provide an informal and engaging means of updating visitors to the COG-MHEAR website. Initial blogs summarised key points from the main workshops held in the first year of the project. The @cogmhear Twitter account provided another channel for networking and disseminating our user-led events, monthly researcher workshops, and project publications. These provided further opportunities for engagement and participation, enabling end-users of the research and other stakeholders to shape its design, delivery, dissemination, implementation and impact. The published blogs and social media engagement led to requests for information on how COG-MHEAR technology can be harnessed by future hearing-aid end-users. Our COG-MHEAR researchers and collaborators reported increased interest in exploiting wireless and multi-modal processing in related subject areas, including automatic speech recognition, British Sign Language detection, human health and activity monitoring, and therapeutic and diagnostic systems.
Year(s) Of Engagement Activity 2021,2022
URL https://blog.cogmhear.org/
 
Description Blogs about COG-MHEAR activities and speakers 
Form Of Engagement Activity Engagement focused website, blog or social media channel
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Public/other audiences
Results and Impact A series of blogs were published to provide an informal and engaging means of updating visitors to the COG-MHEAR website. Initial blogs summarised key points from the main workshops held in the first year of the project. The series has since developed with the addition of plain-English blogs about talks given to the COG-MHEAR teams, as well as engagement activities. The LinkedIn channel (linkedin.com/in/cog-mhear-research-programme-55a016223) and @cogmhear Twitter account provided other dissemination and networking channels for details of events, publications and opportunities to take part in the research. The published blogs and social media engagement led to requests for information on how COG-MHEAR technology can be harnessed by future end-users. Our COG-MHEAR researchers and collaborators also reported increased interest in exploiting wireless and multi-modal processing in related subject areas, including automatic speech recognition, British Sign Language detection, human health and activity monitoring, and therapeutic and diagnostic systems.
Year(s) Of Engagement Activity 2022,2023
URL https://blog.cogmhear.org/blog
 
Description COG-MHEAR website 
Form Of Engagement Activity Engagement focused website, blog or social media channel
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Other audiences
Results and Impact Website to disseminate the COG-MHEAR research, including blogs. Reached around 200 people in the first year (March 2021-January 2022).
Year(s) Of Engagement Activity 2021,2022
URL https://cogmhear.org/
 
Description First annual COG-MHEAR User Engagement Workshop 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Third sector organisations
Results and Impact The aim of the workshop was to discuss the technology and consider how potential barriers to accepting the new hearing technology could be overcome; to develop privacy preserving models; and to explore public perceptions of the proposed hearing aid technology.
Year(s) Of Engagement Activity 2021
URL https://blog.cogmhear.org/
 
Description Industry-centred Workshop 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact Representatives from the global hearing-aid manufacturer Sonova provided key insights and recommendations on technical hearing-aid (HA) design and usability aspects, including listening with binaural HAs and the importance of information from both ears, as well as highlighting considerations for wireless information transmission. They also provided advice on clinical evaluation and on the processes and challenges of moving from concept to market. An initial version of an audio-based minimum-viable product (MVP) demonstrator of our envisaged multi-modal HA, together with its future collaborative development plan, was showcased by COG-MHEAR researchers for early feedback from industry experts and end-users. The presentations stimulated lively discussions afterwards, with some participants requesting more information and others expressing an interest in exploiting our innovative, contextual multi-modal processing approach in academic and industry-led projects and activities. Plans for new collaborations were also discussed with some participants.
Year(s) Of Engagement Activity 2021
URL https://blog.cogmhear.org/
 
Description July 2022 EMBC Workshop 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Other audiences
Results and Impact A workshop at the IEEE Engineering in Medicine and Biology Conference (EMBC). This was a showcase for the range of COG-MHEAR work in progress, including a real-time multi-modal speech enhancement prototype that can exploit lip-reading cues to effectively enhance speech in real noisy environments.
Year(s) Of Engagement Activity 2022
URL https://blog.cogmhear.org/hearing-technology-showcase-embc-2022
 
Description Multistakeholder workshop 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Other audiences
Results and Impact Our first annual multi-stakeholder workshop, organised on 23 February 2022, served as a multidisciplinary showcasing forum for our innovative COG-MHEAR research. The Workshop programme included keynote talks by the project PI and Work-Package Leads, an interactive poster session showcasing collaborative research by postdoctoral and doctoral researchers, and a real-time demonstration of our multi-modal minimum-viable product (MVP) operating in live web-based video conferencing environments. This was well received by participants, including clinicians, end-users, industry representatives, and leading national and international academics and researchers. The presentations stimulated lively discussions afterwards, with some participants requesting more information and others expressing an interest in exploiting our innovative, contextual multi-modal processing approach in their respective academic and clinical research and industry-led projects and activities. Plans for new collaborations were also discussed with some participants.
Year(s) Of Engagement Activity 2022
URL https://cogmhear.org/
 
Description September 2022 UK Speech Workshop 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Other audiences
Results and Impact Presented at poster sessions during the 2022 UK Speech conference. This resulted in new contacts and collaborative opportunities with industry and academia.
Year(s) Of Engagement Activity 2022
URL https://blog.cogmhear.org/blog
 
Description Wearing audio-visual hearing aids: a workshop with audiologists, clinicians and an industry representative, most of whom were hearing aid users 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact A discussion about the practicalities of wearing the multi-modal hearing aids being developed in the COG-MHEAR research programme. The participants, most of whom were hearing-aid users themselves, joined the workshop remotely on Teams; they included clinicians and audiologists, plus an industry expert from Sonova. The wealth of expertise from their own and their clients' use of hearing aids was evident and valuable.
The suggestions and queries that arose have been important in focusing the research programme to ensure that privacy is preserved with the new hearing aids, and to address issues around wearability and usability.
The results of the workshop were presented as a paper and poster at UK Speech 2022 on 6th September 2022 at the University of Edinburgh.
Year(s) Of Engagement Activity 2022
URL https://cogmhear.org/
 
Description Workshop with Sonova 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Other audiences
Results and Impact Interactive discussions on the ongoing development of a multi-modal hearing-aid prototype between industry partners at Sonova, the COG-MHEAR research team and user group members. Researchers took on board expert advice and feedback from industry experts on commercial hearing-aid design, evaluation and usability challenges. These informed the continuing development, optimisation and evaluation of our minimum-viable product (MVP) demonstrator, which was showcased at the Workshop. Plans for a future hearing-aid end-user-led workshop were also discussed and agreed.
Year(s) Of Engagement Activity 2022