Machine Learning for Hearing Aids: Intelligent Processing and Fitting

Lead Research Organisation: University of Cambridge
Department Name: Engineering

Abstract

Current hearing aids suffer from two major limitations:

1) hearing aid audio processing strategies are inflexible and do not adapt sufficiently to the listening environment,
2) hearing tests and hearing aid fitting procedures do not allow reliable diagnosis of the underlying nature of the hearing loss and frequently lead to poor fitting of devices.

This research programme will use new machine learning methods to revolutionise both of these aspects of hearing aid technology, leading to intelligent hearing devices and testing procedures that actively learn about a patient's hearing loss, enabling more personalised fitting.

Intelligent audio processing

The optimal audio processing strategy for a hearing aid depends on the acoustic environment. A conversation held in a quiet office, for example, should be processed differently from one held in a busy reverberant restaurant. Current high-end hearing aids do switch between a small number of processing strategies based upon a simple acoustic environment classification system that monitors basic aspects of the incoming audio. However, the classification accuracy is limited, which is one of the reasons why hearing devices perform very poorly in noisy multi-source environments. Future intelligent devices should be able to recognise a far larger and more diverse set of audio environments, possibly using wireless communication with a smartphone, and should use this information to inform the way the sound is processed. The purpose of the first arm of the project is to develop algorithms that will facilitate the development of such devices.
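As a toy illustration of environment-driven strategy switching — a sketch only, with hypothetical preset names and a deliberately crude nearest-centroid classifier, not anything a real device would ship — the idea can be expressed as:

```python
import numpy as np

# Hypothetical preset names; real devices use proprietary strategy banks.
PRESETS = {"quiet_office": "mild_compression",
           "busy_restaurant": "strong_noise_reduction"}

def band_energies(frame, n_bands=8):
    """Log-energy in equal-width frequency bands of one audio frame."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.log([b.mean() + 1e-12 for b in bands])

def classify(frame, centroids):
    """Nearest-centroid classification of the acoustic environment,
    returning the label and the processing preset it maps to."""
    feats = band_energies(frame)
    label = min(centroids, key=lambda k: np.linalg.norm(feats - centroids[k]))
    return label, PRESETS[label]
```

In practice the centroids would be learned from large labelled corpora and the features would be far richer than band log-energies; the point is only that a classifier output can directly select the processing strategy.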

One focus will be a class of sounds called audio textures: richly structured but temporally homogeneous signals. Examples include diners babbling at a restaurant, a train rattling along a track, wind howling through the trees, and water running from a tap. Audio textures are often indicative of the environment, and they therefore carry valuable information about the scene that could be harnessed by a hearing aid. Moreover, textures often corrupt target signals, and their suppression can help the hearing impaired. We will develop efficient texture recognition systems that can identify the noises present in an environment. Then we will design and test bespoke real-time noise reduction strategies that utilise information about the audio textures present in the environment.
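One simple way a recognised texture could inform noise reduction — shown here as an illustrative sketch using classical spectral subtraction, not the bespoke strategies the project will design — is to subtract the texture's average magnitude spectrum from the incoming audio:

```python
import numpy as np

def estimate_noise_spectrum(texture, n_fft=512, hop=256):
    """Average magnitude spectrum of a stored texture exemplar (e.g. babble)."""
    frames = [texture[i:i + n_fft] for i in range(0, len(texture) - n_fft, hop)]
    return np.mean([np.abs(np.fft.rfft(f)) for f in frames], axis=0)

def spectral_subtract(noisy, noise_mag, n_fft=512):
    """Frame-wise spectral subtraction: shrink each bin's magnitude by the
    texture's average magnitude, keep the noisy phase, resynthesise."""
    out = np.zeros_like(noisy)
    for i in range(0, len(noisy) - n_fft + 1, n_fft):
        spec = np.fft.rfft(noisy[i:i + n_fft])
        # Spectral floor (5% of input) limits musical-noise artefacts.
        mag = np.maximum(np.abs(spec) - noise_mag, 0.05 * np.abs(spec))
        out[i:i + n_fft] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), n_fft)
    return out
```

Here the noise spectrum comes from a stored exemplar of whichever texture the recogniser reports, which is precisely where texture identification adds value over a generic noise estimate.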


Intelligent hearing devices

Sensorineural hearing loss can be associated with many underlying causes. Within the cochlea there may be dysfunction of the inner hair cells (IHCs) or outer hair cells (OHCs), metabolic disturbance, or structural abnormalities. Ideally, audiologists should fit a patient's hearing aid based on detailed knowledge of the underlying cause of the hearing loss, since this determines the optimal device settings, or indeed whether to proceed with the intervention at all. Unfortunately, the hearing test employed in current fitting procedures, the audiogram, cannot reliably distinguish between many different forms of hearing loss.

More sophisticated hearing tests are needed, but they have proven hard to design. In the second arm of the project we propose a different approach: refine a model of the patient's hearing loss after each stage of the test, and use it to automatically design and select particularly informative stimuli for the next stage. These tests will be fast, accurate and capable of determining the form of the patient's specific underlying dysfunction. The model of a patient's hearing loss will then be used to set up hearing devices in an optimal way, using a mixture of computer simulation and listening tests.
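The refine-then-query loop described above can be sketched under strong simplifying assumptions — a single threshold parameter, a logistic psychometric function, and a grid posterior, none of which come from the project itself — as greedy Bayesian stimulus selection: present the level whose answer is expected to shrink the posterior entropy the most.

```python
import numpy as np

def psychometric(level, threshold, slope=1.0):
    """P(listener reports hearing a stimulus at `level` dB | true threshold)."""
    return 1.0 / (1.0 + np.exp(-slope * (level - threshold)))

def entropy(p):
    """Shannon entropy of a discrete distribution (0 log 0 := 0)."""
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def next_stimulus(posterior, thresholds, levels):
    """Greedily pick the level minimising the expected posterior entropy."""
    def expected_entropy(level):
        p_yes = psychometric(level, thresholds)
        post_yes = posterior * p_yes          # unnormalised posterior if "heard"
        post_no = posterior * (1.0 - p_yes)   # unnormalised posterior if "not heard"
        h = 0.0
        for post in (post_yes, post_no):
            mass = post.sum()                 # marginal probability of that answer
            if mass > 0:
                h += mass * entropy(post / mass)
        return h
    return min(levels, key=expected_entropy)

def update(posterior, thresholds, level, heard):
    """Bayes update of the threshold posterior after one trial."""
    like = psychometric(level, thresholds)
    post = posterior * (like if heard else 1.0 - like)
    return post / post.sum()
```

A simulated test then alternates `next_stimulus` and `update`, with the posterior mean as the threshold estimate; the same machinery extends, in principle, to richer multi-parameter models of the underlying dysfunction.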

Planned Impact

Hearing aid users will be a major beneficiary of the new technologies developed in this project. The impact on this user-group will be mediated through impacts on hearing aid manufacturers (who will incorporate the new intelligent audio processing technologies into their devices) and audiologists (who will adopt the new intelligent listening tests).

Industrial and Societal Impact

Hearing aid manufacturers are one of the key beneficiaries of the intelligent audio processing technologies developed in the first arm of the project. We anticipate that the audio processing strategies we develop will translate to improved performance of these devices in noisy environments, currently a major limitation. Our project partner Dr. Stefan Launer, Senior Vice President in charge of research at Phonak, will identify possible opportunities for commercialisation of the new techniques. In addition, Co-PI Prof. Brian Moore has strong links with other hearing aid companies, including GN ReSound, Starkey and Oticon, and he has a track record of commercialising academically developed technology.

Signal processing and machine learning are keystone technologies: the development of new techniques and methodologies triggers advances in a range of downstream application domains. The new methods developed in our proposal for audio recognition and noise removal have the potential to influence the broader field of machine hearing. For example, audio recognition can be used to tag soundtracks for audio- and video-search applications, whilst noise-suppression methods can be used for audio restoration for the digital industries. The PI has ongoing collaborations with industrial partners, including Google, and will consider potential applications of the methods developed in a broad context, not limited to hearing impairment.

Public Health and Societal Impact

Audiologists are a key user-group for the intelligent listening tests developed in the second arm of the project. Our tests will be faster and more accurate than those currently in use, freeing up the audiologist's time. Importantly, we will also develop new, more powerful tests that reveal the precise dysfunction underlying a patient's hearing loss. Our project partner Dr. David Baguley is Head of the Audiology Department and Cochlear Implant Centre at Addenbrookes Hospital and Honorary Professor of Audiology at Anglia Ruskin University. He will help ensure that the new hearing tests we develop have a large impact on the field of audiology, both through his influence on clinical practice and his influence on teaching new audiologists.

This research grant is focused on improving hearing aid technology. However, cochlear implant technology and fitting suffer from a similar set of limitations. The advanced audio processing schemes, listening tests and fitting methods developed in the grant therefore also have the potential to advance cochlear implant technology and fitting. Project partner Dr. Carlyon's role is to help identify and exploit these opportunities; he works closely with several cochlear implant companies, including Cochlear and Advanced Bionics.

Publications


Moore BC (2016) A review of the perceptual effects of hearing loss for frequencies above 3 kHz. International Journal of Audiology.

Wallaert N (2016) Comparing the effects of age on amplitude modulation and frequency modulation detection. The Journal of the Acoustical Society of America.

Moore B (2016) Effects of Sound-Induced Hearing Loss and Hearing Aids on the Perception of Music. Journal of the Audio Engineering Society.

Moore BC (2016) Evaluation of a method for enhancing interaural level differences at low frequencies. The Journal of the Acoustical Society of America.

Hernández-Lobato JM (2016) Black-box alpha-divergence minimization. 33rd International Conference on Machine Learning (ICML 2016).

 
Description We have developed a suite of new intelligent hearing tests and evaluated them on hearing-impaired listeners. The experiments indicate that the new tests are much more efficient than those currently used by audiologists -- they take far fewer trials to achieve the same accuracy -- and they do not require manual intervention, being fully automated. The more sophisticated tests attempt to diagnose more complex forms of hearing impairment than current tests can, which will be useful when fitting hearing aids. We have published five journal papers on these topics and have further work under review. We have additionally developed fundamental new machine learning tools to support this research, including methods to handle larger datasets, methods to model hearing loss and encapsulate experimental uncertainty about that hearing loss, and methods to learn efficiently and incrementally from data as the hearing test develops. These new methods have been published in the top machine learning conferences and journals (Journal of Machine Learning Research, Neural Information Processing Systems, International Conference on Machine Learning and the International Conference on Learning Representations). In addition to using these methods to support hearing tests, we have also applied them widely across a suite of machine learning tasks.
Exploitation Route Brian Moore's group has engaged audiologists in order to establish how to get these tests into clinic. We have approached technology firms who are interested in the new machine learning tools that we have developed.
Sectors Digital/Communication/Information Technologies (including Software),Healthcare

URL http://cbl.eng.cam.ac.uk/Public/Turner/WebHome
 
Description Prof. Moore and one of the RAs funded by this grant are actively working to get the new methods we have developed into audiology clinics. They are also working with companies so that they may utilise the computationally efficient methods we developed for computing the loudness of sounds.
First Year Of Impact 2019
Sector Healthcare
Impact Types Societal,Economic

 
Description Amazon research award
Amount $80,000 (USD)
Organisation Amazon.com 
Sector Private
Country United States
Start 03/2019 
 
Description Baroness de Turckhiem Fund Award
Amount £22,233 (GBP)
Organisation University of Cambridge 
Department Trinity College Cambridge
Sector Academic/University
Country United Kingdom
Start 01/2017 
End 01/2019
 
Description Facebook AI Partnership (GPU Server Gift)
Amount $50,000 (USD)
Organisation Facebook 
Sector Private
Country United States
Start 03/2017 
 
Description Google Research Focussed Grant
Amount $80,000 (USD)
Organisation Google 
Sector Private
Country United States
Start 01/2017 
 
Description Machine Learning for Tomorrow: Efficient, Flexible, Robust and Automated
Amount £3,100,000 (GBP)
Funding ID EP/T005637/1 
Organisation Engineering and Physical Sciences Research Council (EPSRC) 
Sector Public
Country United Kingdom
Start 01/2020 
End 01/2025
 
Description Microsoft Research Azure Computer Credits Award
Amount $50,000 (USD)
Organisation Microsoft Research 
Sector Private
Country Global
Start 02/2018 
End 02/2019
 
Description Microsoft Research 
Organisation Microsoft Research
Country Global 
Sector Private 
PI Contribution Joint supervision of PhD students and Postdoctoral Research Associates; Joint projects with applications within Microsoft and beyond.
Collaborator Contribution Joint supervision of PhD students and Postdoctoral Research Associates; Joint projects with applications within Microsoft and beyond.
Impact The partnership has existed for 5 months. In that time we have had three joint papers written and accepted together. In the longer term we expect the project to have economic and societal impact in the fields of health, gaming, and intelligent software systems. We have applied for a Prosperity Partnership grant together in order to expand and strengthen the existing partnership.
Start Year 2018