Decoding speech from brainwaves
Lead Research Organisation:
University of Oxford
Abstract
Brief description of the context of the research including potential impact
Decoding speech from the brain (e.g. heard speech, attempted speech, imagined speech,
etc.) without surgery would open up brain-computer interfaces (BCIs) to a range of novel
applications. This can range from assisting those with verbal communication difficulties in
the short term to creating new peripheral technologies for interacting with computers in the
long term.
Aims and Objectives
- Mainly: provide state-of-the-art methods for decoding inner speech from brain signals
- Additionally:
  - Improve the data-efficiency of existing approaches to scale accuracy faster, as collecting neural recordings is expensive and time-consuming
  - Find ways to harmonise heterogeneous brain data to leverage neural recordings at scale
  - Discover where and how representations of speech and thought are processed in the brain
Novelty of the research methodology
As the number of publicly available neural recordings has grown rapidly in recent years, deep learning has become a promising avenue for decoding these complex signals. The methods we are using are state-of-the-art deep learning approaches, and we are developing novel techniques tailored to the specific challenges of this problem: high-dimensional data, low signal-to-noise ratio, and very little labelled data.
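To make these challenges concrete, the sketch below simulates the decoding setting described above: many sensor channels, few labelled trials, and a weak class-specific response buried in noise. A plain ridge (L2-regularised) linear decoder stands in for the deep models discussed; all dimensions, the SNR value, and the simulated response templates are hypothetical illustrations, not drawn from this project's data or methods.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 32 sensor channels, 50 time samples per trial
n_channels, n_times = 32, 50
n_train, n_test = 40, 20

# Two fixed class "templates" stand in for evoked neural responses
templates = rng.standard_normal((2, n_channels, n_times))

def make_trials(n, snr=0.2):
    """Simulate labelled trials: a weak class template buried in sensor noise."""
    y = rng.integers(0, 2, n)
    X = snr * templates[y] + rng.standard_normal((n, n_channels, n_times))
    return X.reshape(n, -1), y  # flatten each trial to one feature vector

X_train, y_train = make_trials(n_train)
X_test, y_test = make_trials(n_test)

# Ridge (L2-regularised) linear decoder: solve (X^T X + lam*I) w = X^T y_signed.
# Regularisation is essential here because features (1600) outnumber trials (40).
lam = 100.0
d = X_train.shape[1]
w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(d),
                    X_train.T @ (2 * y_train - 1))

# Held-out decoding accuracy: sign of the projection predicts the class
accuracy = np.mean((X_test @ w > 0) == (y_test == 1))
```

Even this simple linear baseline recovers the class signal well above chance on simulated data, because the template energy is spread across many channels and time points; the deep models this project targets aim to exploit far richer structure than a single linear projection.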
Alignment to EPSRC's strategies and research areas (which EPSRC research area the project relates to). Further information on the areas can be found at http://www.epsrc.ac.uk/research/ourportfolio/researchareas/
This work is most related to the following research areas: Artificial intelligence technologies,
Assistive technology, rehabilitation and musculoskeletal biomechanics, Analytical Science,
Biological Informatics, Medical imaging, Natural language processing, Speech technology
Any companies or collaborators involved
Current collaborators include Mark Woolrich (Oxford Centre for Human Brain Activity) and Brendan Shillingford (Google DeepMind). It is likely that we will seek further collaborations from industry and academia as this work progresses.
People
| Name | ORCID iD |
|---|---|
| Dulhan Jayalath (Student) | |
Studentship Projects
| Project Reference | Relationship | Related To | Start | End | Student Name |
|---|---|---|---|---|---|
| EP/S024050/1 | | | 30/09/2019 | 30/03/2028 | |
| 2868397 | Studentship | EP/S024050/1 | 30/09/2023 | 29/09/2027 | Dulhan Jayalath |