Using multimodal models to facilitate adaptive language learning
Lead Research Organisation:
University of Cambridge
Department Name: Computer Science and Technology
Abstract
Technological advancements in recent years have given rise to many massive open online courses providing self-guided education to thousands of people around the world. These courses tend to deliver a linear curriculum with minimal personalisation. A more adaptive and personalised system has the potential to provide an enhanced and more efficient learning experience.
My work focuses on developing advanced adaptive learning systems within the context of English second language acquisition. Specifically, I will investigate the application of multimodal semantic models for knowledge modelling in order to facilitate adaptive language learning in line with user learning requirements.
Multimodal semantic models were inspired by research in cognitive science and neuroscience which suggests that human semantic concept representation is grounded in multiple modalities such as text and vision. These can be captured by contemporary technology, and continuing technological advancements increasingly enable researchers to record more complex modalities such as gesture and posture. The modalities can be fused together to create sophisticated semantic models that act as knowledge models for a user, and which may drive adaptive functions of a learning system.
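As a hedged illustration of the fusion idea (a simple baseline from the multimodal semantics literature, not necessarily the method this project will use), vectors from two modalities can be L2-normalised and concatenated into a single multimodal concept representation. The concept names and embedding values below are invented for demonstration.

```python
import numpy as np

def fuse(text_vec, image_vec):
    """Fuse two modality vectors by L2-normalising each and concatenating.

    Normalising first gives each modality equal weight in the fused
    representation; the relative weighting is a modelling choice.
    """
    t = np.asarray(text_vec, dtype=float)
    v = np.asarray(image_vec, dtype=float)
    t = t / np.linalg.norm(t)
    v = v / np.linalg.norm(v)
    return np.concatenate([t, v])

def similarity(a, b):
    """Cosine similarity between two fused concept vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Toy 3-d "embeddings" for two concepts (illustrative values only).
cat = fuse([0.9, 0.1, 0.0], [0.8, 0.2, 0.1])
dog = fuse([0.8, 0.2, 0.1], [0.7, 0.3, 0.2])
print(round(similarity(cat, dog), 3))
```

In a learning system, similarities between such fused vectors could feed a knowledge model of which concepts a learner is likely to confuse, though richer fusion methods (e.g. learned joint embeddings) exist.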
Furthermore, we can use technology to control the application environment and the information presented to the user in the form of visual and auditory stimuli; the resurgence of virtual and augmented reality technologies also presents advanced methods of controlling user stimuli, for example where a learning application consists of a fully immersive virtual world. This ability to capture and control an increasing amount of user data and stimuli presents an opportune moment to investigate advanced adaptive learning systems.
Organisations
People

| Name | ORCID iD |
|---|---|
| Paula Buttery (Primary Supervisor) | |
| Christopher Davis (Student) | |
Publications
Studentship Projects
| Project Reference | Relationship | Related To | Start | End | Student Name |
|---|---|---|---|---|---|
| EP/N509620/1 | | | 30/09/2016 | 29/09/2022 | |
| 1940766 | Studentship | EP/N509620/1 | 30/09/2017 | 30/07/2022 | Christopher Davis |
| EP/R513180/1 | | | 30/09/2018 | 29/09/2023 | |
| 1940766 | Studentship | EP/R513180/1 | 30/09/2017 | 30/07/2022 | Christopher Davis |