Artificial Intelligence and Cognitive Science

Lead Research Organisation: University of Birmingham
Department Name: School of Psychology

Abstract

This interdisciplinary project bridges artificial intelligence, computational linguistics, and computational and cognitive neuroscience. Driven by progress in artificial intelligence and computer vision, cognitive computational neuroscience has recently used deep learning models to unravel the features represented at different levels of the cortical hierarchy. Combining deep learning with brain population responses measured with fMRI or EEG has revealed that higher cortical areas along the ventral visual processing stream represent features that emerge in higher layers of deep convolutional networks trained on object categorization (see Yamins and DiCarlo, Nature Neuroscience, 2016). Likewise, research has begun to correlate features emerging in neural networks trained on auditory signals with neural representations along the auditory processing stream of the human brain. While the extent to which deep neural networks truly mimic the computations performed by human observers is still debated (e.g. see Lake, Ullman, Tenenbaum & Gershman, 2017), the combination of deep neural networks and human neuroimaging has already proven a powerful approach to furthering our understanding of the human brain. Conversely, differences and similarities in learning/training, computations and representations between human and artificial systems will inform and inspire further advances in artificial intelligence. To our knowledge, research combining deep learning with brain imaging data has focused predominantly on the processing of signals from single sensory modalities. Yet, in our natural environment, our senses are constantly bombarded with many different sensory signals. For effective interactions with the world, human and artificial agents need to integrate information across multiple senses into coherent and more reliable representations.
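The layer-to-brain comparisons described above are commonly carried out with representational similarity analysis (RSA): a dissimilarity matrix over stimuli is computed separately from a network layer's activations and from measured brain responses, and the two matrices are compared. A minimal numpy sketch, using toy random data in place of real network activations and fMRI responses (all dimensions and variable names here are illustrative assumptions, not the project's actual pipeline):

```python
import numpy as np

def rdm(responses):
    """Representational dissimilarity matrix: 1 - Pearson correlation
    between the response patterns for every pair of stimuli.
    `responses` has shape (n_stimuli, n_features)."""
    return 1.0 - np.corrcoef(responses)

def _ranks(x):
    """Ranks of a 1-D array (for a Spearman-style comparison)."""
    order = np.argsort(x)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(len(x))
    return ranks

def rsa_score(layer_acts, brain_acts):
    """Rank correlation between the upper triangles of the two RDMs."""
    iu = np.triu_indices(layer_acts.shape[0], k=1)
    a = _ranks(rdm(layer_acts)[iu])
    b = _ranks(rdm(brain_acts)[iu])
    return np.corrcoef(a, b)[0, 1]

# Toy data: 20 stimuli driving both a model layer and a set of voxels.
rng = np.random.default_rng(0)
stimuli = rng.standard_normal((20, 50))           # shared stimulus features
layer = stimuli @ rng.standard_normal((50, 128))  # model layer activations
brain = stimuli @ rng.standard_normal((50, 300))  # fMRI voxel responses
print(round(rsa_score(layer, brain), 2))
```

Because both toy response sets are linear projections of the same stimuli, the RDMs share structure and the score is positive; with real data the score would be computed per layer and per brain region to map layers onto the cortical hierarchy.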
Most prominently, speech comprehension in noise is greatly facilitated when the observer combines the auditory speech signal with concurrent visual input, i.e. temporally correlated articulatory facial movements. In this project we will exploit the synergies between artificial intelligence and computational neuroscience to investigate how artificial and human systems generate speech representations from vision and audition.
1. We will train deep neural networks on speech recognition (e.g. word recognition), both independently and jointly on visual (i.e. articulatory movements) and auditory speech signals. We expect that lower layers of the network will form predominantly unisensory representations, while higher layers will generate representations that combine information from vision and audition. We will then investigate the representations elicited by unisensory and audiovisual inputs (e.g. under partial observability), at different signal-to-noise ratios, and under various distortions.
2. Using psychophysics, we will investigate the representational space formed by human observers. How do human observers recognize speech in unisensory and audiovisual contexts? How is their performance affected by different levels of noise or distortion? Do neural networks and human observers make similar or different confusions (e.g. comparison of confusion matrices)?
3. Combining fMRI and EEG, we will investigate the neural representations across the visual and auditory processing hierarchies and relate these to the features that emerge in different layers of the neural network.
Comparing human and machine learning performance on speech recognition tasks will provide insights into the similarities and differences in the representations, computations and learning of human and artificial systems for speech recognition.
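The two-stream architecture hypothesized in point 1 — unisensory lower layers feeding a fused audiovisual stage — can be sketched in a few lines. This is a toy numpy forward pass with random, untrained weights; the input dimensions, class count, and fusion-by-concatenation scheme are illustrative assumptions, not the project's actual model:

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

class AudioVisualNet:
    """Toy two-stream network: separate audio and visual encoders
    (the 'lower layers') followed by a shared fusion stage (the
    'higher layers') that combines both senses into word scores."""
    def __init__(self, d_audio=40, d_visual=64, d_hidden=32, n_words=10):
        self.Wa = rng.standard_normal((d_audio, d_hidden)) * 0.1
        self.Wv = rng.standard_normal((d_visual, d_hidden)) * 0.1
        self.Wf = rng.standard_normal((2 * d_hidden, n_words)) * 0.1

    def forward(self, audio, visual):
        ha = relu(audio @ self.Wa)    # unisensory auditory features
        hv = relu(visual @ self.Wv)   # unisensory visual features
        fused = np.concatenate([ha, hv], axis=-1)
        logits = fused @ self.Wf      # audiovisual word representation
        return ha, hv, logits

net = AudioVisualNet()
audio = rng.standard_normal((5, 40))   # e.g. spectrogram frames
visual = rng.standard_normal((5, 64))  # e.g. lip-movement features
ha, hv, logits = net.forward(audio, visual)
print(logits.shape)  # one score per candidate word, per input
```

Exposing `ha`, `hv`, and the fused stage separately is what makes the layer-wise comparisons in point 3 possible: each level of the network can be tested against a different level of the cortical hierarchy.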
The results will advance our understanding of the neural mechanisms underlying audiovisual speech recognition and inspire innovations in algorithms and training schemes of artificial systems for audiovisual speech recognition.
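The comparison of human and network confusions proposed in point 2 above can be made concrete by correlating the error structure of two confusion matrices. A minimal numpy sketch with fabricated toy responses for four hypothetical words (the response patterns are invented purely for illustration):

```python
import numpy as np

def confusion_matrix(true, pred, n_classes):
    """Row-normalised confusion matrix: rows are presented words,
    columns are reported words, each row a response distribution."""
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(true, pred):
        cm[t, p] += 1
    return cm / cm.sum(axis=1, keepdims=True)

def confusion_similarity(cm_a, cm_b):
    """Pearson correlation of the off-diagonal (error) cells:
    do the two observers make the same kinds of mistakes?"""
    mask = ~np.eye(cm_a.shape[0], dtype=bool)
    return float(np.corrcoef(cm_a[mask], cm_b[mask])[0, 1])

# Toy responses: both observers tend to confuse word 0 with 1, and 2 with 3.
true_labels  = [0, 0, 1, 1, 2, 2, 3, 3]
human_resp   = [0, 1, 1, 0, 2, 3, 3, 2]
network_resp = [0, 1, 1, 1, 2, 3, 3, 3]
cm_h = confusion_matrix(true_labels, human_resp, 4)
cm_n = confusion_matrix(true_labels, network_resp, 4)
print(round(confusion_similarity(cm_h, cm_n), 2))
```

A high score here would indicate that the network's representational space carves up the speech stimuli along similar lines to human perception; a low score would point to divergent computations despite comparable accuracy.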

Publications


Studentship Projects

Project Reference  Relationship  Related To    Start       End         Student Name
EP/R512436/1                                   01/10/2017  31/03/2022
1944028            Studentship   EP/R512436/1  01/10/2017  30/09/2021  Michael Joannou