Neural Representation of the Identities and Expressions of Human Faces

Lead Research Organisation: MRC Centre Cambridge
Department Name: MRC Cognition and Brain Sciences Unit

Abstract

Abstracts are not currently available in GtR for all funded research. This is normally because the abstract was not required at the time of proposal submission, but may be because it included sensitive information such as personal details.

Publications

Furl N (2013) Top-Down Control of Visual Responses to Fear by the Amygdala in The Journal of Neuroscience

Furl N (2014) Cross-frequency power coupling between hierarchically organized face-selective areas in Cerebral Cortex

Furl N (2012) Dynamic and static facial expressions decoded from motion-sensitive areas in the macaque monkey in The Journal of Neuroscience

 
Description The brain is composed, in part, of pathways: connected networks of areas that represent information and perform an important function. The main objective of our ESRC grant was to explore whether there are multiple distinguishable brain pathways responsible for recognition of identities versus recognition of expressions and how these pathways relate to neural representations of facial form and facial motion. We hypothesised that an analysis of brain responses to dynamic videos of facial movements would reveal new insight into this pathway structure in the brain. One of our aims was to use computational models (e.g., dynamic causal modelling, DCM) to test models of pathway structure against our data.
Below I provide brief descriptions of the ESRC-funded studies that used dynamic faces and/or DCM methods to reveal these brain pathways, their facial form and motion representations, and how they interact during recognition of facial expressions. We have also used connectivity models and methods to learn more about these pathways. For example, we found that (a) "face-blind" individuals (developmental prosopagnosics), who are less able to recognise faces, show deficits in how their brain areas are connected, suggesting that a facial identity-related pathway is disrupted in these individuals; and (b) brain areas that respond to faces use a characteristic "cross-frequency coupling" pattern to communicate, which may be a signature of information transmission in the face recognition pathways. The grant is also associated with new studies, still ongoing and unpublished, that further explore these topics.

Please note that part of this research was conducted at the MRC Cognition and Brain Sciences Unit (CBU) in Cambridge and part at Royal Holloway, University of London (RHUL). The work at these institutions derived from a single grant project but is listed on ResearchFish as two separate grants, so we have reported separate outputs for each. Here, we describe the outputs resulting from work at the CBU (ES/I01134X/1). Please see the ResearchFish report for ES/I01134X/2 for details of outputs resulting from work conducted at RHUL.


(1) The first output that we published was an analysis of existing macaque functional magnetic resonance imaging (fMRI) data (a type of brain imaging that creates detailed maps of brain responses to stimuli, such as faces). Using statistical methods known as multivariate decoding, we found evidence for facial expression representations in brain areas that ordinarily respond to motion. This paper suggests an important role for motion-sensitive brain areas in the recognition of both dynamic and static facial expressions.

Furl N*, Hadj-Bouziane F*, Liu N, Averbeck BB, Ungerleider LG. 2012. Dynamic and static facial expressions decoded from motion-sensitive areas in the macaque monkey. J Neurosci 32:15953-62. *co-first authors

(2) The second output that we published is also an analysis of a previously existing dataset. These data come from magnetoencephalography (MEG). MEG tracks brain responses with millisecond resolution, so we can analyse the frequencies at which activity increases and decreases (neural oscillations). Using DCM as a connectivity modelling technique, we discovered a characteristic frequency signature that face-responsive areas use to communicate with each other. This finding of a fundamental mechanism could aid future research that aims to track how the different face recognition pathways transfer information.
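The core idea of cross-frequency power coupling is that slow-band power fluctuations in one area track fast-band power fluctuations in another. The toy analysis below illustrates this with two simulated signals whose band-limited amplitude envelopes share a common slow modulation; the sampling rate, frequencies, and correlation measure are all illustrative choices, and this is a simple correlation sketch, not the DCM approach used in the paper.

```python
# Toy cross-frequency power coupling: correlate the alpha-band power
# envelope of one simulated "area" with the gamma-band power envelope
# of another. All parameters are hypothetical.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0                                    # sampling rate (Hz), assumed
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(1)

# A shared slow modulation drives a 10 Hz rhythm in area A and the
# amplitude of a 40 Hz rhythm in area B.
envelope = 1 + 0.5 * np.sin(2 * np.pi * 0.5 * t)
area_a = envelope * np.sin(2 * np.pi * 10 * t) + 0.2 * rng.normal(size=t.size)
area_b = envelope * np.sin(2 * np.pi * 40 * t) + 0.2 * rng.normal(size=t.size)

def band_power(x, lo, hi):
    """Band-pass filter, then take the analytic amplitude envelope."""
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, x)))

# Power-power coupling: alpha (8-12 Hz) in A versus gamma (35-45 Hz) in B.
coupling = np.corrcoef(band_power(area_a, 8, 12),
                       band_power(area_b, 35, 45))[0, 1]
print(f"power-power coupling: {coupling:.2f}")
```

Because both envelopes follow the same slow modulation, the correlation comes out high; with independent envelopes it would hover near zero.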

Furl N, Coppola R, Averbeck BB, Weinberger D. 2014. Cross-frequency power coupling between hierarchically organized face-selective areas. Cerebral Cortex 24:2409-20.


(3) The third output that we published also used DCM to develop pathway models of the face recognition system. However, in this study, DCM was applied to fMRI data. We aimed to discover how brain areas representing form (e.g., FFA) and those representing motion (e.g., V5) interact when confronted with dynamic and static fearful expressions. We found that fearful faces enhanced responses in motion areas when faces were moving, but enhanced responses in form areas when faces were static. Our connectivity modelling showed that this occurred through the influence of the amygdala, an area involved in emotion. Thus, it seems the form and motion pathways can each detect fearful information, depending on whether the face is moving, and both are controlled by the amygdala.

Furl N, Henson RN, Friston KJ, Calder AJ. 2013. Top-down control of visual responses to fear by the amygdala. J Neurosci 33:17435-43.


(4) A fourth output that we published investigated why one brain area, the superior temporal sulcus (STS), responds only to faces that are moving, and not to objects or static stimuli. Our DCM analysis showed that the form and motion pathways work together to produce responses in the STS that are specific to facial form and movement. We conclude, as our grant hypothesised, that dynamic facial stimuli can reveal interactions between the form and motion pathways subserving face recognition.

Furl N, Henson RN, Friston KJ, Calder AJ. 2015. Network Interactions Explain Sensitivity to Dynamic Faces in the Superior Temporal Sulcus. Cerebral Cortex 25:2876-82.
Exploitation Route Our research has applications in developing artificial visual recognition systems for video information as well as developing clinical models of brain disorders. These studies suggest ways that abstract visual information can be coded by neurons as well as the computations these neurons perform when coding. This knowledge can be used to devise and improve artificial visual recognition systems. Particularly, our results using dynamic facial stimuli can help develop software which can visually recognise video. Our research using the macaque monkey will help develop animal models of visual function. Our research on oscillatory communication between brain regions is a first step towards developing sophisticated models of disorders such as schizophrenia.
Sectors Education

 
Description Please note that ES/I01134X/1 and ES/I01134X/2 are the same grant, held by the PI at two different institutions, so they have the same impacts. The outputs of this grant were designed from the beginning to be primarily academic in nature. Although there are currently few documented demonstrations of influence outside academia, the grant has potential for indirect influence in the long term. It addresses how the visual system recognises facial identities and expressions from dynamic videos. Understanding how emotional information is detected by neural mechanisms in the visual system may, in the long term, help those with disorders of face perception, such as prosopagnosia, or those with anxiety disorders, who are adversely affected when encountering emotional or threatening visual information. As a result of this grant, our laboratory has developed a method for extracting and quantifying facial motion from video, as well as methods for animating computer-rendered 3D face models with the extracted motion data. Our first results from these methods were published in our Furl et al. (2017) NeuroImage paper. Behavioural and brain imaging studies using these 3D motion-rendering methods are now in preparation for further publication. With further study, these methods may improve how automated systems process facial video data, and may enable new graphics applications. The grant has also had a positive training outcome for Dr Furl's career: he has transitioned from a soft-money principal investigator at the MRC CBU to a permanent lectureship at Royal Holloway, University of London.