Understanding age-related changes in processing facial emotion

Lead Research Organisation: University of Glasgow
Department Name: School of Psychology

Abstract

It is estimated that, by 2060, one quarter of the British population will be over 65, many of whom may still be working. The brain ages as dramatically as the body over a lifetime, yet the functional consequences of brain ageing are not well understood. While we can all identify the appearance of cognitive deficits related to age and illness, current research suggests that maintained cognitive performance masks massive rearrangement and reorganization of cortical processes.
Cognitive ageing manifests damagingly in a decline in the basic and essential ability to interpret social signals. Research has established that ageing adults show specific differences in the emotional interpretation of facial expressions, particularly anger. Differences in the perception of emotion can have a drastic impact on everyday social interactions and decision-making. Failure to accurately interpret social signals may lead to vulnerabilities in identifying deception and in predicting the behavioural intentions of others.
Our first step in understanding ageing adults' vulnerabilities in emotion perception will be to accurately characterize how the ageing brain perceives emotion. While it is widely acknowledged that facial and bodily biological motion carry important information for categorizing social signals, most research on the cortical representation of faces and bodies has been done with static images. Thanks to recent developments in 4-D graphics, we can now investigate how the brain extracts relevant information from rapidly changing dynamic cues. Our research will first investigate the processing of key social signals, emotionally expressive faces, using newly developed, state-of-the-art, realistic 4-D dynamic face stimuli with tightly controlled motion parameters. Combining eye tracking with random variation in these parameters, we will first establish which of them contain the crucial emotion information that older adults rely on when categorizing realistic faces.
We know that cortical regions interact with each other on a fast, millisecond time scale, forming relatively specialized networks. An important hypothesis is that, with ageing, these specialized networks lose function and other brain regions are recruited to compensate. We will test this hypothesis by characterizing the cortical network that encodes and integrates rapidly changing motion parameters in facial expressions. For this, we will use the time-resolved brain signal (magnetoencephalography, or MEG). From the signal we will estimate both when and where motion information is encoded in the cortex. We have a working hypothesis, based on previous work, that different oscillatory bands in the brain code different face features (a bit like different frequencies carrying different radio programmes). This is known as multiplexing. To quantify precisely which aspects of the signal carry information about the diagnostic features of facial motion, we will use a powerful and general method for measuring how much information one signal carries about another: the mutual information framework. We will record MEG data from three groups of participants: 20-35, 40-55, and 60-75 years old.
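As an illustration of this framework, the mutual information between two discrete signals (for example, a binarized stimulus parameter and a binned neural or behavioural response) can be estimated from their joint histogram. The sketch below is a generic textbook estimator for illustration only, not the analysis pipeline used in the project:

```python
import numpy as np

def mutual_information(x, y):
    """Mutual information (bits) between two discrete signals,
    estimated from their joint histogram."""
    xs, x_idx = np.unique(x, return_inverse=True)
    ys, y_idx = np.unique(y, return_inverse=True)
    joint = np.zeros((len(xs), len(ys)))
    for i, j in zip(x_idx, y_idx):
        joint[i, j] += 1           # co-occurrence counts
    joint /= joint.sum()           # joint probabilities
    px = joint.sum(axis=1, keepdims=True)   # marginal of x
    py = joint.sum(axis=0, keepdims=True)   # marginal of y
    nz = joint > 0                 # skip empty cells (0 log 0 = 0)
    return float((joint[nz] * np.log2(joint[nz] / (px @ py)[nz])).sum())

x = np.array([0, 1, 0, 1, 0, 1, 0, 1])
y = np.array([0, 0, 1, 1, 0, 0, 1, 1])
print(mutual_information(x, x))  # 1.0 bit: a binary signal fully predicts itself
print(mutual_information(x, y))  # 0.0 bits: no shared structure here
```

The measure is model-free: it captures any statistical dependence between the two signals, not just linear correlation, which is what makes it suitable for relating arbitrary stimulus features to brain responses.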
We expect the older participants to show different encoding strategies based on cortical slowing, and anticipate that they will recruit both more and different brain areas to perform some specialized cortical tasks at the same level of competence as younger participants.
Our research will give us insight into the fundamental coding schemes of the cortex, and also into the adaptive strategies available to the older brain: for example, do older adults require the recruitment of additional brain areas that are usually involved in tasks other than the specialized computation of facial information? Finally, we anticipate that the cortical strategies older participants use may be predictable from general changes within the brain.

Technical Summary

It is estimated that, by 2060, one quarter of the British population will be over 65, many of whom will still be working. Over a lifetime, the brain ages as dramatically as the body (Grady, 2008), manifesting itself in the decline of the basic and essential task of interpreting social signals. Research has established that ageing adults show specific differences in the emotional interpretation of facial expressions, particularly anger. We will study how oscillatory networks in the ageing brain extract and further process visual information to recognize facial expressions of emotion at different intensities. Specifically, we will quantify changes in the information processing parameters of naturalistic, dynamic expressive faces in groups of younger (20-35 years), middle-aged (40-55 years) and older (60-75 years) adults. To achieve this aim, we will combine a unique Glasgow-based platform that precisely controls the photo-realistic biological motion of faces with recent developments in the application of information theoretic mathematics to the analysis of brain signals (i.e. mutual information and transfer entropy; Schyns et al., 2011). We will apply these information theoretic measures to behavioural variables (e.g. emotion classification responses and reaction times) and to the nodes of cortical networks reconstructed from oscillatory MEG sources (guided by spatially resolved fMRI). We will seek to further our understanding of the dynamic face information required to classify facial expressions, of the brain regions recruited to process this critical information, and of how these brain regions process and transfer information between stimulus onset and categorization response in cortical networks. This focus on information processing will directly test the dedifferentiation hypothesis, i.e. the idea that an initially differentiated brain progressively recruits other regions as a compensation strategy to counteract the effects of ageing.
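Transfer entropy extends mutual information to directed influence between time series: how much the past of one signal reduces uncertainty about the next sample of another, beyond what that signal's own past already explains. The sketch below is a minimal discrete-valued estimator with a history length of one, purely for illustration (analyses of real MEG sources use more sophisticated estimators):

```python
import numpy as np
from collections import Counter

def transfer_entropy(x, y):
    """Transfer entropy TE(X->Y) in bits, history length 1, for discrete
    series: how much x[t-1] reduces uncertainty about y[t] beyond y[t-1]."""
    triples = list(zip(y[1:], y[:-1], x[:-1]))   # (y_t, y_prev, x_prev)
    n = len(triples)
    p_full = Counter(triples)
    p_past = Counter((yp, xp) for _, yp, xp in triples)
    p_self = Counter((yt, yp) for yt, yp, _ in triples)
    p_prev = Counter(yp for _, yp, _ in triples)
    te = 0.0
    for (yt, yp, xp), c in p_full.items():
        p_cond_full = c / p_past[(yp, xp)]           # p(y_t | y_prev, x_prev)
        p_cond_self = p_self[(yt, yp)] / p_prev[yp]  # p(y_t | y_prev)
        te += (c / n) * np.log2(p_cond_full / p_cond_self)
    return te

# y copies x with a one-step lag, so information flows from x to y only
rng = np.random.default_rng(0)
x = rng.integers(0, 2, 1000)
y = np.roll(x, 1)
print(transfer_entropy(x, y))  # close to 1 bit
print(transfer_entropy(y, x))  # close to 0 bits
```

The asymmetry of the estimate (large in one direction, near zero in the other) is what makes transfer entropy useful for characterizing directed communication between nodes of a cortical network.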

Planned Impact

Benefits to the general population:
Our realistic 4-D stimuli have just been developed, and have been successfully used to identify differences in the cultural models of emotion between Western and East Asian observers. This means that we have the first available validated model of emotion dynamics that can be varied parametrically. While older adults do not represent a different culture, very little is known about differences in the interpretation of social signalling in the ageing population. This will be the first study to address how and why older adults read emotional signals differently. Cross-generational failures of communication can be very distressing and isolating to both old and young. By identifying weaknesses in interpreting key emotional and social signals in the older population, we can identify circumstances in which communication is likely to break down. This will be of potential help to both employers and policy makers in understanding where the older population may need special support to fulfil the requirements of jobs that demand quick decision-making on the basis of implicit social signals.

Healthcare providers and charities concerned with the wellbeing of the aged population will also be interested and able to benefit from increased understanding of how older adults regulate their emotional responses, as will older adults themselves.

Benefits to society, and to communication technology within society:
With increasing geographical mobility and the fracturing of family structures across different locations around the country, older people are gradually beginning to use now widely available mediated technologies such as Skype and FaceTime to maintain vital emotional contact with the outside world. Younger adults have grown up with these technologies, and have potentially learned to accept them as familiar and direct means of communication. Embeddedness in social and emotional structures is a major promoter of wellbeing and hence health, protective against isolation, anxiety and depression. We do not yet know how effective these technologies are at supporting emotional communication in older adults, or how they can be adapted to help older adults get the crucial information they need. By modelling how older adults attend to and interpret dynamic digital images of faces, we can guide research into how these interfaces can be improved. This research will interest providers of technologies such as Skype, FaceTime and video conferencing, as they have the potential to develop age-specific interfaces.

Our research will thus benefit the ageing population, who will learn how best to deploy technology to obtain the information they require to make accurate judgments about emotional states. It will also potentially benefit providers of technological services and products to the ageing population, and promote the development of age-friendly technology.

One focus of our proposal is the role of individual differences in determining cortical processing. As our society ages, determining the preconditions for healthy cognitive ageing is becoming a widespread preoccupation among the general public, policy makers and health advisors. Our research will describe the functional consequences of individual differences in connectivity and oscillatory rhythms: some of those differences may be due to genetic factors, while others may be susceptible to cognitive training. Our research will help to characterize what healthy ageing in the brain really means, and indirectly promote the possibility of achieving it.
 
Description The model of the brain as an information processing machine is a profound hypothesis in which neuroscience, psychology and the theory of computation are now deeply rooted. Modern neuroscience aims to model the brain as a network of densely interconnected functional nodes. However, to model the dynamic information processing mechanisms of perception and cognition, it is imperative to understand brain networks at an algorithmic level, i.e. as the information flow that network nodes code and communicate. In this grant, we used innovative methods (Directed Feature Information) to reconstruct examples of possible algorithmic brain networks that code and communicate the specific features underlying a variety of face categorization tasks. In each observer, we identified a network architecture comprising one occipito-temporal hub where the features underlying the perceptual decisions dynamically converge. Our focus on detailed information flow represents an important step towards a new brain algorithmics to model the mechanisms of perception and cognition.
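Measures of this kind build on conditioned information quantities such as conditional mutual information, which captures what two signals share beyond what a third already accounts for. The sketch below is a generic discrete estimator given for illustration only; it is not the published Directed Feature Information method:

```python
import numpy as np
from collections import Counter

def cond_mutual_information(x, y, z):
    """Conditional mutual information I(X;Y|Z) in bits for discrete data:
    what X and Y share beyond what Z already explains."""
    n = len(x)
    p_xyz = Counter(zip(x, y, z))
    p_xz = Counter(zip(x, z))
    p_yz = Counter(zip(y, z))
    p_z = Counter(z)
    cmi = 0.0
    for (xi, yi, zi), c in p_xyz.items():
        # p(x,y,z)*p(z) / (p(x,z)*p(y,z)); the 1/n normalizations cancel
        cmi += (c / n) * np.log2(c * p_z[zi] / (p_xz[(xi, zi)] * p_yz[(yi, zi)]))
    return cmi

# XOR example: y shares no information with x alone, but is fully
# determined by x once z is known
rng = np.random.default_rng(1)
x = rng.integers(0, 2, 1000)
z = rng.integers(0, 2, 1000)
y = x ^ z
print(cond_mutual_information(x, y, np.zeros_like(x)))  # ~0 bits (plain MI)
print(cond_mutual_information(x, y, z))                 # ~1 bit
```

The XOR case shows why conditioning matters for network analyses: a receiving node can carry stimulus information that is invisible to pairwise measures and only revealed once the state of another node is taken into account.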
Exploitation Route We have written a toolbox implementing our methods, which is currently submitted for publication.
Sectors Healthcare