Data-driven Analysis of the Dynamics of Information Acquisition Over Time During Social Judgement

Lead Research Organisation: University of Glasgow
Department Name: College of Medical, Veterinary and Life Sciences

Abstract

At the heart of human society, social interactions shape and maintain the complex network of relationships between individuals and groups. Yet, whereas some social exchanges foster healthy relations with considerable benefits for society, others actively jeopardize the social connections on which harmonious societies are built. A powerful driver of such social degeneration is prejudice, reflected in the stereotypes harbored by individuals. Specifically, stereotypes bias thought and action via top-down processes, imposing unsubstantiated inferences about competence [4-6], trustworthiness [12, 13] and criminality [34] based on the recognition of out-group identifiers. However, little is known about how stereotypes and prejudice exert top-down control to shape the dynamics of information extraction and integration, thereby influencing critical social judgments.
To address this, we propose a state-of-the-art data-driven and modeling analysis of the dynamics of information acquisition over the time course of social judgments. Our proposal comprises a main theoretical hypothesis, studied under three complementary strands, each of which constitutes one main research objective.

Theoretical hypothesis. Complex social judgments (e.g. of facial attractiveness, dominance and trustworthiness) depend on the conjunction of visual information acquisition and the influence of top-down factors (e.g. prejudice, stereotypes and context) on information processing. We will document the role of top-down factors in the dynamics of information extraction and its use for social judgments.
Strand 1. We will combine eye movements and state-of-the-art reverse correlation methods to understand how top-down social categories (prejudice and stereotypes) modulate the sequences of eye fixations on static faces that extract information for social judgments.
Strand 2. Real-world social interactions typically happen with dynamic faces. Extending Strand 1, we will investigate social information extraction from photo-realistic, three-dimensional dynamic faces.
Strand 3. We will integrate the data-driven analyses of Strands 1 and 2 in a Bayesian model of information acquisition via eye movements that merges the top-down factors of social priors (prejudice and stereotyping) with bottom-up preferences for facial features. We also plan to adapt this model to the development of communicative dynamic avatars.
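The core computation behind Strand 3 can be illustrated with a minimal sketch: a top-down social prior over face regions is combined multiplicatively with bottom-up feature salience to yield a posterior over where the next fixation should land. The region names, prior weights, and salience values below are purely illustrative assumptions, not the project's actual model or data.

```python
import numpy as np

# Hypothetical sketch of a Bayesian fixation model: a top-down social
# prior (stereotype-driven expectations over face regions) is merged
# with bottom-up salience. All numbers here are illustrative only.

regions = ["eyes", "nose", "mouth"]

# Top-down prior: social expectations bias sampling toward the eyes.
prior = np.array([0.5, 0.2, 0.3])

# Bottom-up likelihood: feature salience of each region for this face.
salience = np.array([0.4, 0.4, 0.2])

# Bayesian combination: posterior proportional to prior * likelihood.
posterior = prior * salience
posterior /= posterior.sum()

# Predicted target of the next fixation under this toy model.
next_fixation = regions[int(np.argmax(posterior))]
print(dict(zip(regions, posterior.round(3))), next_fixation)
```

Under these toy values, the prior and salience jointly favor the eye region; in the proposed research the prior would be estimated from prejudice/stereotype measures and the likelihood from reverse-correlation data, rather than fixed by hand.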

Benefits.
Academia. The Social Sciences and Neuroscience, Information and Communication Technology (ICT). Determining precisely the integration of information that drives social judgments of attractiveness, trustworthiness and dominance is centrally important for social science. Normative models of social signals derived from our research will also inform how these judgments break down in populations with social deficits (e.g. Autism Spectrum Disorder and Asperger Syndrome).

Industrial stakeholders. Gaming industry, robotics, and the social networking industry, including the exploding field of social computing, with higher-level parameterisation of the face and a better understanding of how the face communicates important social dimensions. The gaming and 3D-movie industries rely on detailed understandings of social signals. Applications of their automated recognition range from homeland security to remote communication via avatars and companion robots, with automated expression applying primarily to the latter two. So far, the 3D-movie/animation industries use point-light displays to capture social signals. Yet, our 4D platform provides models with a far greater power of generalisation.

User community. Intuitive, socially intelligent interfaces for mediated communication; the older population and companion robots; the wider impact of social networking on society. Ultimately, the user community will benefit from ecologically improved (i.e., more human), more intelligent interfaces for mediated communication based on socially relevant traits.

Planned Impact

Who will benefit from this research?
- Academic beneficiaries. The Social Sciences and Neuroscience, Information and Communication Technology (ICT).
- Industrial stakeholders. Gaming industry, robotics, social networking industry. More generally, the exploding field of social computing, with higher-level parameterisation of the face and better understanding of how the face communicates important social dimensions.
- User community. Intuitive, socially intelligent interfaces for mediated communication. Older population, companion robots. Wider impact of social networking on society.

How will they benefit from this research?
- Academic
o Social Sciences and Neuroscience. Dimensions of physical attractiveness, dominance and trustworthiness have primary importance for social judgements that directly influence the lives of individuals. Determining precisely the integration of information causing these judgements is centrally important for social science. Normative models of social signals derived in our research will also inform how these judgements break down in populations with social deficits (e.g. Autism Spectrum Disorder and Asperger Syndrome). The Glasgow group currently collaborates on this topic with Ralph Adolphs (Caltech, USA) to develop more effective remediation therapies. A previous related collaboration using our methods produced a Nature publication in 2005.
o ICT. With the development of the digital economy, avatars and companion robots (i.e., realistic and generalisable systems for automatically expressing and reading social signals) depend on accurate models of these signals. This is important not only for producing modulated signals that communicate with precision, but also for reading signals of varying intensities for accurate interpretation. Our research directly addresses these issues and has been very well received in the ICT community, due both to the flexibility of our 4D platform (i.e. it can derive models of varying intensity for any social signal arising from facial movements) and to the fact that our 4D models are intrinsically and uniquely grounded in human perception, which validates them.
- Industry. The gaming and 3D-movie industries rely on detailed understandings of social signals. Applications of their automated recognition range from homeland security to remote communication via avatars and companion robots, with automated expression applying primarily to the latter two. So far, the 3D-movie/animation industries use point-light displays to capture social signals. Yet, our 4D platform provides models with a far greater power of generalisation.
- User community. Ultimately, the user community will benefit from ecologically improved (i.e., more human), more intelligent interfaces for mediated communication based on socially relevant traits.

Publications

Description In human social interaction, the face is a central tool of communication because it provides a rich source of social information. Although some signals (e.g., facial expressions of emotion and mental states) can be deployed voluntarily and strategically to negotiate social situations, other signals (e.g., those indicating social traits such as dominance, trustworthiness, and attractiveness) are transmitted involuntarily by the phenotypic morphology of the face. The consequences of voluntary and involuntary signaling are significant for individuals (e.g., mate choice, occupational opportunities, sentencing decisions and so forth) and for groups (e.g., voting preferences, effective within-culture and cross-culture communication). However, humans are highly adaptive social beings; like other social animals, they can camouflage these involuntary morphology-based signals to optimize success within their ecological niche. We addressed two questions: which specific facial movements modulate social perceptions of mental states ('thinking,' 'interested,' 'bored' and 'confused') and social traits ('attractiveness,' 'trustworthiness' and 'dominance'), and when in time the face transmits movements that rapidly signal approach-avoidance versus a wider range of social signals. We found, across the Western Caucasian (WC) and East Asian (EA) cultures, that earlier, simpler, biologically rooted signals (wide-opened eyes; wrinkled nose) support discrimination of elementary categories (e.g. approach/avoidance), whereas later, more complex signals discriminate categories important for social interactions (e.g. facial expressions of emotion). Across WC and EA cultures, we also found common facial expressions of interest and boredom that facilitate cross-cultural communication, but a culture-specific expression of confusion that hinders cross-cultural communication, though not communication within culture.
Finally, we found that facial movements modulating the social attractiveness, trustworthiness and dominance of faces could trump the default social perceptions arising from their morphology.
Exploitation Route The outcomes of the research are generative models, i.e. dynamic mathematical models of facial expressions that can be mapped onto any 3D face. Our models control how changes in the dynamics of facial movements change the perceived intensity of each emotion, mental state and social trait. Thus, our generative face models represent novel psychophysical laws for the social sciences; these laws predict the perception of emotions, mental states and social traits on the basis of dynamic face identities. We are currently engaging with a social robotics company (Furhat Robotics, http://www.furhatrobotics.com) to transfer our models onto their social robots to achieve realistic and culture-sensitive models of facial expressions.
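The idea that dynamic parameters of a facial movement control perceived intensity can be sketched in a toy form: model one action unit's activation over the expression's time course with a parameterised temporal profile, and vary its amplitude. The Gaussian profile, parameter names, and the assumption that perceived intensity tracks peak activation are illustrative choices for this sketch, not the project's actual generative models.

```python
import numpy as np

# Illustrative sketch (not the authors' actual model): each facial action
# unit gets a temporal activation profile parameterised by amplitude,
# peak latency and width. Varying these dynamic parameters changes the
# signal a perceiver would rate.

def activation(t, amplitude, peak, width):
    """Gaussian-shaped activation curve for one action unit over time t."""
    return amplitude * np.exp(-((t - peak) ** 2) / (2 * width ** 2))

t = np.linspace(0.0, 1.0, 101)  # normalised expression time course

# Same movement, two different dynamic parameterisations.
subtle = activation(t, amplitude=0.3, peak=0.5, width=0.15)
intense = activation(t, amplitude=0.9, peak=0.5, width=0.15)

# Hypothetical read-out: perceived intensity tracks peak activation.
print(subtle.max(), intense.max())
```

In the actual research the mapping from movement dynamics to perceived intensity is derived from human perceptual judgments rather than assumed, which is what makes the resulting models "psychophysical laws".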
Sectors Communities and Social Services/Policy, Creative Economy, Digital/Communication/Information Technologies (including Software), Education, Healthcare, Leisure Activities (including Sports, Recreation and Tourism), Culture, Heritage, Museums and Collections, Security and Diplomacy

 
Description Our findings are dynamic mathematical models of facial expressions of emotion, mental states and social traits. These will be used in the digital economy to synthesize perceptually validated and culturally sensitive facial expressions on an avatar. Specifically, we have engaged with Furhat Robotics (http://www.furhatrobotics.com) and are currently transferring our findings to their social robot. Simply stated, this involves mapping our dynamic facial expressions onto the face of the Furhat robot and validating their accurate categorization. This is ongoing work and we will provide updates in due course.
First Year Of Impact 2013
Sector Digital/Communication/Information Technologies (including Software)
Impact Types Cultural, Societal, Economic