Analysing Dynamic Change in Faces

Lead Research Organisation: University College London
Department Name: Psychology

Abstract

Humans are very good at understanding and interpreting the motion of other people's faces. We can effortlessly recognise emotions and interpret subtle facial behaviours such as sardonic smiles, thoughtful frowns or questioning looks, but the question remains: how do we do this? We need new computer-based tools to explore this fascinating area of psychology. In this project we will develop a new form of three-dimensional camera system that will allow us to record the movements of people's faces and then process this video information to discover the components of movement that make them up. Once we can identify the parts of movement that add together to make a familiar facial expression, we can use this to create new faces; in much the same way as a music mixing desk allows you to blend together different sounds, we will have software that allows us to mix new faces with whatever expressions we select. Using this new tool we can then carry out experiments to look at how we process faces and imitate other people's facial movements. We will examine how observing the movement in one person's face can be translated into movements of our own face to imitate the action. Because the faces we use are created in the computer, we can manipulate them in any way we like. This new technology will allow us to address a large set of basic questions. Can we imitate a person if the face is seen only from the side, or if it is shown upside down? Do we do better when we imitate ourselves, a friend or a stranger? We can even create caricatures of faces, where we exaggerate particular movements, to evaluate how these facial gestures are represented in the human face processing system. A better understanding of how imitation works will help us understand social behaviours and their development, and also help in developing computer systems that can both recognise and react to our facial expressions. The new face mixing software will also have commercial applications; for example, it could be of use in the computer games and entertainment industry. Movements from one person's face can be used as instructions to make another person's face perform the same movement. This would allow, for example, a voice actor to control the movements of a character's face in addition to simply providing the expressive dialogue, the generation of high-quality realistic synthetic actors, or faster, more efficient ways to video conference over your mobile phone.
 
Description Our focus was on the development and use of new tools for facial motion analysis and expression mapping between faces. Faces vary in colour as well as in image brightness, but colour is not used effectively in most image motion or stereo algorithms, so we developed a new approach to image motion analysis that characterised the bright-dark, yellow-blue and red-green opponent channels of the human colour system as chromatic derivatives. These derivatives were incorporated into our existing spatio-temporal brightness derivative method for motion and binocular disparity calculation, resulting in improved performance.
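To illustrate the general idea, the sketch below extends a standard gradient-based (Lucas-Kanade style) optical flow estimator so that the derivative constraints from all three opponent channels are pooled into a single least-squares solve. The opponent-channel weights and function names are illustrative assumptions, not the coefficients or implementation used in the project.

```python
import numpy as np

def opponent_channels(rgb):
    """Convert an RGB frame (H, W, 3) into approximate bright-dark,
    red-green and yellow-blue opponent channels. These weights are an
    illustrative assumption, not the project's actual coefficients."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    return np.stack([(r + g + b) / 3.0,        # bright-dark (luminance)
                     r - g,                    # red-green
                     (r + g) / 2.0 - b],       # yellow-blue
                    axis=-1)

def flow_at_pixel(prev, curr, y, x, win=7):
    """Lucas-Kanade style least-squares flow at one interior pixel, pooling
    the spatio-temporal derivative constraints from all three channels.
    prev, curr: opponent-channel frames of shape (H, W, 3)."""
    half = win // 2
    rows_A, rows_b = [], []
    for c in range(prev.shape[-1]):
        Iy, Ix = np.gradient(prev[..., c])      # spatial derivatives
        It = curr[..., c] - prev[..., c]        # temporal derivative
        sl = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
        rows_A.append(np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1))
        rows_b.append(-It[sl].ravel())
    A, b = np.vstack(rows_A), np.concatenate(rows_b)
    uv, *_ = np.linalg.lstsq(A, b, rcond=None)  # solve A [u, v]^T = b
    return uv                                   # (u, v) displacement
```

Stacking the constraints from the chromatic channels alongside the luminance constraints is what gives the colour information a direct vote in the motion estimate, rather than discarding it as most brightness-only methods do.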

The prime motivation of the computer vision work was to build tools supporting new methods for studying the perception of facial motion. A major aim was to generate a photorealistic average avatar, allowing the separation of facial motion from facial form. This was achieved using 2D image-based performance-driven animation. We constructed a photorealistic avatar using Principal Components Analysis (PCA) over vectors encoding the differences between single frames of a movie sequence and a reference frame. This delivers an expression space for a given person. We then examined the psychological validity of a PCA-based expression space.
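A minimal sketch of such an expression space is shown below, assuming the per-frame encoding is simply the vectorised pixel difference from a neutral reference frame taken as the first frame; the published method may encode shape and appearance differently, and the function names are hypothetical.

```python
import numpy as np

def build_expression_space(frames, n_components=20):
    """Build a PCA expression space from a movie of one face.
    frames: array (T, H, W, C). The first frame is taken as the neutral
    reference here, which is an assumption of this sketch."""
    T = frames.shape[0]
    X = frames.reshape(T, -1).astype(np.float64)
    diffs = X - X[0]                         # difference from reference frame
    mean = diffs.mean(axis=0)
    # SVD of the centred differences gives the principal components.
    _, _, Vt = np.linalg.svd(diffs - mean, full_matrices=False)
    basis = Vt[:n_components]                # expression basis (n, D)
    coords = (diffs - mean) @ basis.T        # per-frame coordinates (T, n)
    return X[0], mean, basis, coords

def mix_expression(ref, mean, basis, coords, shape):
    """Reconstruct (or mix) one frame from expression-space coordinates,
    e.g. a blend such as 0.5 * coords[i] + 0.5 * coords[j]."""
    return (ref + mean + coords @ basis).reshape(shape)
```

In this representation, driving an avatar or "mixing" new expressions reduces to choosing coordinates in the low-dimensional space and reconstructing an image, which is what makes the mixing-desk analogy in the abstract workable.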

By adapting observers to facial images at the ends of a particular dimension of facial variation (e.g. the first principal component), we could shift the perceived expression of faces away from the adapting expression without shifting the perception of faces arrayed along a second, orthogonal dimension. This demonstrated adaptation within the expression space, and showed that images which were statistically orthogonal were also perceptually orthogonal.

The idea that faces are represented relative to a mean face, a standard model in face perception, raises the question of which set of faces the mean is constructed over. We built PCA spaces across individuals, rather than across expressions, to investigate "family resemblance" between different classes of faces.
We used a novel technique of mapping a vector representing the deviation of a male face from the male mean into a female face space. This resulted in a female "sibling". We showed that these "sibling pairs" looked more alike than random pairings, indicating that "family resemblance" may be encoded by similar vectors referenced to the averages of different classes of faces.
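Read literally, the mapping might be sketched as follows: a hypothetical helper that takes a male face's deviation from the male mean and re-applies it at the female mean, optionally re-expressing the deviation through each class's own PCA basis. The names and the basis-transfer step are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def make_sibling(male_face, male_mean, female_mean,
                 male_basis=None, female_basis=None):
    """Hypothetical sketch: carry a male face's deviation from the male
    mean over to the female mean, yielding a female 'sibling'.
    male_basis, female_basis: optional PCA bases of shape (n, D); if given,
    the deviation is re-expressed component-by-component in female space."""
    deviation = male_face - male_mean
    if male_basis is not None and female_basis is not None:
        coeffs = male_basis @ deviation       # coordinates in the male space
        deviation = female_basis.T @ coeffs   # rebuilt in the female space
    return female_mean + deviation
```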

The same technology can be used to visualise our prejudices. We found that the average faces of Conservative and Labour MPs were indistinguishable. However, average faces rated as strongly Labour or strongly Conservative did look distinctly different, and were correctly matched to their stereotypical category by participants in a follow-up experiment.

Our ability to imitate facial expressions is puzzling, as we rarely see our own faces. We tested the ability of participants to identify a facial action, projected onto a computer graphic avatar, as having been generated by themselves, a friend or another person. If recognition were based on visual experience, participants should find it easiest to recognise a friend. However, we found that participants could recognise both themselves and their friends from motion alone when the faces were upright, but only themselves when the faces were upside down. Disrupting the timing of the motion showed that self-recognition was based on rhythmic cues we have about our own facial motion.

We also undertook extensive public engagement during the project, with media coverage of our results and a presentation at the Royal Society Summer Exhibition in 2011.
Exploitation Route The technology allows the mapping of facial movement between photorealistic faces, enabling the generation of a photorealistic performance-driven avatar. It can also be used for facial motion analysis. The work is currently being extended by one EPSRC student (CoMPLEX DTC), one BBSRC DTC student and one student funded by UCL and NTT (Japan). We have been invited to advertise the products of this research on the UK's Security and Resilience Industry Suppliers Community (RISC) website: http://www.riscuk.org/academia/academic-marketplace/facial-motion-analysis/
Sectors Digital/Communication/Information Technologies (including Software); Leisure Activities, including Sports, Recreation and Tourism; Security and Diplomacy

 
Description The work has a number of potential applications, including mapping faces between different views and mapping between different lighting conditions. This work is now supported by NTT Japan through a studentship jointly funded by NTT and UCL. It can also be used for performance-driven animation and the analysis of facial action. This aspect will be progressed through further collaboration with computer scientists and through knowledge transfer activity. A project has been presented through the Security and Resilience Industry Suppliers Community (RISC)'s Academic Marketplace. We also undertook extensive public engagement during the project, with media coverage of our results and a presentation at the Royal Society Summer Exhibition in 2011.
First Year Of Impact 2011
Sector Security and Diplomacy
Impact Types Cultural

 
Description Australian Research Council Research Grant
Amount £70,000 (GBP)
Funding ID DP0986898 
Organisation Australian Research Council 
Sector Public
Country Australia
Start 10/2009 
End 09/2012
 
Description Research Project Grant Full
Amount £244,000 (GBP)
Funding ID RPG-2013-218 
Organisation The Leverhulme Trust 
Sector Charity/Non Profit
Country United Kingdom
Start 10/2013 
End 01/2017
 
Description Royal Society Summer Exhibition 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? Yes
Geographic Reach National
Primary Audience Public/other audiences
Results and Impact The exhibit generated surprise that it was possible to produce and drive a photorealistic avatar. This prompted discussion about what such a tool might be used for.

The work was selected for press attention.
Year(s) Of Engagement Activity 2010