Affective Computing Models: from Facial Expression to Mind-Reading ("ACMod")
Lead Research Organisation:
Bournemouth University
Department Name: Faculty of Media and Communication
Abstract
Humans exhibit and communicate a wide range of affective and cognitive states. Mind reading allows humans to predict, model, and interpret each other's behaviour beyond the capabilities of other animals; this claim arguably still holds despite recent research suggesting that apes can pass false-belief tasks. Mind reading is therefore fundamental to human social interaction and communication. Among its cues, facial expression is one of the most important: it conveys critical information reflecting mental states and, by one widely cited estimate, accounts for 55% of the information people use when perceiving others' feelings and attitudes. Duchenne studied the electro-stimulation of individual facial muscles in 1862, and ten years later Darwin published "The Expression of the Emotions in Man and Animals", making a case for the shared ancestry of facial expressions. Since then, research on facial expressions has attracted substantial attention from disciplines such as psychology, neuroscience, and computer science. In recent years, advances in computing technology and the availability of massive online facial images and videos have boosted deep learning-based facial expression recognition (FER). To date, automatic FER has made excellent progress: from static images to dynamic video analysis, from acted/posed to spontaneous expressions, and from macro-expressions to micro-expressions.
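To make the deep learning-based FER mentioned above concrete, the sketch below shows a minimal classifier, assuming PyTorch, 48x48 grayscale face crops (FER2013-style input), and the common seven-way basic-emotion label set; TinyFERNet and its layer sizes are illustrative assumptions, not the project's model.

import torch
import torch.nn as nn

# Seven-way label set commonly used in FER benchmarks
# (Ekman's six basic emotions plus neutral).
EMOTIONS = ["angry", "disgust", "fear", "happy", "sad", "surprise", "neutral"]

class TinyFERNet(nn.Module):
    def __init__(self, n_classes=len(EMOTIONS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # 12 -> 6
        )
        self.classifier = nn.Linear(128 * 6 * 6, n_classes)

    def forward(self, x):              # x: (batch, 1, 48, 48) grayscale faces
        h = self.features(x).flatten(1)
        return self.classifier(h)      # raw logits over the emotion labels

# Usage with a random stand-in image:
logits = TinyFERNet()(torch.randn(1, 1, 48, 48))
print(EMOTIONS[logits.argmax(dim=1).item()])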
In summary, the key remaining challenges include:
1) A substantial body of psychological work supports appraisal theories of emotion for detecting internal emotion through facial behaviour, yet research in computer science focuses mainly on appearance-based or geometric facial modelling and ignores the underlying biologically driven mechanisms;
2) Data from different cultures are scarce, which hinders the development of machine learning methods;
3) Micro-expressions, the rapid (1/25 to 1/3 of a second), subtle, and involuntary facial expressions that are difficult to suppress deliberately, have not been studied with respect to cross-cultural inconsistency (see the timing sketch after this list).
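To make the timing constraint in point 3 concrete, the short sketch below (plain Python) converts the 1/25 to 1/3 second duration into frame counts at several capture rates; the 200 fps figure is an assumption reflecting the high-speed cameras commonly used to record micro-expression datasets.

# A micro-expression lasting 1/25 - 1/3 s spans only a handful of frames
# at ordinary capture rates, which is why high-speed cameras are used.
for fps in (25, 30, 200):
    lo = fps * (1 / 25)   # frames at the shortest duration
    hi = fps * (1 / 3)    # frames at the longest duration
    print(f"{fps:>3} fps: {lo:.1f} - {hi:.1f} frames")

At 25 fps the shortest micro-expression occupies a single frame, which motivates both high-speed capture and temporally sensitive models.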
Publications

Huang J (2024) Point'n Move: Interactive scene object manipulation on Gaussian splatting radiance fields, IET Image Processing.

Sheibanifard A (2025) An end-to-end implicit neural representation architecture for medical volume data, PLOS ONE.
Title | GSDeformer: Direct, Real-time and Extensible Cage-based Deformation for 3D Gaussian Splatting
Description | We present GSDeformer, a method that achieves cage-based deformation on 3D Gaussian Splatting (3DGS). Our method bridges cage-based deformation and 3DGS using a proxy point cloud representation. The point cloud is created from the 3DGS, and deformations of the point cloud translate into transformations of the 3D Gaussians that comprise it. To handle the potential bending introduced by deformation, we employ a splitting process to approximate it. Our method does not extend or modify the core architecture of 3DGS, so it works with any existing trained vanilla 3DGS as well as its variants. We also automate cage construction from 3DGS for convenience. Experiments show that GSDeformer produces superior deformation results compared with current methods, is robust under extreme deformations, requires no retraining for editing, runs in real time (60 FPS), and extends to other 3DGS variants.
Type Of Material | Computer model/algorithm
Year Produced | 2024
Provided To Others? | Yes
Impact | It will benefit computer vision, computer graphics, VR/AR/MR/XR, SLAM, and generative AI.
URL | https://jhuangbu.github.io/gsdeformer/
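The record above describes cage-based deformation at a high level; the sketch below illustrates the core step, re-expressing each point in cage coordinates and replaying those coordinates against the deformed cage. It assumes NumPy and a single axis-aligned box cage with trilinear coordinates; GSDeformer's automated cage construction, proxy point cloud, covariance handling, and Gaussian splitting are not reproduced, and all function names are illustrative.

import numpy as np

def box_corners(cage_min, cage_max):
    """The 8 corners of an axis-aligned box cage, ordered by (x,y,z) bit pattern."""
    return np.array([[cage_max[i] if (c >> i) & 1 else cage_min[i] for i in range(3)]
                     for c in range(8)], dtype=float)

def trilinear_weights(p, cage_min, cage_max):
    """Trilinear coordinates of point p w.r.t. the box cage (one weight per corner)."""
    t = (p - cage_min) / (cage_max - cage_min)   # normalised position in [0,1]^3
    w = np.empty(8)
    for corner in range(8):
        bits = [(corner >> i) & 1 for i in range(3)]
        w[corner] = np.prod([t[i] if b else 1.0 - t[i] for i, b in enumerate(bits)])
    return w

def deform_points(points, cage_min, cage_max, deformed_corners):
    """Cage deformation: rebuild each point as its weighted combination
    of the *deformed* cage corners."""
    out = np.empty_like(points)
    for k, p in enumerate(points):
        out[k] = trilinear_weights(p, cage_min, cage_max) @ deformed_corners
    return out

# Usage: shear a unit cage containing stand-in Gaussian centres.
rng = np.random.default_rng(0)
centres = rng.uniform(0.0, 1.0, size=(100, 3))   # stand-in 3D Gaussian means
cmin, cmax = np.zeros(3), np.ones(3)
corners = box_corners(cmin, cmax)
corners[:, 0] += 0.5 * corners[:, 2]             # shear the cage top along x
new_centres = deform_points(centres, cmin, cmax, corners)

A general cage (as in the actual method) replaces the trilinear weights with generalised barycentric coordinates such as mean value coordinates, but the replay step against the deformed cage is the same.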