MEA: Micro-Expression Analysis

Micro-expressions are very rapid, involuntary facial expressions that reveal suppressed affect. They expose contradictions between a person's displayed facial expression and their underlying emotional state, enabling the recognition of suppressed emotions, and are important for understanding deceitful behavior. Psychologists have been studying them since the 1960s.

Attention to them is currently rising both in academia and in the media. However, while general facial expression recognition (FER) has been intensively studied in computer vision for years, little research has been done on automatically analyzing micro-expressions. The biggest obstacle to date has been the lack of a suitable database. In [1] we present a novel Spontaneous Micro-expression Database, SMIC, which includes 164 micro-expression video clips elicited from 16 participants. Micro-expression detection and recognition performance are provided as baselines. SMIC provides sufficient source material for comprehensive testing of automatic micro-expression analysis systems, which has not been possible with any previously published database.

We have recently proposed several spatiotemporal feature descriptors for micro-expression recognition.

1 Spatiotemporal Completed Local Quantized Pattern [2]

Spontaneous facial micro-expression analysis has become an active task for recognizing suppressed and involuntary facial expressions. Recently, Local Binary Patterns from Three Orthogonal Planes (LBP-TOP) has been employed for micro-expression analysis. However, LBP-TOP suffers from two critical problems that degrade performance: it extracts appearance and motion features from the sign-based difference between two pixels, ignoring other useful information, and it uses classical pattern types that may not be optimal for the local structure in some applications. This paper proposes SpatioTemporal Completed Local Quantized Patterns (STCLQP) for facial micro-expression analysis. Firstly, STCLQP extracts three kinds of information: sign, magnitude, and orientation components. Secondly, efficient vector quantization and codebook selection are developed for each component in the appearance and temporal domains to learn compact, discriminative codebooks that generalize the classical pattern types. Finally, based on the discriminative codebooks, spatiotemporal features of the sign, magnitude, and orientation components are extracted and fused. Experiments are conducted on three publicly available facial micro-expression databases, yielding interesting findings about neighboring patterns and the individual components. Compared with the state of the art, the experimental results demonstrate that STCLQP achieves a substantial improvement for analyzing facial micro-expressions.
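The sign/magnitude/orientation decomposition behind STCLQP can be sketched on a single frame. The snippet below is an illustrative simplification (four axis-aligned neighbors, one image, no learned codebooks; the real descriptor operates on three orthogonal spatiotemporal planes and quantizes each component with learned codebooks), and all names are hypothetical:

```python
import numpy as np

def local_components(img, radius=1):
    """Illustrative decomposition of local pixel differences into
    sign, magnitude and orientation codes, in the spirit of STCLQP.
    Hypothetical simplification: 4 neighbors, single frame."""
    h, w = img.shape
    c = img[radius:h - radius, radius:w - radius].astype(float)
    # differences to the four axis-aligned neighbors
    diffs = np.stack([
        img[radius:h - radius, 2 * radius:] - c,   # right
        img[:h - 2 * radius, radius:w - radius] - c,  # up
        img[radius:h - radius, :w - 2 * radius] - c,  # left
        img[2 * radius:, radius:w - radius] - c,   # down
    ])
    sign = (diffs >= 0).astype(int)              # sign component (binary)
    mag = np.abs(diffs)                          # magnitude component
    mag_code = (mag >= mag.mean()).astype(int)   # threshold by global mean
    # orientation component: quantized gradient direction per pixel
    gy, gx = np.gradient(img.astype(float))
    ori = np.arctan2(gy, gx)[radius:h - radius, radius:w - radius]
    ori_code = ((ori + np.pi) / (2 * np.pi) * 8).astype(int) % 8
    return sign, mag_code, ori_code
```

In the full method, histograms of such codes are built per component and fused after codebook selection; here the three codes are only returned side by side to show the decomposition.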

2 Spatiotemporal Local Binary Pattern with Integral Projection (STLBP-IP)

Recently, there has been increasing interest in inferring micro-expressions from facial image sequences. For micro-expression recognition, feature extraction is a critical issue. In this paper we propose a novel framework based on a new spatiotemporal facial representation for analyzing micro-expressions with subtle facial movement. Firstly, we propose an integral projection method based on difference images for obtaining horizontal and vertical projections, which preserves the shape attribute of facial images and increases discrimination between micro-expressions. Furthermore, we employ local binary pattern operators to extract appearance and motion features on the horizontal and vertical projections. Intensive experiments are conducted on three publicly available micro-expression databases to evaluate the performance of the method. Experimental results demonstrate that the new spatiotemporal descriptor achieves promising performance in micro-expression recognition.
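The projection step above can be sketched in a few lines: sum the difference image between two consecutive frames along each axis, then apply a 1-D LBP over the resulting profiles. This is a minimal sketch under those assumptions (function names and the 2-bit 1-D code are illustrative, not the paper's exact operators):

```python
import numpy as np

def integral_projections(frame_t, frame_t1):
    """Horizontal and vertical integral projections of the
    difference image between two consecutive frames (sketch)."""
    diff = frame_t1.astype(float) - frame_t.astype(float)
    vertical = diff.sum(axis=0)    # sum over rows -> one value per column
    horizontal = diff.sum(axis=1)  # sum over columns -> one value per row
    return horizontal, vertical

def lbp_1d(signal, radius=1):
    """Basic 1-D LBP: compare each sample with its two neighbors,
    yielding a 2-bit code in {0, 1, 2, 3}."""
    center = signal[radius:-radius]
    left = (signal[:-2 * radius] >= center).astype(int)
    right = (signal[2 * radius:] >= center).astype(int)
    return left * 2 + right
```

In the full descriptor such 1-D codes are histogrammed over the projections of a whole sequence, capturing both appearance (within a projection) and motion (across difference images).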

Appearance feature descriptor

Motion feature descriptor

3 Discriminative Spatiotemporal Local Binary Pattern with Improved Integral Projection [4, 6]

Recently, there has been increasing interest in inferring micro-expressions from facial image sequences. Due to the subtle facial movement of micro-expressions, feature extraction has become an important and critical issue for spontaneous facial micro-expression recognition. Recent works have usually used the spatiotemporal local binary pattern for micro-expression analysis. However, the commonly used spatiotemporal local binary pattern captures dynamic texture information but misses the shape attribute of face images. Moreover, these works extract spatiotemporal features from the global face region, ignoring the discriminative information between micro-expression classes. These problems seriously limit the application of the spatiotemporal local binary pattern to micro-expression recognition. In this paper, we propose a discriminative spatiotemporal local binary pattern based on an improved integral projection to resolve them. Firstly, we develop an improved integral projection that preserves the shape attribute of micro-expressions. Furthermore, the improved integral projection is incorporated with local binary pattern operators across the spatial and temporal domains; specifically, we extract novel spatiotemporal features that incorporate shape attributes into spatiotemporal texture features. To increase the discrimination of micro-expressions, we propose a new Laplacian-based feature selection method to extract discriminative information for facial micro-expression recognition. Intensive experiments are conducted on three publicly available micro-expression databases: CASME, CASME II, and SMIC. We compare our method with state-of-the-art algorithms; experimental results demonstrate that the proposed method achieves promising performance for micro-expression recognition.
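A Laplacian-based feature selection of the kind mentioned above can be sketched with the classical Laplacian score (He et al.): features whose values vary smoothly over a nearest-neighbor affinity graph of the samples get low scores and are kept. This is only an assumed instantiation; the paper's exact selection criterion may differ, and `sigma`, `k`, and the function name are illustrative:

```python
import numpy as np

def laplacian_score(X, sigma=1.0, k=5):
    """Laplacian score per feature (lower = better preserves local
    structure). X: (n_samples, n_features). Sketch only."""
    n = X.shape[0]
    # pairwise squared distances and heat-kernel affinity
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    S = np.exp(-d2 / (2 * sigma ** 2))
    # keep only each sample's k nearest neighbors (symmetrized)
    idx = np.argsort(d2, axis=1)[:, 1:k + 1]
    mask = np.zeros_like(S, dtype=bool)
    mask[np.repeat(np.arange(n), k), idx.ravel()] = True
    S = np.where(mask | mask.T, S, 0.0)
    D = S.sum(axis=1)            # degree vector
    L = np.diag(D) - S           # graph Laplacian
    scores = []
    for r in range(X.shape[1]):
        f = X[:, r]
        f = f - (f @ D) / D.sum()        # D-weighted mean removal
        num = f @ L @ f
        den = f @ (D * f)
        scores.append(num / den if den > 0 else np.inf)
    return np.array(scores)
```

Selecting the features with the smallest scores would then play the role of the discriminative selection step before classification.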


4 Motion magnification for micro-expression recognition [5, 7]

We have investigated methods for spotting micro-expressions from video data, to be used, e.g., as a preprocessing step prior to recognition. In [5, 7], we magnified micro-expression videos using motion magnification.
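The idea behind motion magnification can be sketched as an Eulerian-style amplification: temporally band-pass each pixel's intensity and add the amplified band back to the video. The published pipeline [5, 7] is more elaborate (spatial decomposition, per-level amplification factors), so the snippet below is only a minimal sketch with illustrative parameter values:

```python
import numpy as np

def magnify_motion(frames, alpha=10.0, low=0.4, high=4.0, fps=25.0):
    """Minimal Eulerian-style motion magnification sketch.
    frames: (T, H, W) float array. Band-pass each pixel's temporal
    signal with an FFT mask, then add the amplified band back."""
    T = frames.shape[0]
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    band = (freqs >= low) & (freqs <= high)       # pass-band mask
    F = np.fft.rfft(frames, axis=0)               # per-pixel spectrum
    filtered = np.fft.irfft(F * band[:, None, None], n=T, axis=0)
    return frames + alpha * filtered
```

With `alpha = 0` the video is returned unchanged; increasing `alpha` exaggerates the subtle motions in the chosen frequency band, which is what makes the surprise clip in the demo below visibly stronger after magnification.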

The work has been highlighted in international media such as MIT Technology Review and Daily Mail.

Magnified micro-expression demo: original video and magnified video (SMIC database, class: surprise).

References:

[1] Xiaobai Li, Tomas Pfister, Xiaohua Huang, Guoying Zhao, and Matti Pietikäinen. A spontaneous micro-expression database: inducement, collection and baseline. IEEE International Conference on Automatic Face and Gesture Recognition, pp. 1-6, 2013.

[2] Xiaohua Huang, Guoying Zhao, Xiaopeng Hong, Wenming Zheng and Matti Pietikäinen. Spontaneous facial micro-expression analysis using spatiotemporal completed local quantized patterns. Neurocomputing, 2016.

[3] Xiaohua Huang, Guoying Zhao, Xiaopeng Hong, Wenming Zheng and Matti Pietikäinen. Texture description with completed local quantized patterns. The 18th Scandinavian Conference on Image Analysis, pp. 1-10, 2013.

[4] Xiaohua Huang, Sujing Wang, Xin Liu, Guoying Zhao, Xiaoyi Feng and Matti Pietikäinen. Spontaneous Facial Micro-Expression Recognition using Discriminative Spatiotemporal Local Binary Pattern with an Improved Integral Projection. arXiv, 2016.

[5] Xiaobai Li, Xiaopeng Hong, Antti Moilanen, Xiaohua Huang, Tomas Pfister, Guoying Zhao and Matti Pietikäinen. Reading hidden emotions: spontaneous micro-expression spotting and recognition. arXiv:1511.00423v1, November 2015.

[6] Xiaohua Huang, Sujing Wang, Xin Liu, Guoying Zhao, Xiaoyi Feng and Matti Pietikäinen. Discriminative Spatiotemporal Local Binary Pattern with Revisited Integral Projection for Spontaneous Facial Micro-Expression Recognition. IEEE Transactions on Affective Computing, 2017.

[7] Xiaobai Li, Xiaopeng Hong, Antti Moilanen, Xiaohua Huang, Tomas Pfister, Guoying Zhao, Matti Pietikäinen. Towards reading hidden emotions: spontaneous micro-expression spotting and recognition. IEEE Transactions on Affective Computing, 2017.