ICVGIP: Facial Expression Synthesis

Abstract: Imposing expressions on expression-neutral human face images is an interesting application in human-computer interaction, animation, entertainment, and related fields. The objective of this paper is to impose one of the six prototypic emotional expressions, i.e., Joy, Surprise, Disgust, Fear, Anger, and Sorrow, on a given expression-neutral face image. For this, we first establish individual models for each of the six prototypic expressions. These models are independent of the shape and texture, i.e., the identity, of the subjects in the training video sequences. Given an intensity of a particular expression, we obtain from the derived models the changes in shape and texture due to that expression. These changes are added to the automatically annotated test image on which the expression is to be imposed. The major contributions of the paper are: (1) a method for finding facial-structure-specific changes of a subject for imposing a particular expression, and (2) a nonlinear relationship between the expression intensity and the corresponding facial changes. The experimental results show that the proposed method performs better than a related method. The proposed method also preserves the identity of the subject while imposing a given expression on the expression-neutral face image.
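To make the pipeline concrete, here is a minimal Python sketch of the synthesis step described above, assuming an AAM-style shape/texture representation. The function names, array layouts, and the omitted warping step are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal sketch of the synthesis step, assuming an AAM-style
# shape/texture decomposition. Names and array shapes are assumptions,
# not the authors' actual code.
import numpy as np

def synthesize_expression(neutral_shape, neutral_texture,
                          delta_shape, delta_texture):
    """Add model-predicted expression changes to a neutral face.

    neutral_shape   : (2K,) stacked landmark coordinates from the
                      automatically annotated neutral test image.
    neutral_texture : (P,) shape-normalized pixel intensities.
    delta_shape, delta_texture : changes predicted by the expression
                      model for the requested expression and intensity.
    """
    new_shape = neutral_shape + delta_shape        # deform the landmarks
    new_texture = neutral_texture + delta_texture  # adjust the appearance
    # The final image would be obtained by warping new_texture onto
    # new_shape (e.g., a piecewise-affine warp), which is omitted here.
    return new_shape, new_texture
```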

First column: original images on which we want to impose expressions. Second column: original images displaying the expressions that we want to synthesize. Third column: images synthesized following the linear model of expression intensity. Fourth column: images synthesized following the linear model of expression intensity and facial structure parameters. Fifth column: images synthesized following the non-linear model of expression intensity and facial structure parameters.


Discussion: For the same emotion, facial expressions differ among people with different facial structures. Most facial expression synthesis models impose the same expression for the same emotion on people with different facial structures. The second column of the figure shows an example of such a method. In this paper we propose a novel method that takes the different facial structures of different people into consideration and imposes different, more detailed expressions on different people, in the absence of actor performance data for the expression being imposed. Columns three and four show the images synthesized by our method. Column three assumes a linear relationship, and column four a non-linear relationship, between the (expression intensity, facial structure) pair and the corresponding changes in facial appearance. Notice that the non-linear assumption imposes a more detailed expression.
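The contrast between the two assumptions can be sketched as follows. The quadratic feature map is only one possible choice of nonlinearity, assumed here for illustration; W_lin and W_nl stand for regression matrices that would be learned from the training sequences.

```python
# Illustrative contrast between the linear and non-linear assumptions.
# W_lin and W_nl are regression matrices learned from training data;
# the quadratic feature map is an assumed example of a nonlinearity,
# not necessarily the one used in the paper.
import numpy as np

def predict_changes_linear(intensity, structure, W_lin):
    """Linear model: changes depend linearly on (intensity, structure)."""
    x = np.concatenate(([intensity], structure, [1.0]))  # bias term
    return W_lin @ x

def predict_changes_nonlinear(intensity, structure, W_nl):
    """Non-linear model: second-order terms couple the intensity with
    the facial-structure parameters, allowing structure-specific detail."""
    base = np.concatenate(([intensity], structure))
    quad = np.outer(base, base)[np.triu_indices(base.size)]  # 2nd-order terms
    x = np.concatenate((base, quad, [1.0]))
    return W_nl @ x
```

The key difference is the intensity-times-structure interaction terms in the non-linear model: they let the same intensity produce different appearance changes for different facial structures, which a purely linear map cannot do.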

Possible application areas: The capability of imposing emotional expressions on a given expression-neutral face image can be utilized in fields such as animation, human-computer interaction, robotics, and entertainment. The two videos below show examples of synthesized (animated) expressions imposed on given expression-neutral face images. This expression generation capability, together with facial expression recognition, enables a computer to interact with its users in a more human-like manner.

Query/Suggestion: If you have any query or suggestion regarding this work, please do not hesitate to contact me at:

agarwal.swapna@gmail.com 

or

swapna_r@isical.ac.in


The video is generated from a single image of an expression-neutral face. Using our algorithm we have imposed the expression 'happiness' with increasing intensity on the given image. The frame rate has been kept low for better visualization. Music was added afterwards for a better viewing experience.
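A hedged sketch of how such an animation could be assembled: sweep the intensity from 0 to 1 and synthesize one frame per step. Here synthesize_frame is a hypothetical placeholder standing in for the full synthesis pipeline, and the OpenCV writer settings are assumptions chosen to match the low frame rate mentioned above.

```python
# Sketch of generating an expression animation from one neutral image.
# synthesize_frame is a hypothetical stand-in for the synthesis pipeline;
# it must return a uint8 BGR image of the same size as neutral_img.
import numpy as np
import cv2

def render_expression_video(neutral_img, synthesize_frame,
                            out_path="happiness.avi", n_frames=30, fps=5):
    h, w = neutral_img.shape[:2]
    writer = cv2.VideoWriter(out_path,
                             cv2.VideoWriter_fourcc(*"MJPG"), fps, (w, h))
    # Sweep the expression intensity from neutral (0) to full (1).
    for intensity in np.linspace(0.0, 1.0, n_frames):
        frame = synthesize_frame(neutral_img, intensity)
        writer.write(frame)
    writer.release()
```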


The video is generated from a single image of an expression-neutral face. Using our algorithm we have imposed the expression 'disgust' with increasing intensity on the given image. The frame rate has been kept low for better visualization. Music was added afterwards for a better viewing experience.