Past Projects
(1) Fully-Automatic and Real-Time Face and Eyes Tracking
The goal of this project was to develop a fully-automatic facial expression recognition system capable of handling uncontrolled, real-world settings. The most crucial step in the pipeline was to develop a fully-automatic face detection and facial landmark point tracking system that could handle faces in the wild. For this, several methodologies capable of real-time performance were proposed.
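For concreteness, a minimal sketch of real-time face detection with OpenCV's stock Haar-cascade detector is shown below. This is only an illustrative stand-in for the detection stage, not the specific detection and tracking methodology developed in this project.

```python
import cv2

# Illustrative sketch: real-time face detection on a webcam stream using
# OpenCV's bundled frontal-face Haar cascade (a stand-in detector, not the
# project's own method).
detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

cap = cv2.VideoCapture(0)                      # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # detectMultiScale returns (x, y, w, h) boxes for detected faces
    for (x, y, w, h) in detector.detectMultiScale(gray, scaleFactor=1.1,
                                                  minNeighbors=5):
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.imshow("faces", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()
```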
(2) Automatic 2D and 3D Deformable Model Building
A major drawback of statistical models of non-rigid, deformable objects, such as the active appearance model (AAM), is the pseudo-dense annotation of landmark points required for every training image, which is typically done in a tedious and error-prone manual process. To address this, the goal is simply to generate a set of synthetic images from a single given frontal image and use them to train synthetic 2D and 3D models. These models can then be used to automatically annotate unseen images, thereby eliminating the need for manual annotation.
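The core idea (one annotated frontal image, many synthetic training views) might look like the minimal sketch below; the function name, the choice of similarity warps, and the parameter ranges are my own illustrative assumptions, not the project's exact pipeline.

```python
import numpy as np
import cv2

def synthesize_views(image, landmarks, n_views=50, max_angle=20, max_scale=0.1):
    """Generate synthetic (image, landmark) pairs from one annotated frontal
    face by applying random similarity transforms. Hypothetical sketch only."""
    h, w = image.shape[:2]
    centre = (w / 2.0, h / 2.0)
    samples = []
    for _ in range(n_views):
        angle = np.random.uniform(-max_angle, max_angle)
        scale = 1.0 + np.random.uniform(-max_scale, max_scale)
        M = cv2.getRotationMatrix2D(centre, angle, scale)   # 2x3 warp matrix
        warped = cv2.warpAffine(image, M, (w, h))
        # Apply the same transform to the landmark points (N x 2 array),
        # so each synthetic image comes with its annotation for free.
        pts = np.hstack([landmarks, np.ones((len(landmarks), 1))])
        samples.append((warped, pts @ M.T))
    return samples
```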
(3) Head pose estimation
Accurate estimation of the head pose (yaw, pitch and roll) can make many face-related applications considerably simpler and more reliable. It is essentially the first step for any application that must cope with pose variation (for example, face or facial expression recognition). A simple data-intensive approach that relies on accurate 2D data, generated from a well-aligned 3D model, can easily be used to train an accurate head pose estimator.
[ICCV2011]
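The data-intensive idea described above, regressing the three angles directly from 2D landmark data generated with a well-aligned 3D model, could be sketched roughly as follows. The file names, the ridge regressor, and the units are assumptions made purely for illustration.

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical sketch of a data-driven head pose estimator:
# X_train: (n_samples, 2 * n_landmarks) flattened 2D landmark coordinates,
#          assumed to be generated by projecting an aligned 3D face model
# y_train: (n_samples, 3) ground-truth yaw, pitch, roll (e.g. in degrees)
X_train = np.load("landmarks_2d.npy")
y_train = np.load("pose_angles.npy")

model = Ridge(alpha=1.0).fit(X_train, y_train)   # multi-output regression

def estimate_pose(landmarks_2d):
    """landmarks_2d: (n_landmarks, 2) array for a single face."""
    yaw, pitch, roll = model.predict(landmarks_2d.reshape(1, -1))[0]
    return yaw, pitch, roll
```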
(4) Face Recognition
Pose variation is one of the most crucial factors limiting the utility of current state-of-the-art face recognition systems. Broadly speaking, there are two strategies to explicitly address the pose problem: (1) pose normalization and (2) gallery augmentation. Both strategies have 2D and 3D variants that can be used efficiently to improve face recognition under pose variation.
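The 2D flavour of pose normalization can be pictured as a simple landmark-based alignment of the input face to a canonical frontal template. The sketch below is an illustrative simplification under that assumption (the template and the similarity-transform choice are mine), not the exact method used.

```python
import numpy as np
import cv2

def pose_normalize(face_image, landmarks, template_landmarks, out_size=(128, 128)):
    """Estimate a similarity transform from the detected landmarks to a
    canonical frontal template and warp the face accordingly.
    Template landmarks and output size are illustrative assumptions."""
    M, _ = cv2.estimateAffinePartial2D(
        landmarks.astype(np.float32),
        template_landmarks.astype(np.float32))
    return cv2.warpAffine(face_image, M, out_size)
```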
I also implemented Cosine Similarity Metric Learning (CSML) and the Tied Factor Analysis method for the task of face recognition and verification. Currently, I am exploring these (and other related) learning-based face recognition/verification methods.
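For reference, the scoring side of CSML reduces to a cosine similarity computed after a learned linear projection. The minimal sketch below shows only that scoring step; how the projection matrix is learned is omitted, and the decision threshold is chosen arbitrarily for illustration.

```python
import numpy as np

def csml_score(x1, x2, A):
    """Cosine similarity between two feature vectors after a learned linear
    projection A, as in CSML-style verification (learning of A omitted)."""
    u, v = A @ x1, A @ x2
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

def verify(x1, x2, A, threshold=0.5):
    """Declare 'same person' if the projected cosine similarity exceeds a
    threshold; the threshold value here is purely illustrative."""
    return csml_score(x1, x2, A) > threshold
```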
(5) Facial Expression Analysis
My initial research involved exploring various AAM fitting/tracking algorithms for facial expression recognition.
[ACII2009]
Later, in collaboration with Mr. Abhinav Dhall (PhD student, ANU), I have been exploring the use of a Structural Similarity Index Measure (SSIM) based distance metric for facial expression analysis.
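The general idea of using SSIM as a distance between aligned face images can be sketched as below; the exact formulation used in our work may differ from this simplification.

```python
from skimage.metrics import structural_similarity as ssim

def ssim_distance(face_a, face_b):
    """Turn SSIM (a similarity measure) into a distance between two aligned,
    same-sized grayscale face images. Illustrative sketch only."""
    return 1.0 - ssim(face_a, face_b,
                      data_range=face_a.max() - face_a.min())
```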
We also participated in the emotion recognition tasks of the FG2011 Facial Expression Recognition and Analysis Challenge (FERA2011).
Team ANU was ranked 5th in this event.
[FERA2011]
Currently, I am working on the problem of pose-invariant facial expression analysis.
(6) A little fun with Faces
In another work, I proposed a facial expression transfer approach that relies on learning the mapping between the parameters of two completely independent deformable face models. This facilitates a ‘meaningful transfer’ of expressions by transferring the changes induced in both the shape and texture of the face, while preserving person-specific qualities and mannerisms.
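One way to picture the parameter mapping is as a regression between the two models' parameter spaces. The linear least-squares sketch below is an illustrative simplification of that idea (the actual approach handles both shape and texture parameters, and the function names here are hypothetical).

```python
import numpy as np

def learn_parameter_mapping(P_src, P_tgt):
    """Least-squares linear map W from source-model parameters to
    target-model parameters, fitted on corresponding frames.
    P_src: (n_frames, d_src), P_tgt: (n_frames, d_tgt). Sketch only."""
    W, *_ = np.linalg.lstsq(P_src, P_tgt, rcond=None)
    return W                                   # shape (d_src, d_tgt)

def transfer_expression(p_neutral_src, p_expr_src, p_neutral_tgt, W):
    """Map the source expression displacement into the target model's
    parameter space and add it to the target's neutral parameters."""
    return p_neutral_tgt + (p_expr_src - p_neutral_src) @ W
```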