Projects

Encounters with manipulated and fake images are very common these days, and most of these images alter the audience's perception of reality. In this project, we take a deeper dive into what that change is and address questions such as: what does a viewer see when they look at a fake image, and can we identify cues that help improve the perception of real vs. fake?

In the post-truth era, anything we see can be misleading, especially visual media such as images and videos. This project aims to tell the story behind the origin of these media objects and their history of evolution on the internet.

Due to the success of Generative Models in creating synthetic human faces, understanding how far generated faces are from real faces has become interesting again. For applications where composite faces are used for de-identification, or where fake face images are created to pass off as real, this analysis is relevant to both the adversary and the detector.
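
As an illustration only (not necessarily the project's method), one way to quantify how far a set of generated faces lies from a set of real ones is to compare feature embeddings of the two sets, for example with a Fréchet distance between Gaussian fits of the embeddings. The embedding source, array shapes, and toy data below are assumptions.

```python
import numpy as np
from scipy import linalg

def frechet_distance(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """Fréchet distance between Gaussians fitted to two sets of embeddings.

    Both inputs are (n_samples, feat_dim) arrays, e.g. face embeddings from
    any pretrained feature extractor (assumed here, not specified by the project).
    """
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)

    diff = mu_r - mu_f
    covmean, _ = linalg.sqrtm(cov_r @ cov_f, disp=False)  # matrix square root
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from numerical noise

    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * covmean))

# Toy usage: random vectors stand in for real vs. generated face embeddings.
rng = np.random.default_rng(0)
real = rng.normal(0.0, 1.0, size=(500, 64))
fake = rng.normal(0.3, 1.1, size=(500, 64))
print(f"Fréchet distance: {frechet_distance(real, fake):.3f}")
```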

Predicting facial attributes and personality traits using human perception as a yardstick is an important research direction in the Human-AI interaction space. Understanding what AI-based models can reliably predict, and where their errors tend to cluster, requires us to backtrack a few steps and reconnect with the principles and results of perceptual studies in the Psychological Science literature.

With easily available tools and tutorials, retouched face images have become a common trend. Sometimes faces are retouched to an extent that can throw off certain face recognition algorithms. Motivated by this, and by social concerns such as the unrealistic perception of ideal looks, this project aims at detecting retouching effects in face images.

Visual Change Detection & Analysis

Detect, localize, and classify changes between two versions of an image. Formulate the problem using a feasible set of changes, depending on the domain, and analyze what changed between the two versions using techniques from Computer Vision.
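
A minimal sketch of what pairwise change localization could look like, using a per-pixel structural similarity (SSIM) map followed by connected components; scikit-image, the file paths, the threshold, and the minimum region area are assumptions for illustration, not the project's actual formulation.

```python
from skimage import io, measure
from skimage.metrics import structural_similarity

def localize_changes(path_a: str, path_b: str, ssim_thresh: float = 0.7, min_area: int = 50):
    """Return bounding boxes of regions that differ between two aligned image versions."""
    img_a = io.imread(path_a, as_gray=True)  # float images in [0, 1]
    img_b = io.imread(path_b, as_gray=True)

    # Per-pixel structural similarity map (1.0 = identical, lower = changed).
    _, ssim_map = structural_similarity(img_a, img_b, data_range=1.0, full=True)

    # Pixels whose local similarity falls below the threshold are flagged as changed.
    change_mask = ssim_map < ssim_thresh

    # Group changed pixels into connected regions and keep the non-trivial ones.
    labels = measure.label(change_mask)
    return [r.bbox for r in measure.regionprops(labels) if r.area >= min_area]

# Hypothetical usage with two versions of the same image:
# for (min_r, min_c, max_r, max_c) in localize_changes("v1.png", "v2.png"):
#     print(f"changed region: rows {min_r}-{max_r}, cols {min_c}-{max_c}")
```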