Research

1) Motion Pattern Segmentation for Crowded Scene Analysis (Ph.D. Research @ MIT, MAHE, Manipal, India):

Crowded scene analysis studies the dynamics of crowds in video, with the primary aim of building an efficient automated crowd video surveillance system that ensures the crowd’s safety and security. This research focuses on improving the efficiency of a low- to mid-level analysis step known as crowd motion pattern segmentation. Based on the type of motion data extracted from the input video, the thesis contributes two efficient crowd motion pattern segmentation approaches, one in the compressed domain and one in the pixel domain. The compressed-domain approach uses a Graph-based Collective Merging algorithm to segment the motion vector fields decoded from the input crowd video. The pixel-domain approach uses a Spatio-Angular Density-based Clustering Algorithm to group the keypoint-based trajectories extracted from the input crowd video. In addition, the thesis contributes a novel method that classifies a given crowded scene from the global motion patterns of the objects within it, using a Histogram of Angular Deviation-based trajectory feature vector.
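A minimal sketch of the pixel-domain step, assuming each keypoint trajectory is summarized by its mean position and its net motion direction: DBSCAN with a combined spatial-plus-angular distance stands in here for the Spatio-Angular Density-based Clustering Algorithm, and the parameter values (angle_weight, eps, min_samples) are purely illustrative.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def spatio_angular_distance(a, b, angle_weight=50.0):
    """Distance between two trajectory summaries [x, y, theta].

    Combines the Euclidean distance between mean positions with the
    wrapped difference between motion directions, scaled by an
    illustrative angle_weight.
    """
    spatial = np.hypot(a[0] - b[0], a[1] - b[1])
    dtheta = np.abs(a[2] - b[2])
    dtheta = min(dtheta, 2 * np.pi - dtheta)   # wrap to [0, pi]
    return spatial + angle_weight * dtheta

def segment_trajectories(trajectories, eps=60.0, min_samples=5):
    """Group keypoint trajectories into motion patterns.

    trajectories: list of (N_i, 2) arrays of (x, y) keypoint positions.
    Returns one cluster label per trajectory (-1 = noise).
    """
    summaries = []
    for traj in trajectories:
        traj = np.asarray(traj, dtype=float)
        mean_xy = traj.mean(axis=0)
        dx, dy = traj[-1] - traj[0]            # net displacement
        theta = np.arctan2(dy, dx)             # net motion direction
        summaries.append([mean_xy[0], mean_xy[1], theta])
    summaries = np.asarray(summaries)
    return DBSCAN(eps=eps, min_samples=min_samples,
                  metric=spatio_angular_distance).fit_predict(summaries)
```

The angular term keeps spatially close but oppositely moving trajectories, such as two pedestrian streams passing each other, in separate clusters, which a purely spatial clustering would merge.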

Pixel-Domain Crowd Motion Pattern Segmentation - A.K.Pai et al., IEEE Access 2020 [code]

2) Crowd Density Estimation using Texture Features (Ph.D. Course Work Project @ MIT, MAHE, Manipal, India):

    • Two efficient texture features were used: uniform Local Binary Patterns (LBP) and Gabor features.

    • The features were computed separately on the input crowd images and then concatenated.

    • The resulting feature vector was used to train an SVM classifier.

    • The proposed technique was tested on the PETS 2009 dataset.
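A minimal sketch of this pipeline using scikit-image and scikit-learn, assuming grayscale crowd image blocks and their density-class labels are already available; the LBP radius, Gabor filter bank, and SVM settings below are illustrative rather than the exact parameters used in the paper.

```python
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.filters import gabor
from sklearn.svm import SVC

def uniform_lbp_histogram(image, radius=1, n_points=8):
    """Histogram of uniform LBP codes for one grayscale image block."""
    lbp = local_binary_pattern(image, n_points, radius, method="uniform")
    n_bins = n_points + 2                      # uniform patterns + "other"
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist

def gabor_features(image, frequencies=(0.1, 0.2, 0.3),
                   thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean and variance of Gabor response magnitudes over a small filter bank."""
    feats = []
    for f in frequencies:
        for t in thetas:
            real, imag = gabor(image, frequency=f, theta=t)
            mag = np.hypot(real, imag)
            feats.extend([mag.mean(), mag.var()])
    return np.asarray(feats)

def extract_feature(image):
    """Concatenate the two texture descriptors into one feature vector."""
    return np.concatenate([uniform_lbp_histogram(image), gabor_features(image)])

def train_density_classifier(images, labels):
    """Train an SVM on concatenated LBP + Gabor features."""
    X = np.stack([extract_feature(img) for img in images])
    clf = SVC(kernel="rbf")
    clf.fit(X, labels)
    return clf
```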

Block-wise Single Image Crowd Density Estimation - A.K.Pai et al., 14th IEEE International Conference on Advanced Video and Signal Based Surveillance (AVSS 2017) @ Lecce, Italy

3) Dense Depth Map Estimation based on the Extraction of Image Features (Guest Scientist @ IMS, Leibniz Universität Hannover, Germany):

    • Input: A pair of stereo images with a wide baseline.

    • SIFT keypoint correspondences, RANSAC-based outlier rejection, and image rectification were performed initially.

    • The DAISY descriptor was used to extract dense image features from the rectified images.

    • The final disparity map was obtained by matching the DAISY features across the two images and applying a graph-cut technique.
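A minimal sketch of the initial stage using OpenCV, covering SIFT matching, RANSAC-filtered fundamental matrix estimation, and uncalibrated rectification; the DAISY matching and graph-cut disparity optimization of the full pipeline are omitted, and the ratio-test and RANSAC thresholds are illustrative.

```python
import cv2
import numpy as np

def rectify_wide_baseline_pair(img_left, img_right):
    """Rectify an uncalibrated wide-baseline stereo pair.

    Steps: SIFT keypoint matching, RANSAC-filtered fundamental matrix,
    then uncalibrated rectification homographies for both images.
    """
    sift = cv2.SIFT_create()
    kp1, des1 = sift.detectAndCompute(img_left, None)
    kp2, des2 = sift.detectAndCompute(img_right, None)

    # Ratio-test filtered matches between the two images.
    matcher = cv2.BFMatcher(cv2.NORM_L2)
    matches = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]

    pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

    # Fundamental matrix with RANSAC rejects outlier correspondences.
    F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
    inliers1 = pts1[mask.ravel() == 1]
    inliers2 = pts2[mask.ravel() == 1]

    # Uncalibrated rectification: homographies that align epipolar lines.
    h, w = img_left.shape[:2]
    _, H1, H2 = cv2.stereoRectifyUncalibrated(inliers1, inliers2, F, (w, h))
    rect_left = cv2.warpPerspective(img_left, H1, (w, h))
    rect_right = cv2.warpPerspective(img_right, H2, (w, h))
    return rect_left, rect_right
```

DAISY descriptors for the subsequent dense-matching stage are available in the opencv-contrib package (cv2.xfeatures2d.DAISY_create); the graph-cut optimization would then operate on matching costs computed from those descriptors.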

Dense Disparity Map from Stereo Cameras with Wide Baseline