Research
Research Interests
I am broadly interested in understanding human vision, perception and cognition in order to solve various problems in the domain of computer vision. My interdisciplinary research interests include:
Machine Learning & Pattern Recognition
Computer Vision
Medical Imaging / Image processing
Cognitive science
Research Aspirations
Research Projects
Breast Cancer Detection and Classification
Breast cancer (BrC) is among the diseases in which early diagnosis plays a life-saving role. Detection and classification of breast cancer medical images with the help of Computer-Aided Detection (CAD) enables doctors to make well-informed, timely decisions that can raise the survival rate.
In this project, different imaging modalities (mammograms, ultrasound and MRI images) will be analyzed using deep learning / machine learning approaches in order to aid radiologists in the robust diagnosis of BrC. An important aspect of this project is to explore ways to make deep learning transparent (Explainable AI, or XAI).
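One idea explored in this line of work (see the feature-fusion paper below) is combining deep learned features with handcrafted descriptors before classification. A minimal sketch of that fusion step, with purely hypothetical feature names and dimensions:

```python
import numpy as np

def fuse_features(deep_feats, handcrafted_feats):
    """Concatenate a deep feature vector with a handcrafted descriptor.

    Shapes and sources are illustrative; the actual pipeline and feature
    dimensions are described in the publications below.
    """
    return np.concatenate([deep_feats, handcrafted_feats])

# Hypothetical example: 512-D CNN embedding + 64-D texture descriptor
rng = np.random.default_rng(0)
deep = rng.standard_normal(512)        # e.g. penultimate-layer CNN activations
handcrafted = rng.standard_normal(64)  # e.g. GLCM / HOG texture statistics

fused = fuse_features(deep, handcrafted)
print(fused.shape)  # (576,)
```

The fused vector would then be fed to a conventional classifier; fusion at the feature level lets the handcrafted descriptors compensate where the learned representation is weak.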
Publications from the project:
Shahid Munir Shah, Rizwan Ahmed Khan, Sheeraz Arif, Unaiza Sajid. Artificial Intelligence For Breast Cancer Analysis: Trends & Directions. Computers in Biology and Medicine, Vol 142, March 2022. [Elsevier] (2020 IF = 4.589).
Shahid Munir Shah and Rizwan Ahmed Khan. Secondary Use of Electronic Health Record: Opportunities and Challenges. IEEE Access, Vol. 8, Pages 136947 - 136965, Aug 2020 [IEEE Access] [PDF Manuscript] (2020 IF = 3.367). Preprint, arXiv:2001.09479 [arXiv].
Unaiza Sajid, Rizwan Ahmed Khan, Shahid Munir Shah, Sheeraz Arif. Breast Cancer Classification using Deep Learned Features Boosted with Handcrafted Features. arXiv (2022) preprint, arXiv:2206.12815 [arXiv].
Automatic detection of Autism
Neuro-developmental disorders such as ASD (Autism Spectrum Disorder) are linked with a reduced ability to produce and perceive facial expressions. The defining criteria for autistic disorder, as set by the internationally accepted diagnostic handbooks ICD-10 (World Health Organization, 1992) and DSM-IV (American Psychiatric Association), are: abnormalities in social interaction, verbal and non-verbal communication impairments, and a limited range of interests and activities.
The objective of this research project is to identify autism spectrum disorder (ASD) in patients, especially children, at a very early stage using machine learning techniques such as classification, clustering and the recently introduced deep learning. To detect ASD in an individual, resting-state functional magnetic resonance imaging (rs-fMRI) data can be analyzed to investigate the neural patterns associated with ASD.
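A common way to turn rs-fMRI data into classifier input is to compute a functional-connectivity matrix between brain regions and use its unique entries as features. The sketch below illustrates this idea with random data; ROI counts, timepoints and the exact feature construction are assumptions, not the specifics of the published framework:

```python
import numpy as np

def connectivity_features(roi_timeseries):
    """Build a functional-connectivity feature vector from rs-fMRI ROI signals.

    roi_timeseries: array of shape (n_timepoints, n_rois).
    Returns the upper triangle of the ROI-by-ROI Pearson correlation matrix,
    a common feature representation for ASD-vs-control classification.
    """
    corr = np.corrcoef(roi_timeseries.T)   # (n_rois, n_rois) correlation matrix
    iu = np.triu_indices_from(corr, k=1)   # keep each ROI pair only once
    return corr[iu]

# Hypothetical data: 120 timepoints over 10 ROIs
rng = np.random.default_rng(1)
ts = rng.standard_normal((120, 10))
feats = connectivity_features(ts)
print(feats.shape)  # (45,) — 10 * 9 / 2 unique ROI pairs
```

Each feature is the correlation between the activity of one pair of regions; atypical connectivity patterns in these features are what an ASD classifier would learn to separate.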
Publications from the project:
M Hamza Sharif and Rizwan Ahmed Khan. A novel machine learning based framework for detection of Autism Spectrum Disorder (ASD). Applied Artificial Intelligence, Nov 2021. [Taylor & Francis] (2020 IF = 1.58).
A novel framework for automatic detection of Autism: A study on Corpus Callosum and Intracranial Brain Volume. arXiv (2019) preprint, arXiv:1903.11323 [arXiv].
Muhammad Shoaib Jaliawala and Rizwan Ahmed Khan. "Can Autism be Catered with Artificial Intelligence-Assisted Intervention Technology? A comprehensive survey" In Artificial Intelligence Review, 2019. [SpringerLink]
Children spontaneous facial expression video database
There exist several databases that contain videos or images of adults showing different facial expressions, e.g. the CK/CK+ database, MMI database, FG-NET FEED, CAS-PEAL database and the Radboud Faces Database (RaFD). There are only a few databases for child affect analysis:
The NIMH Child Emotional Faces Picture Set (NIMH-ChEFS)
The Child Affective Facial Expression (CAFE)
For child affect analysis, the research community lacks a standardized database that contains videos. A video database is important because it encodes all the facial deformations over time for a given facial expression (important for machine learning / computer vision). To address this need, I am currently recording the "children spontaneous facial expression video database" (LIRIS-CSE). The database contains short video segments of five of the six universal facial expressions; the expression of anger was excluded due to ethical concerns. Children (aged 6-11) were recorded while watching different animations that induce spontaneous expressions.
Publications from the project:
We have published an article explaining our database and benchmark results using transfer learning:
Rizwan Ahmed Khan, Crenn Arthur, Alexandre Meyer and Saida Bouakaz. A novel database of children's spontaneous facial expressions (LIRIS-CSE). Image and Vision Computing, 2019 [In Press, Elsevier].
Rizwan Ahmed Khan, Crenn Arthur, Alexandre Meyer, Saida Bouakaz. A novel database of Children's Spontaneous Facial Expressions (LIRIS-CSE). arXiv (2018) preprint, arXiv:1812.01555. [arXiv]
Visit project page to download the database.
An example from the database is given below:
Real-time facial expression recognition, specifically for kids. The demo video shows only an adult face, as the children's face database is subject to privacy restrictions.
Full scene labeling
The targeted application concerns the automated analysis of first-person-view image sequences acquired using a mobile camera (glasses) endowed with an eye-tracking system. The general goal is full scene labeling / parsing, while the specific goal is the detection and recognition of various focused scene elements.
PhD Research Work
Communication is a key aspect of everyday life, and facial expressions provide the most effective means of non-verbal communication. Automatic, reliable and real-time facial expression recognition is a challenging problem due to variability in pose, illumination and the way people show expressions across cultures. Human-computer interaction (HCI), social robots, the game industry, synthetic face animation, data-driven animation, deceit detection, interactive video and behavior monitoring are some of the potential application areas that can benefit from automatic facial expression recognition.
The aim of my PhD research is to develop an algorithm that can accurately recognize facial expressions in real time across cultures. Generally, recent work in the field of facial expression recognition (FER) is based on extracting features from the face using mathematical or geometrical heuristics and then classifying them into one of the facial expressions. The computer vision community working on facial expression recognition has largely failed to incorporate how the human visual system (HVS) does the same job effortlessly and in real time. In this research work we aim to base algorithms and models for FER on the HVS. The major contributions of my PhD are listed below.
Understanding Human Visual System
State-of-the-art methods for facial expression recognition usually spend computational time on the whole face image, or divide the facial image based on some mathematical or geometrical heuristic for feature extraction. We argue that the task of expression analysis and recognition could be done more effectively if only some regions (i.e. salient regions) are selected for further processing, as happens in the human visual system.
We base our research on facial expression recognition on the human visual system (HVS), which selects only a few (salient) facial regions from which to extract information. In order to determine the saliency of facial region(s) for a particular expression, we conducted a psycho-visual experiment with the help of an eye-tracker. The images show the experimental setup and the (average) gaze maps, while the video shows the gaze pattern.
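The idea of processing only salient facial regions can be sketched as simple region cropping prior to feature extraction. The regions and box coordinates below are illustrative placeholders, not the ones determined by the eye-tracking study:

```python
import numpy as np

def crop_salient_regions(face, eye_box, mouth_box):
    """Crop only the salient facial regions for feature extraction.

    face: grayscale image as a 2-D array.
    Boxes are (top, bottom, left, right) pixel coordinates; in a real system
    they would come from landmark detection, here they are hand-picked.
    """
    t, b, l, r = eye_box
    eyes = face[t:b, l:r]
    t, b, l, r = mouth_box
    mouth = face[t:b, l:r]
    return eyes, mouth

# Hypothetical 96x96 face with rough region boxes
face = np.zeros((96, 96))
eyes, mouth = crop_salient_regions(face, (20, 45, 10, 86), (60, 85, 25, 71))
print(eyes.shape, mouth.shape)  # (25, 76) (25, 46)
```

Restricting descriptors to these crops mirrors the HVS behavior observed in the experiment and reduces the computation spent on non-informative regions such as the cheeks and forehead.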
For more information about this study please refer to our paper: "Exploring human visual system: study to aid the development of automatic facial expression recognition framework", in IEEE Conference on Computer Vision and Pattern Recognition (CVPR) - Workshop on Gesture Recognition, Providence, Rhode Island, USA, 2012. [PDF] (use login "CVPR2012" and password "papers") [IEEExplore]
Algorithms for facial expression recognition
In my PhD I have proposed three algorithms for facial expression recognition, each with its own merits and demerits. The video below shows the result of an algorithm published in ICIP 2012, on the Cohn-Kanade AU-Coded Facial Expression Database (http://www.pitt.edu/~jeffcohn/CKandCK+.htm).
Recognition of expression in low resolution images
We have proposed a facial expression recognition system that caters for illumination changes and works equally well on low-resolution and good-quality / high-resolution images. We have also tested the proposed system on spontaneous facial expressions and recorded encouraging results. For details, read the "Pattern Recognition Letters" article: [DOI: http://dx.doi.org/10.1016/j.patrec.2013.03.022]
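Robustness to low resolution is typically evaluated by degrading the test images before recognition. A simple, hedged sketch of generating such low-resolution inputs by block averaging (the factor and sizes are illustrative, not those used in the article):

```python
import numpy as np

def downsample(image, factor):
    """Simulate a low-resolution input by block-averaging factor x factor blocks.

    Assumes the image dimensions are divisible by `factor`; a simple way to
    probe a recognition pipeline's robustness to reduced image quality.
    """
    h, w = image.shape
    return image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

# Hypothetical 96x96 face reduced to 24x24
img = np.arange(96 * 96, dtype=float).reshape(96, 96)
low = downsample(img, 4)
print(low.shape)  # (24, 24)
```

Running the recognizer on such degraded copies of a benchmark, at several factors, gives an accuracy-versus-resolution curve for the system.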
Recognition of expression of "pain"
We are also working to understand how the expression of pain is perceived and how it can be recognized algorithmically. The image below is taken from the "UNBC-McMaster Shoulder Pain Expression Archive Database". The video shows the result of our initial study.