Research projects

Medical Image and Signal Processing

Brain Tumor Segmentation

Brain Tumor Segmentation (BraTS) utilizes multi-institutional pre-operative MRI scans and focuses on the segmentation of intrinsically heterogeneous (in appearance, shape, and histology) brain tumors, namely gliomas. The segmentation task is subdivided into

  • segmentation of enhancing tumor (ET)
  • segmentation of tumor core (TC)
  • segmentation of whole tumor (WT)
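The three sub-regions are nested (ET ⊆ TC ⊆ WT) and are conventionally scored with the Dice overlap. A minimal sketch of per-sub-region scoring, assuming the usual BraTS label convention (1 = necrotic core, 2 = edema, 4 = enhancing tumor — the label convention is our assumption, not stated above):

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    total = pred.sum() + truth.sum()
    return 2.0 * inter / total if total > 0 else 1.0

def subregion_dice(pred_labels, truth_labels):
    """Score the three nested BraTS sub-regions from label maps."""
    regions = {
        "ET": lambda x: x == 4,                 # enhancing tumor
        "TC": lambda x: np.isin(x, (1, 4)),     # tumor core
        "WT": lambda x: np.isin(x, (1, 2, 4)),  # whole tumor
    }
    return {name: dice(f(pred_labels), f(truth_labels))
            for name, f in regions.items()}

scores = subregion_dice(np.array([0, 4, 1, 2]), np.array([0, 4, 4, 2]))
```

The same function applies unchanged to full 3D label volumes.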

Publications

  1. F. Ehsan et al., Brain tumor segmentation from multimodal MRI scans using KNN as a classifier, under review

Members: Fatima Ehsan, Mahnoor Ali

Melanoma Detection

Melanoma is the deadliest form of skin cancer. Although its mortality is significant, survival exceeds 95% when it is detected early. The detection task is subdivided into

  • Automated prediction of lesion segmentation boundaries within dermoscopic images
  • Classification and localization of clinical dermoscopic attribute patterns as binary masks
  • Classification of disease categories for dermoscopic images

Publications

1. K. Zafar et al., Skin lesion segmentation from dermoscopic images using convolutional neural network, Sensors, vol. 20, no. (6), 2020

Members: Kashan Zafar

Pattern Recognition based Myoelectric Control

Advances in myoelectric interfaces have increased the use of wearable prosthetics, including robotic arms. Although promising results have been achieved with pattern recognition-based control schemes, control robustness requires improvement to increase user acceptance of prosthetic hands. The aim of this work is to quantify the performance of various pattern recognition techniques (LDA, SVM, NN, DL) for long-term robust prosthetic control.
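The comparison these studies perform can be sketched as windowed time-domain feature extraction followed by cross-validated classifiers. Below is a toy version on synthetic signals (the feature set, window length, and two-class setup are illustrative assumptions, not the published protocols):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def td_features(window):
    """Classic time-domain EMG features for one analysis window."""
    mav = np.mean(np.abs(window))               # mean absolute value
    wl = np.sum(np.abs(np.diff(window)))        # waveform length
    zc = np.sum(np.diff(np.sign(window)) != 0)  # zero crossings
    return [mav, wl, zc]

# Synthetic stand-in for segmented EMG: two "motions" differing in
# amplitude, 100 windows of 200 samples per class.
X, y = [], []
for label, scale in [(0, 1.0), (1, 2.5)]:
    for _ in range(100):
        X.append(td_features(scale * rng.standard_normal(200)))
        y.append(label)
X, y = np.array(X), np.array(y)

# Cross-validated accuracy for each candidate classifier.
results = {name: cross_val_score(clf, X, y, cv=5).mean()
           for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                             ("SVM", SVC(kernel="rbf"))]}
```

On real recordings the same loop would be repeated per day or session to quantify the long-term robustness this project targets.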

Publications

  1. M. Zia et al., Stacked sparse autoencoders for EMG-based classification of hand motions: A comparative multi day analyses between surface and intramuscular EMG, Applied Sciences, vol. 8, no. (7), 2018
  2. M. Zia et al., Multiday EMG-based classification of hand motions with deep learning techniques, Sensors, vol. 18, no. (8), 2018
  3. A. Waris et al., The effect of time on EMG classification of hand motions in able-bodied and transradial amputees, Journal of Electromyography and Kinesiology, vol. 40, 72–80, 2018
  4. M. Zia et al., Performance of combined surface and intramuscular EMG for classification of hand movements, in 40th IEEE Engineering in Medicine and Biology Society (EMBC’18), (Hawaii, U.S.A), Jul. 2018
  5. M. Zia et al., A novel approach for classification of hand movements using surface EMG signals, in 17th IEEE International Symposium on Signal Processing and Information Technology (ISSPIT'17), (Spain), Dec. 2017
  6. A. Waris et al., Classification of functional motions of hand for upper limb prosthesis with surface electromyography, International Journal of Biology and Biomedical Engineering, vol. 8, 15–20, 2014
  7. A. Waris et al., Control of upper limb active prosthesis using surface electromyography, in Recent Advances in Biology, Medical Physics, Medical Chemistry, Biochemistry and Biomedical Engineering (EUROPMENT), (Italy), 2013

Members: Zia ur Rehman, Asim Waris, Bushra Saeed

Brain Computer Interface based on Near-Infrared Spectroscopy (NIRS)

People suffering from neuromuscular disorders such as locked-in syndrome (LIS) are left in a paralyzed state with preserved awareness and cognition. In this study, it was hypothesized that changes in local hemodynamic activity, due to the activation of Broca’s area during overt/covert speech, can be harnessed to create an intuitive Brain Computer Interface based on Near-Infrared Spectroscopy (NIRS). Our analysis of 6 overtly and covertly spoken words, using optimized support vector machine classifiers, indicates that NIRS is a viable solution for future BCI applications.
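The "optimized support vector machine" step can be sketched as a hyperparameter grid search over per-trial hemodynamic features. Everything below (the mean/slope features, the simulated HbO responses, showing only two of the six word classes) is an illustrative assumption, not the published pipeline:

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

rng = np.random.default_rng(1)

def trial_features(hbo):
    """Mean level and linear slope of one trial's HbO time course."""
    t = np.arange(hbo.size)
    slope = np.polyfit(t, hbo, 1)[0]
    return [hbo.mean(), slope]

# Simulated trials: each class produces a hemodynamic rise of a
# different gain, plus measurement noise.
X, y = [], []
for label, gain in [(0, 0.2), (1, 0.8)]:
    for _ in range(60):
        hbo = gain * np.linspace(0, 1, 50) + 0.1 * rng.standard_normal(50)
        X.append(trial_features(hbo))
        y.append(label)

# "Optimized" SVM: cross-validated search over kernel hyperparameters.
grid = GridSearchCV(SVC(), {"C": [0.1, 1, 10], "gamma": ["scale", 0.1]}, cv=5)
grid.fit(np.array(X), np.array(y))
```

`grid.best_estimator_` then holds the tuned classifier used for held-out prediction.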

Publications

  1. U. A. Sheikh et al., Classification of overt and covert speech for near-infrared spectroscopy-based brain computer interface, Sensors, vol. 18, no. (9), 2018

Members: Usman Ayub Sheikh, Namra Afzal

Juxtapleural Pulmonary Nodule Detection and Segmentation in Lung Cancer CT Images

Early diagnosis of lung cancer plays a crucial role in improving patients' chances of survival. Computer-aided detection (CAD) systems have been a groundbreaking step in the timely diagnosis and identification of potential nodules (lesions). A CAD system starts the detection process by extracting lung regions from CT scan images. This step narrows down the region for detection, saving time and reducing false positives outside the lung regions, thereby improving the specificity of CAD systems.
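The lung-extraction step can be sketched as density thresholding followed by connected-component filtering. The fixed -400 HU cut and the toy slice below are illustrative simplifications; the published work uses adaptive thresholding and explicitly recovers juxtapleural nodules:

```python
import numpy as np
from scipy import ndimage

def lung_mask(ct_slice, hu_threshold=-400):
    """Rough lung segmentation of one CT slice by HU thresholding.

    Air-filled lung parenchyma sits far below -400 HU; soft tissue is
    above it. We keep low-density components that do not touch the
    image border, which discards the air surrounding the patient.
    """
    low = ct_slice < hu_threshold
    labels, n = ndimage.label(low)
    border = (set(labels[0, :]) | set(labels[-1, :])
              | set(labels[:, 0]) | set(labels[:, -1]))
    keep = [i for i in range(1, n + 1) if i not in border]
    return np.isin(labels, keep)

# Tiny synthetic slice: body at 0 HU, surrounding air at -1000 HU,
# two "lungs" at -800 HU inside the body.
ct = np.zeros((20, 20))
ct[0] = ct[-1] = ct[:, 0] = ct[:, -1] = -1000
ct[5:15, 3:8] = -800
ct[5:15, 12:17] = -800
mask = lung_mask(ct)
```

On real scans this mask is the search region handed to the nodule detector.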

Publications

  1. M. Z. ur Rehman et al., An appraisal of nodules detection techniques for lung cancer in CT images, Biomedical Signal Processing and Control, vol. 41, 140–151, 2018
  2. M. Z. ur Rehman et al., Adaptive thresholding technique for segmentation and juxtapleural nodules inclusion in lung segments, International Journal of Bio-Science and Bio-Technology, vol. 8, no. (5), 105–114, 2016

Members: Zia ur Rehman

Bone Fracture Detection in X-Ray Images

Bone is a tough, load-bearing tissue of the body that is often subjected to fractures and degenerative disorders. For diagnostic purposes, clinicians readily use X-ray imaging, which provides researchers with ample opportunity to apply image processing and analysis techniques for automated detection. Our work has focused on detecting bone fractures and two types of arthritis: osteoarthritis (OA) and rheumatoid arthritis (RA).

Publications

  1. N. Farooq et al., Ground Truth Annotation, Analysis and Release of a Dataset of Radiographic Images of Bone Fractures, under review
  2. H. Hayat et al., Arthritis identification from multiple regions by X-ray image processing, International Journal of Signal Processing, Image Processing and Pattern Recognition, vol. 10, no. (11), 23–32, 2017

Members: Hunza Hayat, Najwa Farooq

Analysis of Retinal Images

Eye disorders (age-related macular degeneration, diabetic retinopathy, and glaucoma) can manifest themselves in retinal images. A good computer-aided diagnostic system can alert clinicians to the onset of disease, enabling timely treatment and/or preventive measures. Our work has primarily focused on diabetic retinopathy and glaucoma detection.

Publications

  1. T. Shafa et al., Automated Classification of Retinal Diseases in STARE Database, in 4th International Conference on Recent Trends in Computer Science and Electronics (2019 RTCSE), (Hawaii, U.S.A), 2019
  2. T. Shafa et al., A review on structural analysis of the human retina: Blood vessels, optic nerve, fovea centralis and related diseases, International Journal of u- and e- Service, Science and Technology, vol. 10, no. (12), 1–12, 2017
  3. H. Ahmad et al., Detection of glaucoma using retinal fundus images, in 2014 IEEE International Conference on Robotics and Emerging Allied Technologies in Engineering (iCREATE), (Islamabad - Pakistan), 2014

Members: Tooba, Naireen Zaheer, Namra Rauf

Lesion Detection in Mammograms

  1. F. Zahra et al., Automated Segmentation and Classification of Lesions on Breast Ultrasound, Internal Technical Report, 2014

Analysis of Electrocardiography (ECG)

  1. Y. Ilyas et al., Power line noise removal from ECG signal using notch, band stop and adaptive filters, in 17th ICEIC (USA) 2018
  2. Respiration Rate (RR) detection in ECG signal for mobile devices, Internal Technical Report, 2017
  3. Q. Talah et al., Biometric embedded system architecture for hand vein identification system on FPGA, Internal Technical Report, 2017
  4. Z. Hassan et al., Review of fiducial and non-fiducial techniques of feature extraction in ECG based biometric systems, in IJST, vol. 9, no. (21), 2016
  5. Z. Hassan et al., Improvement in ECG based biometric systems using wavelet packet decomposition (WPD) algorithm, in IJSTI, vol. 9, no. (30), 2016
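The power-line removal compared in [1] can be sketched with a standard IIR notch filter. The 500 Hz sampling rate, 50 Hz mains frequency, and synthetic ECG stand-in below are assumptions for illustration:

```python
import numpy as np
from scipy.signal import filtfilt, iirnotch

fs = 500.0                        # assumed sampling rate (Hz)
t = np.arange(0, 2, 1 / fs)
ecg_like = np.sin(2 * np.pi * 1.2 * t)               # stand-in for the ECG
noisy = ecg_like + 0.5 * np.sin(2 * np.pi * 50 * t)  # 50 Hz mains hum

# Narrow notch centered on the mains frequency; filtfilt applies it
# forward and backward for zero phase distortion.
b, a = iirnotch(w0=50.0, Q=30.0, fs=fs)
clean = filtfilt(b, a, noisy)
```

The high Q keeps the stop band narrow, so nearby ECG frequency content passes almost untouched.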

Analysis of Electroencephalogram (EEG)

  1. M. A. Ahmad et al., Comparative analysis of classifiers for developing an adaptive computer-assisted EEG analysis system for diagnosing epilepsy, in BRI, vol. 2015, 2015
  2. M. Z. Baig et al., Motor imagery based EEG signal classification using self organizing maps, in SI, vol. 2, 2015
  3. M. Z. Baig et al., Classification of left/right hand movement from EEG signal by intelligent algorithms, in IEEE ISCAIE (Malaysia), 2014

Computer Vision and Multimedia Analytics

Visual Attention Models of Dynamic Scenes

The human visual system can quickly, effortlessly, and efficiently process visual information from its surroundings. As a result, modern computer vision has been heavily influenced by how biological visual systems encode properties of the natural environment. Human subjects can perform several complex tasks such as object localization, identification, and recognition in scenes, owing to their ability to “attend” to selected portions of their visual fields while ignoring other information. Although visual attention can be driven either by bottom-up / exogenous-control or top-down / endogenous-control mechanisms, research studies have found that bottom-up influences act more rapidly than top-down processes. Our work here focuses on

  • Running psychophysical experiments to understand governing mechanisms of attention
  • Proposing computational models for these mechanisms
  • Applying these models in real-world scenarios
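One bottom-up model from our benchmark work, the spectral-residual approach, is compact enough to sketch directly (the 3×3 spectral smoothing and Gaussian post-blur are conventional choices, not the exact published settings):

```python
import numpy as np
from scipy import ndimage

def spectral_residual_saliency(image):
    """Spectral-residual saliency map for a grayscale image.

    The log-amplitude spectrum is compared with its local average;
    the 'residual' marks statistically unusual frequencies, which
    correspond to salient structure in the image.
    """
    f = np.fft.fft2(image)
    log_amp = np.log(np.abs(f) + 1e-9)
    phase = np.angle(f)
    residual = log_amp - ndimage.uniform_filter(log_amp, size=3)
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = ndimage.gaussian_filter(sal, sigma=2)   # smooth the map
    return sal / sal.max()

img = np.zeros((64, 64))
img[28:36, 28:36] = 1.0            # lone bright square = salient object
sal_map = spectral_residual_saliency(img)
```

The map peaks around the isolated square, matching the intuition that lone, unexpected structure attracts attention.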

Publications

  1. M. Wahid et al., The effect of eye movements in response to different types of scenes using a graph-based visual saliency algorithm, Applied Sciences, vol. 9, no. (24), 2019
  2. H. Mehmood et al., Dynamic saliency model inspired by middle temporal visual area: A spatio-temporal perspective, in 2018 Digital Image Computing: Techniques and Applications (DICTA), (Canberra, Australia), Dec. 2018
  3. M. S. Azam et al., A benchmark of computational models of saliency to predict human fixations in videos, in 11th International Conference on Computer Vision Theory and Applications (VISAPP 2016), (Rome, Italy), 2016
  4. S. O. Gilani et al., PET: An eye-tracking dataset for animal-centric pascal object classes, in 2015 IEEE International Conference on Multimedia and Expo (ICME), (Italy), 2015
  5. M. S. Azam et al., Saliency based object detection and enhancements using spectral residual approach in static images and videos, Advanced Science Letters, vol. 21, no. (12), 3677–3679, 2015
  6. M. Dwarikanath et al., Coherency based spatio-temporal saliency detection for video object segmentation, IEEE Journal of Selected Topics in Signal Processing, vol. 8, no. (3), 454–462, 2014
  7. S. O. Gilani et al., Impact of image appeal on visual attention during photo triaging, in 20th IEEE International Conference on Image Processing (ICIP), (Australia), 2013
  8. S. O. Gilani et al., Gist modulated saliency in videos, in 2nd ECE Graduate Student Symposium, National University of Singapore, (Singapore), 2012
  9. S. O. Gilani et al., Fixation durations during scene transitions, Journal of Vision, vol. 11, no. (11), 512–512, 2011
  10. S. O. Gilani et al., Spatio temporal saliency modelling in videos, in 1st ECE Graduate Student Symposium, National University of Singapore, (Singapore), 2011

Members: Hassan Mahmood, Shoaib Azam, Usman Khalid, Maria Wahid

Person Detection in Unconstrained Environment

Person detection has been an active area of research due to its wide range of potential applications in pedestrian detection, in-store video analytics, crowd management, and video surveillance. Among the challenges faced are varying viewpoints, illumination, postures, and sensing modalities. However, strong priors exist for an efficient and practical solution, e.g., movement characteristics, scene properties, and postural connectivity. Our research aims to develop an efficient person detection model for a variety of challenges in real-world applications.

Publications

  1. M. N. Khan et al., Photo detector-based indoor positioning systems variants: A new look, Computers & Electrical Engineering, vol. 83, 106607, 2020
  2. S. Munir et al., Human Torso Detection in Infrared Videos, under review
  3. M. Ammar et al., Human Detection by Learning Locally Adaptive Steering Kernels (LASK), under review
  4. M. Asad et al., Emotion detection through facial feature recognition, in Proceedings of the 3rd International Conference on Green Computing and Engineering Technologies (ICGCET-2017), (Killaloe, Ireland), 2017
  5. S. O. Gilani, Human Detection on Raspberry PI, Internal Technical Report, 2017
  6. S. O. Gilani, Crowd Emotion Detection using Person Model, Internal Technical Report, 2017
  7. H. Ahmed et al., Monocular vision-based signer-independent Pakistani sign language recognition system using supervised learning, Indian Journal of Science and Technology, vol. 9, no. (25), 2016
  8. B. Ali et al., Improved method for stereo vision-based human detection for a mobile robot following a target person, South African Journal of Industrial Engineering, vol. 26, no. (1), 102–119, 2015
  9. B. Ali et al., Human tracking by a mobile robot using 3d features, in IEEE International Conference on Robotics and Biomimetics (ROBIO), 2013

Members: Munir Sultan, Muhammad Ammar

Multimedia Analytics

Multimedia analytics is a vast and multidisciplinary field. With recent technological innovations, multimedia usage has proliferated in our daily lives. The data embeds several modalities, e.g., audio, visual, and textual information. This calls for novel algorithms and technologies (drawing on multiple disciplines) for multimedia retrieval, access, exploration, understanding, abstraction, and interaction. Currently we are focusing on multimedia abstraction and interaction by analysing

  • User behaviour
  • Memorability
  • Emotions
  • Saliency

Publications

  1. S. O. Gilani et al., Video abstraction inspired by human model of attention, in 9th International Conference on Information Technology, Electronics & Mobile Communication (IEMCON 2018), (Vancouver, Canada), Nov. 2018.
  2. S. Ramanathan et al., Utilizing implicit user cues for multimedia analytics, in Frontiers of Multimedia Research, Association for Computing Machinery and Morgan & Claypool, 219–251, 2018

Members: Hasnain Ali

Autonomous Vehicle Navigation

Autonomous vehicle research has recently entered mainstream application (e.g., Google, Uber). The enabling technology relies on the ability of the vehicle to sense its environment, interpret multi-sensor (vision, radar, GPS, lidar, odometry, etc.) information, and make appropriate decisions (path planning) and actions (control system). Currently, we are focusing on

  • video and scene analysis
  • real-time control

Publications

  1. Arqab et al., Autonomous Vehicle Control, Internal Technical Report, 2018
  2. H. Fleyeh et al., Road sign detection and recognition using fuzzy ARTMAP: A case study on Swedish speed-limit signs, in Artificial Intelligence and Soft Computing, (Spain), 2006

Members: Arqab, Aibak

Crowd Modelling and Analytics

Crowd modelling and analytics research offers key benefits in crowd management and security. Currently we are focusing on computing two factors: crowd density and crowd flow. Our approach estimates these factors through micro- and macro-level analysis of crowd images.

Members: Tahseen Akhtar

OCR Based Applications

  1. M. Sami et al., Text detection and recognition for semantic mapping in indoor navigation, in IEEE ICITCS (Malaysia), 2015
  2. S. Z. Zhou et al., Open source OCR framework using mobile devices, in SPIE-EI (USA), 2008

Augmented/Virtual Reality

  1. S. O. Gilani, Interactive transcription system and method, US Patent 8,358,320, 2013
  2. Z. Zhou et al., Wizqubes - a novel tangible interface for interactive storytelling in mixed reality, IJVR, vol. 7, no. (4), 9–15, 2008
  3. P. Song et al., Vision-based projected tabletop interface for finger interactions, in HCI, Springer, 2007
  4. Z. Zhou et al., What you write is what you get: A novel mixed reality interface, in HCI (China), 2007

Scene Understanding

  1. S. O. Gilani et al., Automated scene analysis by image feature extraction, in IEEE PiCom (Auckland), 530–536, 2016
  2. S. O. Gilani et al., Scene transitions affect fixation length in movies, in Decade of Mind IV (Singapore), 2011