Thesis
Research Areas
Sleep Disorders (Sleep Apnea, Sleep Staging, Sleep Walking, Bruxism)
Machine/Deep Learning
Brain-Computer Interface
Gait Rehabilitation (Parkinson's Disease)
Gesture Recognition
Emotion Detection
Predictive Impairment in Autism Spectrum Disorder
Projects
Neurocomputing Lab (IIT Delhi, December 2020 - Present)
Respiration Analysis Using Various Physiological Signals for Sleep Apnea Detection: This research introduces two compact, wearable systems (WRMS and WeLove) for respiration monitoring and sleep apnea detection using IMU, ECG, and PPG signals. Temporal parameters derived from these signals were benchmarked against traditional methods to validate their accuracy and reliability. Experimental studies conducted in real-world conditions demonstrated the feasibility of using these wearable systems for long-term monitoring. To enhance the fidelity of PPG signals, advanced signal processing techniques were applied, minimizing noise and improving data quality. A comparative analysis using statistical methods, machine learning models, and deep learning algorithms confirmed that integrating multiple signals improves detection accuracy. Notably, IMU signals proved particularly effective for estimating respiration parameters, offering a non-invasive, cost-effective, and efficient solution for continuous sleep apnea monitoring. (PhD Research, July 2021 – Present)
Machine Learning for Sleep Apnea Detection: This research presents a novel machine learning framework for sleep apnea detection using a single-lead ECG signal. The study incorporates two less explored features, permutation entropy and wavelet energy, along with wavelet, auto-regressive, and entropy-based features from ECG and HRV signals. A feature selection algorithm was employed to reduce dimensionality, and multiple classifiers were tested for performance evaluation. Three deep learning models were compared against the proposed ML model, with a time complexity analysis conducted for each. The method achieved higher accuracy than existing approaches by integrating novel features and optimized selection techniques. Comprehensive analyses, including minute-by-minute, inter-subject, and intra-subject evaluations, validated its robustness across two datasets, outperforming state-of-the-art techniques. (PhD Research, July 2021 – Present)
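As an illustration of one of the less explored features above, permutation entropy can be computed with a short routine. This is a generic sketch: the embedding order and delay are illustrative defaults, not the settings used in the study.

```python
import math
from collections import Counter

import numpy as np

def permutation_entropy(x, order=3, delay=1):
    """Normalized permutation entropy of a 1-D signal (0 = regular, 1 = random)."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    # Map each window of `order` samples to its ordinal (rank) pattern.
    patterns = Counter(
        tuple(np.argsort(x[i:i + order * delay:delay])) for i in range(n)
    )
    probs = np.array(list(patterns.values()), dtype=float) / n
    h = -np.sum(probs * np.log2(probs))      # Shannon entropy of patterns
    return h / math.log2(math.factorial(order))  # normalize to [0, 1]
```

A monotone segment yields a single ordinal pattern (entropy 0), while noise-like segments approach 1; in an apnea pipeline such values would be computed per window of the ECG or HRV series and fed to the classifier.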
Deep Learning for Sleep Apnea Detection: This research introduces novel deep learning models for sleep apnea detection using single-lead ECG signals, providing a simplified and efficient diagnostic approach. The proposed deep learning framework automates apnea detection by directly analyzing ECG, PPG, and respiratory data, eliminating manual feature extraction. Various architectures, including CNNs, RNNs, and self-supervised models, were optimized for accuracy and robustness across multiple datasets and apnea conditions. These models, designed for real-time processing, enable seamless integration into wearable devices, facilitating continuous and non-invasive sleep apnea monitoring. (PhD Research, July 2021 – Present)
Wearable Inertial Measurement Unit-Based System for Sleepwalking Detection (preliminary study): This research presents a wearable system using a single shank-mounted IMU sensor for sleepwalking detection, offering a comfortable, affordable, and non-invasive solution for real-time monitoring. Unlike conventional methods requiring multiple sensors, this approach accurately estimates gait events and temporal gait parameters with a single IMU, making it practical for real-world use. The system is cost-effective, eliminating the need for expensive motion capture setups, and achieves high accuracy in gait detection using SVM and LDA classifiers. Robustness was validated across different sensor placements, with comparisons to existing studies highlighting its effectiveness. Its computational efficiency enables seamless real-time integration for gait monitoring and rehabilitation. (PhD Research, July 2021 – Present)
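The core idea of extracting temporal gait parameters from a single shank-mounted gyroscope can be sketched as follows. The axis choice, threshold, and simple peak rule here are simplified assumptions for illustration, not the system's actual event detector:

```python
import numpy as np

def gait_events(gyro_z, fs, thresh=1.0):
    """Hypothetical sketch: mid-swing peaks in shank sagittal angular
    velocity (rad/s) mark gait cycles; stride time = peak-to-peak interval."""
    g = np.asarray(gyro_z, dtype=float)
    # Local maxima above the threshold (one peak expected per stride).
    peaks = [i for i in range(1, len(g) - 1)
             if g[i] > thresh and g[i] >= g[i - 1] and g[i] > g[i + 1]]
    stride_times = np.diff(peaks) / fs   # seconds between successive peaks
    return np.array(peaks), stride_times
```

From the detected cycles, parameters such as stride time and cadence follow directly, which is what makes a single IMU sufficient for lightweight real-time monitoring.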
Brain-Computer Interface (BCI) Research - This study optimizes Brain-Computer Interface (BCI) systems by addressing EEG channel redundancy, enhancing efficiency in brain-device interaction. A classifier-dependent genetic algorithm (GA) is introduced to automate EEG channel selection, significantly reducing channel requirements by 79% and 68% for two public BCI datasets. The Neural Network classifier achieved high accuracy (94.90±1.30% for motor execution and 97.12±1.12% for motor imagery), demonstrating robustness across naive subjects with minimal training data. Integrating a continuous overlapped windowing technique further improves real-time applicability, making the system efficient for robotics control and other BCI applications. (Collaboration Research, July 2021 – Present)
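A GA-based channel selection of this kind can be sketched as below. The operators and hyperparameters are illustrative placeholders; in a classifier-dependent setup the `fitness` callback would wrap cross-validated classifier accuracy on the masked channels, typically minus a channel-count penalty:

```python
import numpy as np

def ga_channel_select(fitness, n_channels, pop_size=20, n_gen=30,
                      p_mut=0.05, rng=None):
    """Evolve binary channel masks; `fitness(mask)` scores a channel subset."""
    rng = rng or np.random.default_rng(0)
    pop = rng.integers(0, 2, size=(pop_size, n_channels))
    for _ in range(n_gen):
        scores = np.array([fitness(ind) for ind in pop])
        order = np.argsort(scores)[::-1]
        parents = pop[order[:pop_size // 2]]           # truncation selection (elitist)
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_channels)          # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n_channels) < p_mut      # bit-flip mutation
            child[flip] ^= 1
            children.append(child)
        pop = np.vstack([parents, children])
    scores = np.array([fitness(ind) for ind in pop])
    return pop[np.argmax(scores)]
```

Because the top half of each generation is carried over unchanged, the best mask found is never lost, and the returned mask directly indicates which EEG channels to keep.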
Advancements in EEG-Based Hand Trajectory Estimation and BCI-Driven Robotic Control: These studies explore EEG-based hand trajectory estimation for robotic control and rehabilitation. The first study uses a time-delayed neural network on EEG data from 12 subjects, achieving peak accuracy (0.638±0.030) and consistency (0.654±0.030) with the full EEG frequency range. However, it performs better in 2D, making it ideal for planar robotic control. The second study enhances trajectory estimation by fusing EEG and EMG data with a CNN-LSTM model, improving accuracy (0.608±0.031) and correlation (0.6354±0.030) with actual movements. This fusion captures movement intent from EEG and muscle activation from EMG, benefiting prosthetics and rehabilitation. The third study introduces a BCI model for robotic hand control, integrating task classification (TC) and trajectory estimation (TE). TC deciphers user intent from EEG, while TE estimates 3D hand trajectories for precise control. Real-time feedback ensures adaptability, enhancing assistive robotics for motor-impaired individuals. (Collaboration Research, July 2021 – Present)
Age-Related Improvement in Predictive Processing in Autism: Autism spectrum disorder (ASD) affects speech, communication, social interaction, and behavior, with symptoms varying in severity. Its exact cause remains unknown, but impaired prediction ability is hypothesized as a key factor. The brain relies on prediction for natural interactions, and disruptions can make the world feel overwhelming. This study developed a machine learning-based video game to assess prediction abilities in ASD children (4-12 years) and compare them with neurotypical peers (4-12 years). Results showed age-related improvement but a delayed trajectory in ASD. The system holds potential for both diagnosis and therapy, offering a valuable tool for clinicians. (Collaboration Research, July 2021 – Present)
Early Prediction of Freezing of Gait in Parkinson’s Disease Patients: This study explores multimodal sensor fusion and deep learning for early prediction of Freezing of Gait (FoG) in Parkinson’s disease (PD) patients. Using IMU, EMG, and EEG signals, a CNN+LSTM model was developed and compared with other classifiers, achieving 94.45% accuracy. Robustness to noise and inter-subject variability was confirmed, with pre-FoG detection reaching 94.20% accuracy. The findings highlight the potential of multimodal deep learning in improving FoG prediction, reducing false detections, and enhancing clinical applications. Future research should investigate additional sensors, cross-cohort transferability, longitudinal studies, and real-time deployment in clinical settings. (Collaboration Research, July 2021 – Present)
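A CNN+LSTM of the kind described can be sketched in PyTorch as follows. This is a minimal illustration, not the study's architecture: the channel count, layer sizes, and three-class head (e.g. walking / pre-FoG / FoG) are assumed for the sketch.

```python
import torch
from torch import nn

class CNNLSTM(nn.Module):
    """Sketch: CNN extracts local features from windowed multimodal sensor
    data; LSTM summarizes their temporal evolution; a linear head classifies.
    Input shape: (batch, channels, time) windows of fused IMU/EMG/EEG streams."""
    def __init__(self, n_channels=12, n_classes=3):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.lstm = nn.LSTM(input_size=64, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)   # e.g. walk / pre-FoG / FoG

    def forward(self, x):                  # x: (batch, channels, time)
        feats = self.cnn(x)                # (batch, 64, time/4)
        feats = feats.transpose(1, 2)      # (batch, time/4, 64)
        _, (h, _) = self.lstm(feats)       # last hidden state summarizes window
        return self.head(h[-1])            # (batch, n_classes)
```

Predicting the pre-FoG class a window or two ahead of the actual freeze is what enables timely cueing interventions.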
3D Motion Perception in Patients with Eye-Movement Disorders - In this project, an eye movement-based test was developed using Virtual Reality (VR) to differentiate between patients with visual field defects (VFD) and healthy controls. The VR system was employed to test visual fields in 15 patients from each group, successfully distinguishing between those with Neuro-Ophthalmological disorders, Glaucoma, and healthy individuals based on their eye movements. The innovative VR device demonstrated a high capability in differentiating between these conditions. Additionally, machine learning techniques were utilized to analyze eye movement behavior on an individual basis, allowing for the precise characterization of gaze disorders. This project showcases expertise in VR development, clinical testing, and the application of advanced machine learning for medical diagnostics, emphasizing a strong interdisciplinary approach to solving complex health challenges. (IIT-AIIMS Delhi Project, Dec 2020 – Aug 2021)
IRACS Lab (IIT Gandhinagar, India, Dec. 2018 - Dec. 2020)
Design of Portable, Individualized Audio, Visual, and Tactile Cueing Modules - Designed portable, affordable, microcontroller-based cueing modules that offer auditory, visual, and tactile cues, allowing users to flexibly adjust the cue delivery based on their individual needs. These innovative modules were engineered to enhance usability and accessibility, making them suitable for a wide range of applications. To validate their effectiveness, I conducted studies involving ten healthy participants for each cueing module type. These studies aimed to assess whether the modules performed as expected in real-world scenarios. The preliminary results were promising, indicating that the auditory, visual, and tactile cueing modules functioned reliably and met the intended design goals. This project highlights a strong capability in hardware design, user-centered testing, and iterative development, showcasing an ability to create practical solutions tailored to user requirements. (MTech Thesis, Jan 2019 – Dec 2020)
Reviewer
Journals:
Neurocomputing (Elsevier)
Biomedical Signal Processing and Control (Elsevier)
Technology and Healthcare (IOS Press)
IEEE Transactions on Artificial Intelligence
Scientific Reports
IEEE Sensors Letters
Conferences:
IEEE International Conference on Systems, Man, and Cybernetics (IEEE SMC 2025)
45th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC 2023)
IEEE International Conference on Systems, Man, and Cybernetics (IEEE SMC 2024)
IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP 2025)
National Conference on Communications (NCC 2025)
Supervision/Guidance (Interns or Project Staff)
Alyona Rout, a third-year student at Kalinga Institute of Technology, is involved in a project from May 27 to July 26, 2025, focused on sleep apnea detection and app development for addressing sleep health concerns. Through this work, she aims to gain hands-on experience with novel deep learning techniques for apnea detection across multiple datasets and work toward publishing a research paper in the field.
Devakshee Bhattacharjya, a third-year student at Tezpur University, is working from June 2 to July 15 on developing software for real-time dynamic cerebral autoregulation monitoring in neurocritical care settings. Through this project, she aims to gain valuable research experience and contribute toward publishing a research paper in the field of neurological health monitoring.
Srishti Srivastava, a third-year student at Netaji Subhas University of Technology, is engaged from May 27 to July 27, 2025, in a project focused on developing a wearable device to detect bruxism using piezoelectric sensors. Through this work, she aims to gain hands-on experience with hardware electronics and publish a research paper.
Pranav Singla, a second-year student at SRMIST University, is working from May 24 to July 21, 2025, on a project developing a hand gesture-controlled robot car toy designed for children with ASD to enhance their predictive ability and visual motor skills. Through this project, he aims to publish a research paper, deepen his understanding of the technology, and gain valuable research experience.
Ayush Gupta (Arizona State University) has completed his first year and is currently working on an EMG dataset for gesture recognition with arm translation. During his project from May 30 to July, he is focused on leveraging electromyography signals to develop accurate and responsive gesture recognition systems. This work aims to advance his expertise in bio-signal processing and machine learning, with the goal of publishing a research paper. Ayush’s involvement demonstrates a strong dedication to exploring innovative technologies and enhancing his technical skillset in biomedical engineering and human-computer interaction.
Prerna Khanbhayata (BTech, RK University) is pursuing a BTech in the Department of Computer Engineering. Developed an advanced robotic arm control system utilizing electromyography (EMG) signals, demonstrating expertise in bioengineering and robotics. Successfully integrated EMG sensors to capture and interpret muscle activity, allowing for precise and intuitive manipulation of the robotic arm. Additionally, engineered and 3D printed a custom prosthetic hand, enhancing the project with practical, real-world applications. This innovative work showcases a strong proficiency in both hardware design and software integration, highlighting a commitment to cutting-edge technological solutions in assistive devices. (27th May - 30th June, Summer Intern)
Manisha Pargadu (BTech, RK University) is pursuing a BTech in the Department of Computer Engineering. Developed an advanced robotic arm control system utilizing electromyography (EMG) signals, demonstrating expertise in bioengineering and robotics. Successfully integrated EMG sensors to capture and interpret muscle activity, allowing for precise and intuitive manipulation of the robotic arm. Additionally, engineered and 3D printed a custom prosthetic hand, enhancing the project with practical, real-world applications. This innovative work showcases a strong proficiency in both hardware design and software integration, highlighting a commitment to cutting-edge technological solutions in assistive devices. (27th May - 30th June, Summer Intern)
Bhanuj Sharma (BTech, Amity University) is pursuing a BTech in the Department of Computer Engineering. Designed and 3D printed a hardware platform for a photoplethysmography (PPG) sensor, demonstrating expertise in mechanical design and biomedical engineering. The custom 3D-printed housing was created to optimize the sensor's performance and user comfort. Additionally, developed a graphical user interface (GUI) to display real-time signals from the PPG sensors, making the data accessible and easy to interpret. This project highlights skills in CAD modeling, 3D printing, and software development, showcasing an ability to integrate hardware innovation with intuitive data visualization.
Saadiq Rauf Khan (MTech, IITD) is pursuing his MTech in the Department of Electrical Engineering, IIT Delhi. For his Master's thesis, he developed a stress detection system using various deep learning models, showcasing advanced knowledge in artificial intelligence and machine learning. Implemented and optimized several neural network architectures to accurately analyze physiological and behavioral data for stress indicators. This project involved extensive data preprocessing, model training, and validation to ensure high accuracy and reliability. Demonstrated expertise in Python, TensorFlow, and other relevant AI frameworks, emphasizing a strong ability to apply deep learning techniques to real-world health monitoring and psychological assessment applications. (1st August 2023 - 1st December)
Anurag Gambhir (4th year BTech, TIET, Patiala) completed his three-month summer internship at IIT Delhi. When he joined the internship, he had completed his 3rd year at Thapar Institute of Engineering & Technology. I supervised/guided him on the "Design of Wearable PPG Sensors for Continuous Cardiovascular Disease Monitoring" project throughout his internship. Authored a comprehensive review paper on the advancements in photoplethysmography (PPG) signal technology, highlighting significant developments and emerging trends in the field. Conducted an in-depth analysis of current literature, examining innovations in PPG sensor design, signal processing algorithms, and their applications in health monitoring. The paper emphasized the impact of these advancements on improving accuracy, usability, and the scope of PPG-based systems. This work demonstrates strong research skills, a thorough understanding of biomedical engineering, and the ability to synthesize complex information into coherent and insightful conclusions. (1st May - 1st August, 2023, Summer Intern)