Our research group advances machine learning for medical imaging, with a primary focus on domain adaptation, test-time adaptation, and the development of lightweight models. In image reconstruction, our efforts aim to advance techniques that address challenges related to limited data and diverse imaging conditions. In medical image analysis, we leverage state-of-the-art machine learning methodologies, fostering advancements that enhance diagnostic accuracy and efficiency in healthcare applications.
RESEARCH FOCUS
Magnetic Resonance Image Reconstruction: This project could involve implementing and optimizing state-of-the-art Magnetic Resonance (MR) reconstruction algorithms, applying deep learning models for image enhancement, and addressing challenges such as noise reduction and accelerated imaging. It could further focus on designing algorithms that handle challenges arising in highly accelerated imaging, such as domain shift and test-time adaptation, ensuring robust performance across diverse datasets and real-world scenarios.
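The accelerated-imaging setting described above can be illustrated with a minimal NumPy sketch of single-coil Cartesian undersampling; the random line mask and 4x acceleration factor are illustrative assumptions, not the group's actual acquisition protocol:

```python
import numpy as np

def undersample(image, acceleration=4, seed=0):
    """Simulate accelerated single-coil MR acquisition: keep a random
    subset of k-space lines (rows) and zero-fill the rest."""
    rng = np.random.default_rng(seed)
    kspace = np.fft.fft2(image)
    mask = rng.random(image.shape[0]) < 1.0 / acceleration
    mask[0] = True  # always keep the DC line
    masked = kspace * mask[:, None]
    return masked, mask

def zero_filled_recon(masked_kspace):
    """Naive baseline: inverse FFT of the zero-filled k-space.
    Learned reconstructions aim to remove the resulting aliasing."""
    return np.abs(np.fft.ifft2(masked_kspace))
```

With full sampling the inverse FFT recovers the image exactly; the aliasing introduced by the missing lines is what deep reconstruction models are trained to remove, and domain shift arises when the mask or contrast at test time differs from training.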
Quantitative Susceptibility Mapping Reconstruction: Quantitative Susceptibility Mapping (QSM) enables accurate quantification of myelin, iron, and calcium content in the brain by estimating the tissue magnetic susceptibility distribution from MR phase measurements. Our lab focuses on QSM and its transformative clinical applications. We develop end-to-end pipelines for computing tissue susceptibility, unraveling valuable insights into neurodegenerative diseases.
Limited Angle Computed Tomography Reconstruction: This project aims to develop algorithms for accurate image reconstruction for computed tomography (CT) from limited-angle (LA) acquisitions, enabling substantial reduction in harmful radiation doses without compromising diagnostic quality. The focus will be on optimizing reconstruction techniques for various geometries, including parallel-beam and fan-beam, and addressing challenges posed by limited projection views to enhance the efficiency and applicability of CT imaging in medical, industrial, and security contexts. Through this research, the project seeks to contribute to safer and more sustainable CT imaging practices while maintaining high-quality diagnostic capabilities.
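A toy parallel-beam projector makes the limited-angle problem concrete; this is a hedged sketch with nearest-neighbour grid rotation and illustrative angle ranges, not the project's reconstruction code:

```python
import numpy as np

def radon_parallel(image, angles_deg):
    """Toy parallel-beam forward projector: rotate the sampling grid
    (nearest-neighbour) and sum along one axis per angle."""
    n = image.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    sino = np.zeros((len(angles_deg), n))
    for i, theta in enumerate(np.deg2rad(angles_deg)):
        xr = (xs - c) * np.cos(theta) + (ys - c) * np.sin(theta) + c
        yr = -(xs - c) * np.sin(theta) + (ys - c) * np.cos(theta) + c
        xi = np.clip(np.round(xr).astype(int), 0, n - 1)
        yi = np.clip(np.round(yr).astype(int), 0, n - 1)
        sino[i] = image[yi, xi].sum(axis=0)
    return sino

def backproject(sino, angles_deg):
    """Unfiltered backprojection: smear each projection back across
    the image along its acquisition angle."""
    n = sino.shape[1]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    recon = np.zeros((n, n))
    for i, theta in enumerate(np.deg2rad(angles_deg)):
        t = (xs - c) * np.cos(theta) + (ys - c) * np.sin(theta) + c
        ti = np.clip(np.round(t).astype(int), 0, n - 1)
        recon += sino[i][ti]
    return recon / len(angles_deg)
```

Restricting the arc (say 0-120 degrees instead of 0-180) leaves a whole wedge of spatial frequencies unmeasured; that missing-data wedge is what learned priors must fill in limited-angle reconstruction.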
Our laboratory is actively engaged in advancing the frontiers of medical imaging through innovative projects focused on the analysis of diverse imaging modalities, including Optical Coherence Tomography (OCT), Computed Tomography (CT), Magnetic Resonance Imaging (MRI), and ultrasound. In the realm of OCT, our research aims to refine classification techniques for accurate diagnosis, enabling enhanced precision in identifying various pathologies. For CT and MRI, our efforts are dedicated to developing sophisticated segmentation algorithms that facilitate detailed anatomical and pathological delineation, contributing to improved clinical interpretation. In the domain of ultrasound imaging, we explore novel approaches for both classification and segmentation, harnessing the potential of data science to elevate diagnostic capabilities. These projects underscore our commitment to advancing medical image analysis and its transformative impact on healthcare diagnostics. Furthermore, our research extends to the adaptation of these models for deployment on edge devices, ensuring their practicality and efficiency in real-world healthcare settings.
LIVE PROJECTS
Deep Learning for Image Reconstruction
Weakly Supervised Machine Learning for Medical Image Analysis
We are developing AI-driven, automated tools for medical image analysis applications, including breast cancer detection and fatty liver volume prediction. Our approach leverages weakly supervised deep learning for classification and segmentation, ensuring robust, interpretable, and clinically relevant outcomes. By integrating AI on the edge for real-time, point-of-care analysis, we aim to make early detection faster and more accessible. In collaboration with expert clinicians, we are translating cutting-edge AI innovations into practical clinical applications, driving the future of personalized healthcare. We are actively looking for passionate and driven Ph.D. candidates to join us in developing AI-driven healthcare solutions.
AI for Geospatial Data Analytics
We are also working on AI-driven geospatial data analytics, leveraging deep learning, self-supervised learning, and scalable architectures to process massive remote sensing datasets. Our work spans land-use classification, environmental monitoring, disaster management, and climate change assessment, transforming raw satellite and aerial imagery into actionable insights. With a focus on edge AI and real-time geospatial intelligence, we aim to revolutionize urban planning, agriculture, and ecological forecasting. We are seeking passionate Ph.D. candidates to join us in advancing cutting-edge AI for geospatial applications.
COMPLETED MAJOR THESES
Network Quantization for Medical Image Segmentation: A Post-Training Perspective
Sourav Ramachandran
Medical image segmentation is vital for the diagnosis and monitoring of diseases such as COVID-19. This study investigates post-training quantization techniques applied to the U-Net architecture to segment COVID-19 lesions on chest CT scans. A unified dataset comprising 2,729 image-mask pairs was created by merging three publicly available sources, with consistent white-mask representations for all lesion types.
The baseline U-Net model demonstrated strong segmentation performance. In addition, the incorporation of deep supervision improved metrics such as the Dice score and the intersection over union (IoU). Following model training, quantization methods were applied, specifically fixed-point quantization and the alternating direction method of multipliers (ADMM), yielding substantial reductions in model size. Incorporating weight regularization within the ADMM framework led to additional performance gains.
Although the original U-Net offered the highest segmentation accuracy, it came at the cost of higher computational demands. In contrast, the ADMM-based quantized model with deep supervision and weight regularization provided a better trade-off, maintaining competitive accuracy while significantly improving model compactness and inference efficiency. These findings highlight the potential of ADMM-based quantization to enable lightweight, real-time deep learning models for medical image segmentation in resource-constrained environments.
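The fixed-point scheme mentioned above can be sketched as symmetric per-tensor post-training quantization; the 8-bit width and symmetric scale are assumptions for illustration (the ADMM variant additionally alternates loss minimization with projection onto the quantized set, which is not shown):

```python
import numpy as np

def quantize_fixed_point(weights, n_bits=8):
    """Symmetric per-tensor fixed-point quantization: map float weights
    to signed integers with a single scale, post-training (no retraining)."""
    qmax = 2 ** (n_bits - 1) - 1          # e.g. 127 for 8 bits
    scale = np.abs(weights).max() / qmax
    if scale == 0:
        scale = 1.0                        # guard for an all-zero tensor
    q = np.round(weights / scale).astype(np.int8 if n_bits <= 8 else np.int32)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for inference."""
    return q.astype(np.float32) * scale
```

The round-trip error per weight is bounded by half a quantization step, while storage drops from 32 bits to n_bits per weight, which is the size/accuracy trade-off the thesis measures on U-Net.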
From Training to Inference: Supervised and Unsupervised Learning for Clinically Robust QSM Reconstruction
Aaqilah A J
Quantitative Susceptibility Mapping (QSM) is a powerful MRI technique used to infer tissue magnetic susceptibility, with applications in neuroimaging and clinical diagnosis. Traditional QSM reconstruction methods often struggle with heavy computational requirements and poor generalizability across acquisition protocols.
This report presents two complementary approaches to address these limitations. First, we introduce a supervised VAE-UNet architecture trained to enhance structural fidelity and generalization in data-constrained settings. Building on this, we propose a fully unsupervised training framework followed by Test-Time Adaptation (TTA) that dynamically refines model parameters per subject during inference using physics-based losses.
The adaptation is guided by Stein’s Unbiased Risk Estimator (SURE) for principled early stopping to prevent overfitting. Our method demonstrates superior performance compared to conventional QSM techniques and generalizes well to out-of-distribution datasets. We further highlight QSM’s clinical utility by estimating oxygen extraction fraction (OEF) maps, underscoring its potential in quantitative neuroimaging.
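The physics-based loss driving the test-time adaptation rests on the QSM forward model, a dipole convolution in k-space; the sketch below is an illustrative NumPy version (grid size and normalization are assumptions, and the SURE-based stopping rule is not shown):

```python
import numpy as np

def dipole_kernel(shape):
    """Unit-dipole kernel in k-space, D(k) = 1/3 - kz^2 / |k|^2,
    with the undefined k = 0 entry set to zero."""
    ks = [np.fft.fftfreq(n) for n in shape]
    kx, ky, kz = np.meshgrid(*ks, indexing="ij")
    k2 = kx**2 + ky**2 + kz**2
    D = 1.0 / 3.0 - kz**2 / np.where(k2 == 0, 1.0, k2)
    D[k2 == 0] = 0.0
    return D

def forward_field(chi):
    """Local field perturbation predicted from a susceptibility map chi."""
    D = dipole_kernel(chi.shape)
    return np.real(np.fft.ifftn(D * np.fft.fftn(chi)))

def physics_loss(chi_estimate, measured_field):
    """Data-consistency term a TTA loop would minimize per subject."""
    return float(np.mean((forward_field(chi_estimate) - measured_field) ** 2))
```

Because the kernel vanishes on the magic-angle cone, the inverse problem is ill-posed, which is why per-subject adaptation with this data-consistency term (plus a principled stopping criterion) helps at inference time.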
Feature Augmentation-Driven Deep Learning Approach for Robust Breast Cancer Classification
Hisham Hassan O T
Detecting breast cancer in its early stages is a critical task that requires highly trained clinicians to identify the subtle abnormalities in radiological images that could indicate the presence of cancer. However, the limited availability of expert radiologists makes early detection difficult.
Recent advancements in artificial intelligence (AI) have demonstrated great potential in addressing these limitations, offering enhanced capabilities for breast cancer detection. However, the success of these models is often hindered by the need for extensive, high-quality datasets, which are constrained by stringent privacy regulations. One way to tackle these challenges is to augment the data (or features) by transforming what is available into a more diverse training set. However, not all transformations guarantee improved performance.
This study introduces a feature augmentation method that significantly improves model performance: during a pre-training phase, it identifies the feature transformations that contribute to accurate predictions at different network layers, and during the training phase it applies those transformations to augment the features at the respective layers.
Compared to an optimally transformed data augmentation baseline, the proposed approach achieved accuracy improvements of 3.75% and 5.28%, and F1-score improvements of 4.67% and 4.28%, under the 6.6% and 20% limited-data scenarios considered in this study, respectively.
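The selection idea above, scoring candidate transformations during a pre-training phase and keeping only those that do not hurt, can be sketched in miniature; the nearest-centroid probe and the candidate transforms here are illustrative assumptions, not the thesis's actual scoring criterion:

```python
import numpy as np

def probe_accuracy(feats, labels):
    """Cheap proxy for downstream performance: nearest-centroid accuracy
    on a batch of feature vectors."""
    classes = np.unique(labels)
    centroids = np.stack([feats[labels == c].mean(axis=0) for c in classes])
    d2 = ((feats[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    return float((classes[d2.argmin(axis=1)] == labels).mean())

def select_transforms(feats, labels, candidates):
    """Keep only the candidate transforms whose probe score is at least
    the untransformed baseline; the survivors are used as augmentations."""
    base = probe_accuracy(feats, labels)
    return [name for name, t in candidates.items()
            if probe_accuracy(t(feats), labels) >= base]
```

In the full method this filtering would be run per network layer, so that each layer is augmented only with transformations that preserved its discriminative structure during pre-training.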
Vision-Driven and Multimodal Deep Learning Approaches for Aerial Vehicle Classification
Goutham Raj
Aerial vehicle classification is of prime importance in applications such as surveillance, security, air traffic control, and commercial UAV management. The task is to distinguish different types of aerial vehicles from non-vehicle backgrounds. However, factors such as appearance variability, outdoor noise, limited training data, and real-time processing constraints make classification difficult.
This thesis presents effective deep learning models for aerial vehicle classification. We introduce a custom lightweight CNN for resource-constrained environments, examine the efficacy of transformer-based models for improved feature learning, and present a multimodal approach that combines image and audio modalities for enhanced robustness.
Together, these solutions aim to provide scalable, accurate, real-time models for deployment in real-world applications.
Remote Sensing-based Fire Risk Assessment Using Deep Learning: Addressing Class Imbalance and Inter-Class Similarities
Ingole Prasmit Pralhad
Forest fires cause severe environmental damage and financial losses, threatening biodiversity, natural resources, and human settlements. In light of these challenges, there is a critical need for accurate forecasting systems and efficient risk mitigation strategies to sustainably manage forested regions. This study evaluates the performance of convolutional neural networks (ResNet18, EfficientNet, and MobileNet) on high-resolution satellite imagery for wildfire risk classification. To further enhance robustness and accuracy, an ensemble approach combines the predictions of the individual models through probability averaging. Experimental evaluation on the FireRisk dataset shows that the ensemble model outperforms the individual networks, achieving an accuracy of 63.10% across seven fire risk categories. When evaluated on a merged-label (3-class) configuration, the ensemble demonstrated further improvement, reaching an accuracy of 80% and an F1 score of 80.32%, reflecting its effectiveness in dealing with class imbalance and inter-class similarities.
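The probability-averaging ensemble used in this study reduces to a short NumPy sketch; the example probability matrices below are illustrative, not outputs of the actual ResNet18/EfficientNet/MobileNet models:

```python
import numpy as np

def ensemble_average(prob_maps):
    """Average per-class softmax probabilities from several models and
    take the argmax class. prob_maps: list of (n_samples, n_classes) arrays."""
    avg = np.mean(np.stack(prob_maps, axis=0), axis=0)
    return avg.argmax(axis=1), avg
```

Averaging probabilities (rather than hard votes) lets a confident model outvote two uncertain ones, which is one reason such ensembles cope better with highly similar classes.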