Associated Student: Ujjwal Vivek
In this research, we have developed a multimodal fake news detection framework for low-resource Indian languages, leveraging both text and image analysis. News data is collected from multiple fact-checking sources, covering languages with limited datasets. Text features are extracted with XLM-RoBERTa and image features with Vision Transformers (ViT), while BLIP-2 generates contextual captions and ALBEF aligns the two modalities before classification. This approach enhances fake news detection, with future improvements focusing on meme analysis and transfer learning for low-resource languages.
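A minimal late-fusion sketch of the classification stage described above: features from a text encoder (XLM-RoBERTa in the project) and an image encoder (ViT) are concatenated and passed to a logistic classifier. The 4-dimensional vectors, random weights, and the plain concatenate-then-classify fusion are illustrative stand-ins, not the project's actual ALBEF-based alignment.

```python
import math
import random

def fuse_and_classify(text_feat, img_feat, weights, bias):
    """Concatenate modality feature vectors, then apply a logistic classifier.
    Returns the predicted probability that the news item is fake."""
    fused = text_feat + img_feat  # list concatenation = feature concatenation
    score = sum(w * x for w, x in zip(weights, fused)) + bias
    return 1.0 / (1.0 + math.exp(-score))

rng = random.Random(0)
text_feat = [rng.gauss(0, 1) for _ in range(4)]  # stand-in for a 768-d XLM-R vector
img_feat = [rng.gauss(0, 1) for _ in range(4)]   # stand-in for a 768-d ViT vector
weights = [0.1] * 8                              # hypothetical classifier weights
prob_fake = fuse_and_classify(text_feat, img_feat, weights, bias=0.0)
```

In practice the fused vector would feed a trained classification head rather than fixed weights; this only shows the data flow.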
Associated Student: Aniket Pasi
In this research, we have developed a multimodal fake news detection framework using text and image analysis. News data is collected through web scraping with BeautifulSoup, yielding diverse, real-world datasets. Text is processed using transformer-based models, while images are analyzed with Vision Transformers (ViT); fusing the two modalities improves detection accuracy and the reliability of misinformation identification.
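A sketch of the collection step: pulling article headlines out of a news page's HTML. The project uses BeautifulSoup; the stdlib `html.parser` stands in here so the example is self-contained, and the HTML snippet, tag, and class names are illustrative.

```python
from html.parser import HTMLParser

class HeadlineParser(HTMLParser):
    """Collect the text of <h2 class="headline"> elements from an HTML page."""
    def __init__(self):
        super().__init__()
        self.in_headline = False
        self.headlines = []

    def handle_starttag(self, tag, attrs):
        if tag == "h2" and ("class", "headline") in attrs:
            self.in_headline = True

    def handle_endtag(self, tag):
        if tag == "h2":
            self.in_headline = False

    def handle_data(self, data):
        if self.in_headline:
            self.headlines.append(data.strip())

# Illustrative page snippet; a real scraper would fetch this over HTTP.
page = ('<h2 class="headline">Claim A debunked</h2><p>body text</p>'
        '<h2 class="headline">Claim B verified</h2>')
parser = HeadlineParser()
parser.feed(page)
```

With BeautifulSoup the same extraction is roughly `[h.get_text() for h in soup.find_all("h2", class_="headline")]`.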
Associated Student: Lokesh Pagadala
In this research, we have proposed a federated learning-based intrusion detection system (FL-IDS) leveraging deep neural networks (DNNs) for anomaly detection in IoT networks. FL enables decentralized model training across distributed edge devices, preserving data privacy. DNNs enhance detection accuracy compared to traditional rule-based methods. This framework improves security and scalability in IoT-based intrusion detection.
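The decentralized training loop described above typically aggregates client models with federated averaging (FedAvg): each edge device trains locally, and the server combines the resulting parameter vectors weighted by local dataset size. A minimal sketch, with flat parameter lists standing in for full DNN weights:

```python
def fedavg(client_weights, client_sizes):
    """FedAvg: average client parameter vectors, weighted by the number of
    local training samples each client holds. Raw data never leaves a client;
    only model parameters are shared."""
    total = sum(client_sizes)
    dim = len(client_weights[0])
    return [
        sum(w[i] * n for w, n in zip(client_weights, client_sizes)) / total
        for i in range(dim)
    ]

# Two IoT edge nodes with equal data volumes (illustrative values).
global_weights = fedavg([[1.0, 1.0], [3.0, 3.0]], [100, 100])
```

A real FL-IDS would repeat this round many times, redistributing `global_weights` to clients between rounds.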
Associated Student: Ankur Gamit
In this research, we have developed a federated learning framework for pneumonia detection using chest X-ray datasets, integrating differential privacy to safeguard medical data at both local and global levels. Explainable AI techniques enhance model transparency and interpretability, addressing data scarcity and privacy challenges in distributed healthcare environments.
This work presents a novel method for detecting Interest Flooding Attacks (IFA) in Named Data Networking using an Attention-Based Federated Learning (FL) framework. The approach enables decentralized learning across edge nodes, preserving privacy while training robust models collaboratively. Attention mechanisms are employed to dynamically weigh contributions from different clients, enhancing detection accuracy. The proposed model effectively identifies anomalous interest packet patterns without sharing raw data. Experimental results demonstrate significant improvements in detection performance and scalability over traditional centralized and non-attentive FL methods.
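The attention-weighted aggregation can be sketched as a softmax over per-client relevance scores, with the server combining client updates by the resulting weights. How the scores themselves are computed (e.g. from update similarity or validation performance) is left open here:

```python
import math

def attention_aggregate(client_updates, scores):
    """Turn per-client relevance scores into attention weights via softmax,
    then return the weighted sum of client update vectors plus the weights."""
    m = max(scores)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    weights = [e / z for e in exps]
    dim = len(client_updates[0])
    agg = [sum(w * u[i] for w, u in zip(weights, client_updates))
           for i in range(dim)]
    return agg, weights

# Equal scores degenerate to plain averaging (illustrative values).
agg, weights = attention_aggregate([[1.0, 0.0], [0.0, 1.0]], [0.0, 0.0])
```

Clients whose updates look anomalous would receive lower scores and thus contribute less to the global IFA detector.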
Associated Student: Devanshu Chauhan
This work explores the application of the Zero Trust Architecture (ZTA) paradigm to enhance the security of Internet of Things (IoT) networks. Unlike traditional perimeter-based models, Zero Trust assumes no implicit trust and enforces strict identity verification and access controls. The proposed framework integrates continuous authentication, micro-segmentation, and device posture assessment tailored for resource-constrained IoT environments. It addresses key challenges such as device heterogeneity, scalability, and dynamic threat landscapes. The approach significantly improves resilience against internal and external threats in IoT ecosystems.
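The "no implicit trust" principle reduces, per request, to a policy decision that grants access only when every check passes. A toy sketch of such a decision point; the three checks mirror the continuous authentication, micro-segmentation, and device posture assessment named above, and the field names are hypothetical:

```python
def authorize(request):
    """Zero Trust per-request decision: deny by default, allow only when
    identity, device posture, and network segment policy all check out.
    A missing or false field means the check fails."""
    return all((
        request.get("identity_verified", False),   # continuous authentication
        request.get("device_posture_ok", False),   # device posture assessment
        request.get("segment_allowed", False),     # micro-segmentation policy
    ))

ok = authorize({"identity_verified": True,
                "device_posture_ok": True,
                "segment_allowed": True})
denied = authorize({"identity_verified": True,
                    "device_posture_ok": True})  # posture ok, segment unknown
```

Denial on any missing signal is the key contrast with perimeter models, which would admit traffic already inside the network.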
This work addresses the detection of Slow Distributed Denial of Service (DDoS) attacks that use low-rate traffic to exhaust server resources stealthily. Traditional threshold-based methods often fail to detect such attacks due to their subtle nature. The proposed method leverages time-based traffic features and machine learning techniques for accurate identification. Experiments are conducted on publicly available datasets to validate effectiveness. The goal is to improve early detection and mitigation of stealthy DDoS threats in networked systems.
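Slow DDoS flows evade rate thresholds precisely because their packet rate stays low, so the time-based features mentioned above look at flow duration and inter-arrival patterns instead. A sketch of the feature-extraction step (the feature set is illustrative, not the paper's exact one):

```python
def time_features(timestamps):
    """Simple time-based flow features from packet arrival times (seconds).
    Assumes at least two packets. Long duration combined with a low, steady
    packet rate is characteristic of slow DDoS traffic."""
    ts = sorted(timestamps)
    gaps = [b - a for a, b in zip(ts, ts[1:])]
    duration = ts[-1] - ts[0]
    return {
        "duration": duration,
        "mean_interarrival": sum(gaps) / len(gaps),
        "packet_rate": len(ts) / duration if duration > 0 else float("inf"),
    }

feats = time_features([0.0, 2.0, 4.0, 6.0])  # illustrative slow, steady flow
```

These feature vectors would then be fed to a trained classifier rather than compared against fixed thresholds.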
This work focuses on detecting deepfake content generated using advanced AI techniques like GANs. It explores spatial and temporal inconsistencies in forged videos and images to identify manipulations. A hybrid approach combining CNNs and transformer-based models is proposed for improved accuracy. The model is trained and evaluated on benchmark datasets such as FaceForensics++ and DFDC. The study aims to enhance digital media integrity and combat misinformation.
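The temporal-inconsistency idea can be illustrated at its simplest: per-frame artifact scores from a spatial model should vary smoothly in genuine video, so abrupt frame-to-frame jumps are suspicious. This is a crude stand-in for the CNN+transformer pipeline, not the proposed model:

```python
def temporal_inconsistency(frame_scores, threshold):
    """Flag a video whose per-frame artifact scores (e.g. from a CNN applied
    frame by frame) jump abruptly between consecutive frames. Genuine footage
    tends to produce smoothly varying scores; spliced or generated frames
    often break that continuity."""
    jumps = [abs(b - a) for a, b in zip(frame_scores, frame_scores[1:])]
    return max(jumps) > threshold

suspicious = temporal_inconsistency([0.1, 0.1, 0.9, 0.1], threshold=0.5)
clean = temporal_inconsistency([0.10, 0.12, 0.11, 0.13], threshold=0.5)
```

In the actual hybrid design, a transformer over the frame-feature sequence would learn such temporal patterns rather than rely on a hand-set threshold.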
This work investigates the development and fine-tuning of Large Language Models (LLMs) for low-resource Indic languages. It addresses challenges like limited annotated data, diverse scripts, and complex morphology inherent to Indian languages. The approach leverages transfer learning, multilingual pretraining, and data augmentation to improve performance. Evaluation shows promising results in tasks like machine translation, sentiment analysis, and question answering. The study contributes toward inclusive AI by empowering underrepresented linguistic communities.
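Of the techniques listed above, data augmentation is the simplest to sketch without an external model: one common text augmentation for low-resource training data is random word dropout. The function below is an illustrative example of that single technique, not the study's full augmentation pipeline (which could also include back-translation or script-level perturbations):

```python
import random

def augment(sentence, rng, p_drop=0.1):
    """Random word dropout: keep each word with probability 1 - p_drop.
    Generates noisy variants of scarce training sentences; if everything
    would be dropped, fall back to the original sentence."""
    words = sentence.split()
    kept = [w for w in words if rng.random() >= p_drop]
    return " ".join(kept) if kept else sentence

rng = random.Random(0)
original = "the model answers questions in many languages"
variant = augment(original, rng, p_drop=0.3)
unchanged = augment(original, rng, p_drop=0.0)  # p_drop=0 keeps every word
```

Each augmented variant shares the original's label, multiplying the effective size of a small annotated corpus.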