Robust, Hierarchical, and Privacy-Aware Federated Learning for Networking Applications
Despite its advantages, Federated Learning (FL) faces three significant challenges in networking applications. First, learning robustness: FL must withstand poisoning attacks that inject corrupted data or model updates to disrupt the learning process. Second, learning incompatibility: the two-layer client-server architecture of FL does not align with the three-layer client-edge-cloud architecture of networking applications. Third, learning privacy: local data must be protected against privacy breaches, regardless of the attacker's power. Although individual solutions for these challenges exist, they are often treated in isolation, resulting in fragmented approaches that address one issue at the expense of others. This project is the first to introduce a solution that jointly addresses all three challenges, proposing a holistic approach that tackles robustness, compatibility, and privacy simultaneously for improved adaptation of FL to these applications. The research will advance theory and develop practical tools that enable academia and industry to foster secure learning, enhance regional cybersecurity, and promote the adoption of FL.
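To make the robustness challenge concrete, the sketch below contrasts plain federated averaging with a coordinate-wise trimmed mean, one common robust-aggregation baseline. It is a minimal NumPy illustration over synthetic updates; the function names and parameters are ours, not the aggregation rule this project ultimately proposes.

```python
import numpy as np

def fedavg_aggregate(updates):
    """Plain federated averaging: every client update carries equal weight,
    so a single poisoned update can shift the global model arbitrarily."""
    return np.mean(np.stack(updates), axis=0)

def trimmed_mean_aggregate(updates, trim_ratio=0.2):
    """Coordinate-wise trimmed mean: drop the largest and smallest values in
    each coordinate before averaging, bounding the influence of outliers
    injected by poisoning clients."""
    stacked = np.stack(updates)                # shape: (num_clients, num_params)
    k = int(trim_ratio * stacked.shape[0])     # extremes to trim per side
    sorted_vals = np.sort(stacked, axis=0)     # sort each coordinate across clients
    return sorted_vals[k:stacked.shape[0] - k].mean(axis=0)

# Toy round: nine honest updates near 1.0 and one poisoned update at 100.0.
honest = [np.ones(4) + 0.01 * np.random.randn(4) for _ in range(9)]
poisoned = [np.full(4, 100.0)]
print("FedAvg      :", fedavg_aggregate(honest + poisoned))
print("Trimmed mean:", trimmed_mean_aggregate(honest + poisoned))
```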
Federated Learning-based Large Language Models
As Large Language Models (LLMs) become integral to intelligent services across healthcare, education, smart infrastructure, and edge computing, it is increasingly important to enable collaborative model development without compromising user data privacy or system security. Federated LLMs offer a promising solution by enabling decentralized training across distributed data sources, where data remains local and only model updates are shared. However, training LLMs in federated environments introduces significant challenges, including high communication overhead, system and statistical heterogeneity, and vulnerability to adversarial attacks. This project builds a novel federated LLM framework that prioritizes privacy, robustness, and scalability. It analyzes model vulnerabilities to attacks and builds a defense framework that detects and mitigates them to ensure trustworthy model behavior in adversarial environments. By advancing the theory and practice of Federated LLMs, this project will contribute to the broader vision of responsible, scalable, and decentralized AI systems that respect user privacy and enable equitable access to cutting-edge language technologies.
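One way federated LLM training keeps communication overhead manageable is to exchange only small adapter weights (e.g., LoRA matrices) rather than the full model. The sketch below is a minimal PyTorch illustration of averaging per-client adapter tensors for a single layer; the tensor names and shapes are illustrative assumptions, not the framework developed in this project.

```python
import torch

def average_adapters(client_adapters):
    """Average only the small LoRA adapter tensors returned by each client.
    The frozen base-model weights never leave the client, which is what keeps
    per-round communication small in this sketch."""
    keys = client_adapters[0].keys()
    return {k: torch.stack([c[k] for c in client_adapters]).mean(dim=0) for k in keys}

# Hypothetical round: four clients each send low-rank A/B matrices for one layer.
rank, d_model = 8, 4096
client_adapters = [
    {"lora_A": torch.randn(rank, d_model) * 0.01,
     "lora_B": torch.randn(d_model, rank) * 0.01}
    for _ in range(4)
]
global_adapter = average_adapters(client_adapters)
print({k: tuple(v.shape) for k, v in global_adapter.items()})
```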
Prompt injection attacks in Large Language Models
This project examines the application of prompt templates and specially crafted tokens to jailbreak language models and conduct systematic data poisoning attacks. Building upon the Nano-GCG attack framework, the project will demonstrate jailbreak vulnerabilities in LLaMA 3.2 and Phi-3.5 models, effectively bypassing alignment safeguards. It will also investigate the downstream impact of poisoning training data, analyzing how subtle adversarial perturbations degrade model performance when lightweight components, such as LoRA adapters, are fine-tuned on corrupted data. The work will provide insights into vulnerabilities in modern, sophisticated chat templates and how they can be exploited against production use cases of LLMs, offering a foundation for developing more robust defenses against prompt-based and data-centric attacks.
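As a rough illustration of where such attacks operate, the snippet below shows how an optimized adversarial suffix would be appended to a user message before the tokenizer's chat template wraps it in the model's special tokens. The model identifier is an assumed (gated) Hugging Face checkpoint, and the suffix is a placeholder rather than a working jailbreak string; GCG-style methods such as Nano-GCG search for that suffix via gradient-guided token substitutions.

```python
from transformers import AutoTokenizer

MODEL_NAME = "meta-llama/Llama-3.2-1B-Instruct"   # assumed, gated checkpoint
ADVERSARIAL_SUFFIX = "<optimized adversarial tokens would go here>"  # placeholder only

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
messages = [
    {"role": "user", "content": "<harmful request placeholder> " + ADVERSARIAL_SUFFIX},
]
# The chat template wraps the crafted request in the model's expected special
# tokens; template-aware attacks exploit exactly this structure.
prompt = tokenizer.apply_chat_template(
    messages, tokenize=False, add_generation_prompt=True
)
print(prompt)
```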
Explainability-Guided FL and its challenges
This project introduces an explanation-guided security framework for federated learning (FL) by integrating contribution-aware aggregation with rigorous privacy risk analysis. The proposed method leverages Shapley values to quantify each client's contribution and penalize malicious participants, enhancing robustness against FL attacks. In parallel, the project investigates the privacy risks of using explainability in FL, specifically how an honest-but-curious server could exploit explanation signals to reconstruct client data through model inversion attacks. By examining both the defensive and adversarial roles of explainability, this project aims to develop FL systems that are both robust and privacy-aware, particularly for high-stakes domains such as healthcare and networking.
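The contribution-aware idea can be sketched with a cheap leave-one-out proxy for Shapley values: a client's score is the drop in validation quality when its update is removed from the average, and non-positive scorers are down-weighted. This is a simplified stand-in for the project's Shapley-based method; `evaluate`, the toy updates, and the weighting rule are all illustrative assumptions.

```python
import numpy as np

def leave_one_out_contributions(updates, evaluate):
    """Proxy for Shapley contributions: score each client by how much the
    validation metric drops when its update is excluded from the average."""
    full_score = evaluate(np.mean(np.stack(updates), axis=0))
    contribs = []
    for i in range(len(updates)):
        rest = [u for j, u in enumerate(updates) if j != i]
        contribs.append(full_score - evaluate(np.mean(np.stack(rest), axis=0)))
    return np.array(contribs)

def contribution_weighted_aggregate(updates, contribs):
    """Down-weight (or drop) clients with non-positive contributions,
    penalizing likely malicious or low-quality participants."""
    weights = np.clip(contribs, a_min=0.0, a_max=None)
    if weights.sum() == 0:
        weights = np.ones_like(weights)
    return np.tensordot(weights / weights.sum(), np.stack(updates), axes=1)

# Toy usage: "quality" is closeness of the averaged update to a reference direction.
def evaluate(vec):
    return -float(np.linalg.norm(vec - np.ones(4)))

updates = [np.ones(4) * 0.9, np.ones(4) * 1.1, -np.ones(4) * 5.0]  # last one is hostile
scores = leave_one_out_contributions(updates, evaluate)
print(scores)
print(contribution_weighted_aggregate(updates, scores))
```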
Blockchain-based FL and its challenges
Federated learning (FL) has evolved to address the privacy needs of distributed machine learning by enabling model training without sharing local data. However, FL faces several challenges, including the central server's single point of failure and concerns around server trust. To overcome these issues, blockchain-based federated learning (BFL) has been introduced. Despite its advantages, BFL brings new vulnerabilities due to its open and decentralized nature. This project investigates the vulnerabilities of BFL to various attacks, including state-of-the-art model poisoning and data poisoning attacks. Based on these findings, it develops and evaluates defense mechanisms designed to detect, mitigate, and prevent such attacks. This research ultimately aims to improve the security and robustness of FL and BFL.
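One simple detection heuristic of the kind this project evaluates is to compare each submitted update against the coordinate-wise median direction and flag updates that point the opposite way before they are aggregated, whether by a server or by a smart contract. The sketch below is an assumed baseline, not the defense mechanism developed here.

```python
import numpy as np

def cosine_filter(updates, threshold=0.0):
    """Flag updates whose direction disagrees with the coordinate-wise median
    update; a basic heuristic against model-poisoning submissions in an open,
    blockchain-coordinated aggregation round."""
    reference = np.median(np.stack(updates), axis=0)
    kept, flagged = [], []
    for i, u in enumerate(updates):
        cos = np.dot(u, reference) / (np.linalg.norm(u) * np.linalg.norm(reference) + 1e-12)
        (kept if cos > threshold else flagged).append(i)
    return kept, flagged

updates = [np.array([1.0, 1.0]), np.array([0.9, 1.1]), np.array([-10.0, -10.0])]
print(cosine_filter(updates))  # the third update opposes the median and is flagged
```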
Hierarchical federated learning for the smart grid
While FL follows a two-tier client-server infrastructure, many practical networking applications operate within a three-tier infrastructure consisting of clients, edge nodes, and a central server. Hierarchical federated learning (HFL) addresses this mismatch by introducing an intermediate aggregation layer (typically edge servers), which reduces communication overhead, improves scalability, and enables regionally localized learning. This project investigates the use of HFL for smart grid applications along with its security and privacy issues. It also introduces a privacy-preserving HFL framework enhanced with differential privacy. By ensuring robust privacy and efficient learning in a distributed setting, this project aims to enable secure and adaptive deployment of HFL in critical infrastructure systems.
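A minimal sketch of the two ideas combined is shown below: clients clip and noise their updates locally (a standard Gaussian-mechanism step for differential privacy), edge nodes average their own clients, and the cloud averages the edge models. All parameters and the NumPy setup are illustrative assumptions rather than the framework's actual configuration.

```python
import numpy as np

def dp_client_update(update, clip_norm=1.0, noise_multiplier=1.0, rng=None):
    """Clip the client update and add Gaussian noise before it leaves the
    device; clip_norm and noise_multiplier are illustrative values."""
    if rng is None:
        rng = np.random.default_rng()
    clipped = update * min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    return clipped + rng.normal(0.0, noise_multiplier * clip_norm, size=update.shape)

def hierarchical_round(clients_per_edge):
    """Two-level aggregation: each edge node averages its own clients' noised
    updates, then the cloud averages the edge models (equal edge weights)."""
    edge_models = [
        np.mean(np.stack([dp_client_update(u) for u in clients]), axis=0)
        for clients in clients_per_edge
    ]
    return np.mean(np.stack(edge_models), axis=0)

# Toy run: two edge nodes with three clients each, 5-dimensional updates.
rng = np.random.default_rng(0)
clients_per_edge = [[rng.normal(size=5) for _ in range(3)] for _ in range(2)]
print(hierarchical_round(clients_per_edge))
```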