Collaborative and Federated AI
We study collaborative learning frameworks that enable multiple agents, devices, or institutions to learn collectively while preserving data locality and personalization. Our research focuses on scalable algorithms for distributed learning environments where data and computation are decentralized and heterogeneous.
Key research topics include:
Federated continual learning
Personalized federated learning
Multi-agent AI systems
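To make the federated setting concrete, here is a minimal, illustrative sketch of a FedAvg-style round in NumPy (not our actual research code): each client takes a simplified local gradient step on private data, and the server aggregates the local models weighted by dataset size. Client gradients and sizes are hypothetical toy values.

```python
import numpy as np

def local_update(weights, grads, lr=0.1):
    # One simplified local step: gradient descent on the client's private data.
    return weights - lr * grads

def fedavg(client_weights, client_sizes):
    # Server-side aggregation: average client models weighted by local data size.
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Toy round: three clients sharing a 2-parameter model (hypothetical values).
global_w = np.zeros(2)
client_data_sizes = [100, 50, 50]
client_grads = [np.array([1.0, 0.0]),
                np.array([0.0, 2.0]),
                np.array([0.0, 2.0])]

local_models = [local_update(global_w, g) for g in client_grads]
global_w = fedavg(local_models, client_data_sizes)
print(global_w)
```

Raw data never leaves the clients; only model updates are communicated, which is the property that personalization and continual-learning variants build on.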
Efficient On-device AI
Modern AI models are increasingly deployed on resource-constrained devices such as mobile phones, wearable systems, and edge platforms. We develop efficient AI architectures and system-level techniques that enable large models to operate under strict compute, memory, and energy constraints. Our research explores efficient inference and training techniques for on-device foundation models.
Key research topics include:
Efficient inference for large language models (LLMs)
Parameter-efficient fine-tuning (PEFT)
Nested model architecture for on-device AI
Zeroth-order optimization for on-device learning
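Zeroth-order methods are attractive on-device because they estimate gradients from forward passes alone, avoiding the activation memory that backpropagation requires. A minimal sketch of an SPSA-style estimator on a toy quadratic objective (illustrative only; the objective and step sizes are hypothetical):

```python
import numpy as np

def spsa_grad(f, x, eps=1e-3, rng=None):
    # Zeroth-order gradient estimate via simultaneous perturbation:
    # two function evaluations regardless of dimension, no backprop needed.
    if rng is None:
        rng = np.random.default_rng(0)
    delta = rng.choice([-1.0, 1.0], size=x.shape)  # Rademacher perturbation
    return (f(x + eps * delta) - f(x - eps * delta)) / (2 * eps) * delta

# Toy objective: f(x) = ||x||^2, whose true gradient is 2x.
f = lambda x: float(np.sum(x ** 2))

rng = np.random.default_rng(0)
x = np.array([1.0, -2.0, 3.0])
for _ in range(500):
    x = x - 0.05 * spsa_grad(f, x, rng=rng)
print(f(x))  # loss driven close to zero using only function evaluations
```

The estimator is unbiased in expectation for smooth objectives (up to O(eps^2) terms), trading gradient fidelity for a drastically smaller memory footprint.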
Reliable and Adaptive AI
AI systems deployed in real-world environments must be reliable, controllable, and adaptable to changing conditions. Our research focuses on improving the robustness, controllability, and adaptability of modern AI models.
We study methods that allow models to modify, forget, or adapt knowledge while maintaining reliable behavior. Key research topics include:
Machine unlearning
Task arithmetic and model composition
Hallucination mitigation in multimodal models
Controllable and adaptive AI systems
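Task arithmetic gives a simple picture of how model behavior can be composed or selectively removed at the weight level. A minimal illustration with hypothetical flattened weight vectors (a sketch of the idea, not a full method):

```python
import numpy as np

def task_vector(finetuned, base):
    # A "task vector" is the weight delta induced by fine-tuning on one task.
    return finetuned - base

# Hypothetical flattened weights of a base model and two fine-tuned variants.
base      = np.array([0.0, 1.0, -1.0])
ft_task_a = np.array([0.5, 1.0, -1.0])  # adapted for task A
ft_task_b = np.array([0.0, 1.5, -1.0])  # adapted for task B

# Adding task vectors composes skills in one model;
# subtracting a task vector negates (forgets) the corresponding behavior.
multi_task = base + task_vector(ft_task_a, base) + task_vector(ft_task_b, base)
forget_a   = ft_task_a - task_vector(ft_task_a, base)
print(multi_task)  # carries both deltas
print(forget_a)    # recovers the base weights
```

The same editing view underlies machine-unlearning formulations, where the goal is to remove the influence of specific data or capabilities while keeping the rest of the model's behavior intact.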
Multimodal AI
Modern AI systems increasingly integrate information from multiple modalities such as vision, language, molecular structures, and sensor signals. Our research investigates how AI models can learn unified representations that integrate heterogeneous data sources.
We study techniques for modality fusion, representation alignment, and compositional latent representations that generalize across modalities and tasks. Key research topics include:
Modality fusion and cross-modal alignment
Compositional representation learning
Multimodal foundation models
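The core mechanics of modality fusion and cross-modal alignment can be sketched in a few lines: project each modality into a shared latent space, score alignment there, and fuse the aligned representations. All embeddings, dimensions, and projection weights below are hypothetical stand-ins for learned components.

```python
import numpy as np

def project(x, W):
    # Linear projection into a shared latent space, then L2-normalization.
    z = x @ W
    return z / np.linalg.norm(z)

# Hypothetical image and text embeddings of differing dimensionality.
rng = np.random.default_rng(0)
img_emb, txt_emb = rng.normal(size=8), rng.normal(size=5)
W_img, W_txt = rng.normal(size=(8, 4)), rng.normal(size=(5, 4))

z_img, z_txt = project(img_emb, W_img), project(txt_emb, W_txt)

# Cross-modal alignment score: cosine similarity in the shared space.
# A contrastive objective would push this up for matching pairs
# and down for mismatched ones.
alignment = float(z_img @ z_txt)

# Late fusion: concatenate aligned representations for a downstream head.
fused = np.concatenate([z_img, z_txt])
```

In practice the projections are learned jointly across modalities, which is what allows a single representation space to generalize across vision, language, and other signal types.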