Edge AI Lab
Department of Computer Science and Engineering, Yonsei University
Image generated by DALL·E 3
We are a research group focused on AI/ML at the edge, aiming to meet diverse user demands. Grounded in theoretical and empirical foundations, we develop algorithms that make AI scalable, trustworthy, and efficient, addressing key challenges in deploying practical AI services.
If you are interested in joining our group, please check this page for more details.
[Apr. 2025] Our paper on efficient federated machine translation was accepted to IEEE Transactions on Audio, Speech and Language Processing (TASLP)
[Jan. 2025] Five papers were accepted to ICLR 2025, including one as a spotlight paper!
[Jan. 2025] Our paper on submodel partitioning for hierarchical wireless federated learning was accepted to IEEE/ACM Transactions on Networking (ToN)
[Jan. 2025] Our paper on differentially private federated learning was accepted to IEEE ICC 2025
[Dec. 2024] Our paper on pre-training for federated learning was accepted to AAAI 2025
[Dec. 2024] Our paper on gradient correction for federated learning was presented at NeurIPS 2024
[Dec. 2024] Our paper on AI/ML over space-air-ground integrated networks was published in IEEE Journal on Selected Areas in Communications (JSAC)
[Oct. 2024] Our paper on federated learning for IoT fingerprinting was presented at WiOpt 2024
[Sep. 2024] The Edge AI Lab webpage is now open!
[ICLR'25] Decentralized Sporadic Federated Learning: A Unified Algorithmic Framework with Convergence Guarantees (Spotlight)
[ICLR'25] Asynchronous Federated Reinforcement Learning with Policy Gradient Updates: Algorithm Design and Convergence Analysis
[ICLR'25] Unlocking the Potential of Model Calibration in Federated Learning
[ICLR'25] PRISM: Privacy-Preserving Improved Stochastic Masking for Federated Generative Models
[ICLR'25] Adaptive Energy Alignment for Accelerating Test-Time Adaptation
[NeurIPS'24] Hierarchical Federated Learning with Multi-Timescale Gradient Correction
[ICML'24] Achieving Lossless Gradient Sparsification via Mapping to Alternative Space in Federated Learning
[NeurIPS'23] StableFDG: Style and Attention Based Learning for Federated Domain Generalization
[NeurIPS'23] NEO-KD: Knowledge-Distillation-Based Adversarial Training for Robust Multi-Exit Neural Networks
[ICML'23] Test-Time Style Shifting: Handling Arbitrary Styles in Domain Generalization
[ICLR'23] Warping the Space: Weight Space Rotation for Class-Incremental Few-Shot Learning (Spotlight presentation: notable-top-25%)
[ICLR'23] Active Learning for Object Detection with Evidential Deep Learning and Hierarchical Uncertainty Aggregation
[NeurIPS'21] Few-Round Learning for Federated Learning
[NeurIPS'21] Sageflow: Robust Federated Learning against Both Stragglers and Adversaries
[NeurIPS'20] Election Coding for Distributed Learning: Protecting SignSGD against Byzantine Attacks
[ToN'25] Federated Learning over Hierarchical Wireless Networks: Training Latency Minimization via Submodel Partitioning
[JSAC'24] Orchestrating Federated Learning in Space-Air-Ground Integrated Networks: Adaptive Data Offloading and Seamless Handover
[JSAC'24] Cooperative Federated Learning over Ground-to-Satellite Integrated Networks: Joint Local Computation and Data Offloading
[TMC'24] Federated Split Learning with Joint Personalization-Generalization for Inference-Stage Optimization in Wireless Edge Networks
[INFOCOM'23] SplitGP: Achieving Both Generalization and Personalization in Federated Learning
[INFOCOM'21] TiBroco: A Fast and Secure Distributed Learning Framework for Tiered Wireless Edge Networks
[JSAC'21] FedMes: Speeding Up Federated Learning with Multiple Edge Servers
[TWC'21] Coded Wireless Distributed Computing with Packet Losses and Retransmissions
[TWC'21] Hierarchical Broadcast Coding: Expediting Distributed Learning at the Wireless Edge