Dong-Jun Han
Postdoctoral Researcher @ Purdue University
Email: djhan930@gmail.com, han762@purdue.edu
I am a postdoctoral researcher at Purdue University, working with Prof. Christopher G. Brinton and Prof. Mung Chiang. I received my Ph.D. degree in EE from KAIST in 2022, where I was fortunate to be advised by Prof. Jaekyun Moon, and my dissertation received the Best Ph.D. Dissertation Award from KAIST EE. Before that, I received my B.S. and M.S. degrees from KAIST in 2016 and 2018, respectively. My research focuses on Edge AI/ML, including distributed/federated learning and efficient/trustworthy on-device AI, with the overall goal of delivering intelligent services to users at the network edge. Along this direction, I have published in top-tier ML venues (NeurIPS, ICML, ICLR) as well as top-tier networking/communications venues (JSAC, TWC, TMC, INFOCOM).
[Fall 2024] I will be joining the Department of Computer Science and Engineering at Yonsei University as an Assistant Professor. If you would like to join our group and work on interesting AI/ML research topics, feel free to send me an email (djhan930@gmail.com) with your transcript and CV.
News
[May 2024] One paper on gradient compression has been accepted to ICML 2024
[May 2024] I will be serving on the Technical Program Committee for IEEE INFOCOM 2025
[Jan. 2024] Three papers have been accepted to IEEE ICC 2024
[Dec. 2023] One paper on model calibration has been accepted to AAAI 2024
[Dec. 2023] One paper on satellite-assisted AI/ML has been accepted to IEEE Journal on Selected Areas in Communications (JSAC)
[Oct. 2023] One paper on edge-AI training/inference has been accepted to IEEE Transactions on Mobile Computing (TMC)
[Sep. 2023] Two papers have been accepted to NeurIPS 2023
[July 2023] One paper on domain generalization was presented at ICML 2023
[May 2023] Two papers were presented at ICLR 2023
[May 2023] One paper was presented at IEEE INFOCOM 2023
Selected Publications: AI/ML Algorithms
[ICML'24] Achieving Lossless Gradient Sparsification via Mapping to Alternative Space in Federated Learning
[AAAI'24] Consistency-Guided Temperature Scaling Using Style and Content Information for Out-of-Domain Calibration
[NeurIPS'23] StableFDG: Style and Attention Based Learning for Federated Domain Generalization
[NeurIPS'23] NEO-KD: Knowledge-Distillation-Based Adversarial Training for Robust Multi-Exit Neural Networks
[ICML'23] Test-Time Style Shifting: Handling Arbitrary Styles in Domain Generalization
[ICLR'23] Warping the Space: Weight Space Rotation for Class-Incremental Few-Shot Learning (Spotlight presentation: notable-top-25%)
[ICLR'23] Active Learning for Object Detection with Evidential Deep Learning and Hierarchical Uncertainty Aggregation
[Ph.D. Thesis'22] Fast and Robust Distributed Machine Learning (Best Ph.D. Dissertation Award from KAIST EE)
[NeurIPS'21] Few-Round Learning for Federated Learning
[NeurIPS'21] Sageflow: Robust Federated Learning against Both Stragglers and Adversaries
[NeurIPS'20] Election Coding for Distributed Learning: Protecting SignSGD against Byzantine Attacks
Selected Publications: AI/ML over Networks
[JSAC'24] Cooperative Federated Learning over Ground-to-Satellite Integrated Networks: Joint Local Computation and Data Offloading
[TMC'24] Federated Split Learning with Joint Personalization-Generalization for Inference-Stage Optimization in Wireless Edge Networks
[INFOCOM'23] SplitGP: Achieving Both Generalization and Personalization in Federated Learning
[INFOCOM'21] TiBroco: A Fast and Secure Distributed Learning Framework for Tiered Wireless Edge Networks
[JSAC'21] FedMes: Speeding Up Federated Learning with Multiple Edge Servers
[TWC'21] Coded Wireless Distributed Computing with Packet Losses and Retransmissions
[TWC'21] Hierarchical Broadcast Coding: Expediting Distributed Learning at the Wireless Edge