Home
Dong-Jun Han
Postdoctoral Researcher @ Purdue University
Email: han762@purdue.edu, djhan930@gmail.com
I am a postdoctoral researcher at Purdue University working with Prof. Christopher G. Brinton and Prof. Mung Chiang. I received my Ph.D. degree in EE from KAIST in 2022, where I was fortunate to be advised by Prof. Jaekyun Moon and to receive the Best Ph.D. Dissertation Award. Before that, I received my B.S. and M.S. degrees from KAIST in 2016 and 2018, respectively. My research interests lie at the intersection of communications, networking, and machine learning, with the general goal of providing intelligent services over 6G communication networks. In this direction, I have been publishing papers in top-tier ML conferences (NeurIPS, ICML, ICLR), communications/networking conferences (INFOCOM), and journals (JSAC, TWC, TMC).
News
[May 2024] Our new paper on gradient compression has been accepted to ICML 2024
"Achieving Lossless Gradient Sparsification via Mapping to Alternative Space in Federated Learning"
[Jan. 2024] Three papers have been accepted to IEEE ICC 2024
"Cooperative Federated Learning over Hybrid Terrestrial and Non-Terrestrial Networks"
"Submodel Partitioning in Hierarchical Federated Learning: Algorithm Design and Convergence Analysis"
"FedMFS: Federated Multimodal Fusion Learning with Selective Modality Communication"
[Dec. 2023] Our new paper on satellite-assisted AI/ML has been accepted to IEEE Journal on Selected Areas in Communications (JSAC)
"Cooperative Federated Learning over Ground-to-Satellite Integrated Networks: Joint Local Computation and Data Offloading"
My 3rd JSAC paper as a first author!
[Dec. 2023] Our new paper on model calibration has been accepted to AAAI 2024
"Consistency-Guided Temperature Scaling using Style and Content Information for Out-of-Domain Calibration"
[Oct. 2023] Our new paper on edge-AI training/inference has been accepted to IEEE Transactions on Mobile Computing (TMC)
"Federated Split Learning with Joint Personalization-Generalization for Inference-Stage Optimization in Wireless Edge Networks"
My first journal paper with my postdoc advisors at Purdue!
[Sep. 2023] Two papers have been accepted to NeurIPS 2023
"StableFDG: Style and Attention Based Learning for Federated Domain Generalization"
"NEO-KD: Knowledge-Distillation-Based Adversarial Training for Robust Multi-Exit Neural Networks"
[May 2023] Our new paper on multi-exit neural networks has been accepted to IEEE Transactions on Neural Networks and Learning Systems (TNNLS)
"Improving Low-Latency Predictions in Multi-Exit Neural Networks via Block-Dependent Losses"
[Apr. 2023] Our new paper on domain generalization has been accepted to ICML 2023
"Test-Time Style Shifting: Handling Arbitrary Styles in Domain Generalization"
[Jan. 2023] I officially started my new career as a postdoctoral researcher at Purdue University, working with Prof. Christopher G. Brinton and Prof. Mung Chiang!
Selected Achievements (Communications/Networking)
[JSAC'24] Cooperative Federated Learning over Ground-to-Satellite Integrated Networks: Joint Local Computation and Data Offloading
[TMC'24] Federated Split Learning with Joint Personalization-Generalization for Inference-Stage Optimization in Wireless Edge Networks
[INFOCOM'23] SplitGP: Achieving Both Generalization and Personalization in Federated Learning
[Best Ph.D. Thesis'22] Fast and Robust Distributed Machine Learning
[JSAC'21] FedMes: Speeding Up Federated Learning with Multiple Edge Servers
[INFOCOM'21] TiBroco: A Fast and Secure Distributed Learning Framework for Tiered Wireless Edge Networks
[TWC'21] Hierarchical Broadcast Coding: Expediting Distributed Learning at the Wireless Edge
[TWC'21] Coded Wireless Distributed Computing with Packet Losses and Retransmissions
[TWC'21] Probabilistic Caching and Dynamic Delivery Policies for Categorized Contents and Consecutive User Demands
[TWC'18] Bi-Directional Cooperative NOMA Without Full CSIT
[JSAC'17] Combined Subband-Subcarrier Spectral Shaping in Multi-Carrier Modulation under the Excess Frame Length Constraint
Selected Achievements (AI/ML)
[ICML'24] Achieving Lossless Gradient Sparsification via Mapping to Alternative Space in Federated Learning
[NeurIPS'23] StableFDG: Style and Attention Based Learning for Federated Domain Generalization
[NeurIPS'23] NEO-KD: Knowledge-Distillation-Based Adversarial Training for Robust Multi-Exit Neural Networks
[ICML'23] Test-Time Style Shifting: Handling Arbitrary Styles in Domain Generalization
[ICLR'23] Warping the Space: Weight Space Rotation for Class-Incremental Few-Shot Learning
[ICLR'23] Active Learning for Object Detection with Evidential Deep Learning and Hierarchical Uncertainty Aggregation
[NeurIPS'21] Few-Round Learning for Federated Learning
[NeurIPS'21] Sageflow: Robust Federated Learning against Both Stragglers and Adversaries
[NeurIPS'20] Election Coding for Distributed Learning: Protecting SignSGD against Byzantine Attacks