My name is Ke Sun (孙科). I am a Postdoctoral Fellow working with Prof. Susan A. Murphy in the Department of Statistics at Harvard University.
I obtained my Ph.D. in Statistical Machine Learning from the University of Alberta in 2024, advised by Prof. Linglong Kong. During my Ph.D., I was fortunate to visit the Department of Statistics at the London School of Economics and Political Science (LSE) in 2023, hosted by Prof. Chengchun Shi. I received my MPhil in Data Science (Statistics) from Peking University (PKU) in 2020 under the supervision of Prof. Zhouchen Lin and Prof. Zhanxing Zhu, and earned my Bachelor's degree in Financial Statistics and Risk Management from Southwestern University of Finance and Economics (SWUFE) in 2017.
Contact: kesun[AT]fas.harvard.edu, ksun6[AT]ualberta.ca
Links: Google Scholar / CV / GitHub
Research Interests
My research focuses on developing algorithms for reinforcement learning and sequential decision-making, with a wide range of applications. I am particularly interested in analyzing the theoretical foundations of these algorithms and in designing methods that are statistically efficient, computationally scalable, and robust. Currently, I am working on the following topics:
Distributional RL and Distributional Learning
Adaptive Experiment Design and Causal Inference
Distribution/Environment Robustness and Continual Learning
Reward Modeling, RLHF, and Reasoning
Recent Preprints
Ke Sun, Linglong Kong, Hongtu Zhu, Chengchun Shi. Optimal Treatment Allocation Strategies for A/B Testing in Partially Observable Time Series Experiments. (submitted)
Ke Sun, Yingnan Zhao, Enze Shi, Yafei Wang, Xiaodong Yan, Bei Jiang, Linglong Kong. The Benefits of Being Categorical Distributional: Uncertainty-aware Regularized Exploration in Reinforcement Learning. (submitted)
Publications
[14] Hongming Zhang*, Ke Sun*, Bo Xu, Linglong Kong, Martin Müller. A Distance-based Anomaly Detection Framework in Deep Reinforcement Learning. Transactions on Machine Learning Research (TMLR), 2024.
[13] Ke Sun, Yingnan Zhao, Wulong Liu, Bei Jiang, Linglong Kong. Distributional Reinforcement Learning with Regularized Wasserstein Loss [code]. Advances in Neural Information Processing Systems (NeurIPS), 2024.
[12] Ke Sun, Jun Jin, Xi Chen, Wulong Liu, Linglong Kong. Reweighted Bellman Targets for Continual Reinforcement Learning. ICML Workshop: Aligning Reinforcement Learning Experimentalists and Theorists, 2024.
[11] Ke Sun, Bei Jiang, Linglong Kong. How Does Return Distribution in Distributional Reinforcement Learning Help Optimization? ICML Workshop: Aligning Reinforcement Learning Experimentalists and Theorists, 2024.
[10] Enze Shi, Yi Liu, Ke Sun, Lingzhu Li, Linglong Kong. An Adaptive Model Checking Test for Functional Linear Model. Bernoulli, 2024.
[9] Enze Shi, Jinhan Xie, Shenggang Hu, Ke Sun, Hongsheng Dai, Bei Jiang, Linglong Kong, Lingzhu Li. Tracking Full Posterior in Online Bayesian Classification Learning: A Particle Filter Approach. Journal of Nonparametric Statistics, 2024.
[8] Ke Sun*, Bing Yu*, Zhouchen Lin, Zhanxing Zhu. Patch-level Neighborhood Interpolation: A General and Effective Graph-based Regularization Strategy. Asian Conference on Machine Learning (ACML), 2023.
[7] Ke Sun, Mingjie Li, Zhouchen Lin. Pareto Adversarial Robustness: Balancing Spatial Robustness and Sensitivity-based Robustness. SCIENCE CHINA Information Sciences (SCIS, CCF-A), 2023.
[6] Ke Sun, Yingnan Zhao, Shangling Jui, Linglong Kong. Exploring the Training Robustness of Distributional Reinforcement Learning against Noisy State Observations [code]. European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), 2023.
[5] Yi Liu, Ke Sun, Bei Jiang, Linglong Kong. Identification, Amplification and Measurement: A bridge to Gaussian Differential Privacy. Advances in Neural Information Processing Systems (NeurIPS), 2022.
[4] Ke Sun*, Yafei Wang*, Yi Liu, Yingnan Zhao, Bo Pan, Shangling Jui, Bei Jiang, Linglong Kong. (*equal contribution in alphabetical order). Damped Anderson Mixing for Deep Reinforcement Learning: Acceleration, Convergence, and Stabilization [code]. Advances in Neural Information Processing Systems (NeurIPS), 2021.
[3] Ke Sun, Zhanxing Zhu, Zhouchen Lin. AdaGCN: AdaBoosting Graph Convolutional Networks into Deep Models [code]. International Conference on Learning Representations (ICLR), 2021.
[2] Ke Sun, Zhouchen Lin, Zhanxing Zhu. Multi-Stage Self-Supervised Learning for Graph Convolutional Networks on Graphs with Few Labels [code]. Association for the Advancement of Artificial Intelligence (AAAI), 2020.
[1] Ke Sun, Zhouchen Lin, Hantao Guo, Zhanxing Zhu. Virtual Adversarial Training on Graph Convolutional Networks in Node Classification. Chinese Conference on Pattern Recognition and Computer Vision (PRCV), 2019 (oral presentation).
Technical Reports
Bing Yu*, Ke Sun*, He Wang, Zhouchen Lin, Zhanxing Zhu. Classify and Generate Reciprocally: Simultaneous Positive-Unlabelled Learning and Conditional Generation with Extra Data. arXiv preprint.
Ke Sun, Zhanxing Zhu, Zhouchen Lin. Towards Understanding Adversarial Examples Systematically: Exploring Data Size, Task and Model Factors. arXiv preprint.
Ke Sun, Zhanxing Zhu, Zhouchen Lin. Enhancing the Robustness of Deep Neural Networks by Boundary Conditional GAN. arXiv preprint.
(* denotes equal contribution)
Academic Service
Conference Reviewer:
International Conference on Machine Learning (ICML), 2022 - 2025
Conference on Neural Information Processing Systems (NeurIPS), 2022 - 2024
International Conference on Learning Representations (ICLR), 2024 - 2025
International Conference on Artificial Intelligence and Statistics (AISTATS), 2023 - 2025
Conference on Uncertainty in Artificial Intelligence (UAI), 2023 - 2024
International Joint Conference on Artificial Intelligence (IJCAI), 2024
European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), 2021
Workshop Reviewer:
ICML 2024 workshop: Aligning Reinforcement Learning Experimentalists and Theorists
ICLR 2024 workshop: Bridging the Gap Between Practice and Theory in Deep Learning
NeurIPS/ICML 2021 workshop: Self-Supervised Learning - Theory and Practice
Journal Reviewer:
Journal of the Royal Statistical Society, Series B (JRSSB)
Journal of the American Statistical Association (JASA)
Favorite Quotes
Practice without theory is blind, but theory without practice is mere intellectual play. —— Immanuel Kant
Ideas matter. —— Richard S. Sutton
The statistical community has been committed to the almost exclusive use of data models. This commitment has led to irrelevant theory, questionable conclusions, and has kept statisticians from working on a large range of interesting current problems. ... The danger is that if we define the boundaries of our field in terms of familiar tools and familiar problems, we will fail to grasp the new opportunities. —— Leo Breiman