My name is Ke Sun (孙科). I am a Postdoctoral Fellow working with Prof. Susan A. Murphy in the Department of Statistics at Harvard University. 

I obtained my Ph.D. in Statistical Machine Learning from the University of Alberta in 2024, advised by Prof. Linglong Kong. During my Ph.D., I was fortunate to visit the Department of Statistics at the London School of Economics and Political Science (LSE) in 2023, hosted by Prof. Chengchun Shi. I obtained my MPhil in Data Science (Statistics) from Peking University (PKU) in 2020 under the supervision of Prof. Zhouchen Lin and Prof. Zhanxing Zhu. I received my Bachelor's degree in Financial Statistics from Southwestern University of Finance and Economics (SWUFE) in 2017.

Contact: kesun[AT]fas.harvard.edu, ksun6[AT]ualberta.ca

Links: Google Scholar / CV / GitHub

Research Interests

My research focuses on developing algorithms for reinforcement learning and sequential decision-making, with a range of applications. I am particularly interested in analyzing the theoretical foundations of these algorithms and in designing methods that are statistically efficient, computationally scalable, and robust.

Recent Preprints

Publications 

[14] Hongming Zhang*, Ke Sun*, Bo Xu, Linglong Kong, Martin Müller. A Distance-based Anomaly Detection Framework in Deep Reinforcement Learning. Transactions on Machine Learning Research (TMLR), 2024

[13] Ke Sun, Yingnan Zhao, Wulong Liu, Bei Jiang, Linglong Kong. Distributional Reinforcement Learning with Regularized Wasserstein Loss [code]. Advances in Neural Information Processing Systems (NeurIPS), 2024

[12] Ke Sun, Jun Jin, Xi Chen, Wulong Liu, Linglong Kong. Reweighted Bellman Targets for Continual Reinforcement Learning. ICML Workshop: Aligning Reinforcement Learning Experimentalists and Theorists, 2024

[11] Ke Sun, Bei Jiang, Linglong Kong. How Does Return Distribution in Distributional Reinforcement Learning Help Optimization? ICML Workshop: Aligning Reinforcement Learning Experimentalists and Theorists, 2024

[10] Enze Shi, Yi Liu, Ke Sun, Lingzhu Li, Linglong Kong. An Adaptive Model Checking Test for Functional Linear Model. Bernoulli, 2024.

[9] Enze Shi, Jinhan Xie, Shenggang Hu, Ke Sun, Hongsheng Dai, Bei Jiang, Linglong Kong, Lingzhu Li. Tracking Full Posterior in Online Bayesian Classification Learning: A Particle Filter Approach. Journal of Nonparametric Statistics, 2024

[8] Ke Sun*, Bing Yu*, Zhouchen Lin, Zhanxing Zhu. Patch-level Neighborhood Interpolation: A General and Effective Graph-based Regularization Strategy. Asian Conference on Machine Learning (ACML), 2023

[7] Ke Sun, Mingjie Li, Zhouchen Lin. Pareto Adversarial Robustness: Balancing Spatial Robustness and Sensitivity-based Robustness. SCIENCE CHINA Information Sciences (SCIS, CCF-A), 2023

[6] Ke Sun, Yingnan Zhao, Shangling Jui, Linglong Kong. Exploring the Training Robustness of Distributional Reinforcement Learning against Noisy State Observations [code]. European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), 2023.

[5] Yi Liu, Ke Sun, Bei Jiang, Linglong Kong. Identification, Amplification and Measurement: A Bridge to Gaussian Differential Privacy. Advances in Neural Information Processing Systems (NeurIPS), 2022.

[4] Ke Sun*, Yafei Wang*, Yi Liu, Yingnan Zhao, Bo Pan, Shangling Jui, Bei Jiang, Linglong Kong. (*equal contribution in alphabetical order). Damped Anderson Mixing for Deep Reinforcement Learning: Acceleration, Convergence, and Stabilization [code]. Advances in Neural Information Processing Systems (NeurIPS), 2021.

[3] Ke Sun, Zhanxing Zhu, Zhouchen Lin.  AdaGCN: AdaBoosting Graph Convolutional Networks into Deep Models [code]. International Conference on Learning Representations (ICLR), 2021.

[2] Ke Sun, Zhouchen Lin, Zhanxing Zhu. Multi-Stage Self-Supervised Learning for Graph Convolutional Networks on Graphs with Few Labels [code]. Association for the Advancement of Artificial Intelligence (AAAI), 2020. 

[1] Ke Sun, Zhouchen Lin, Hantao Guo, Zhanxing Zhu. Virtual Adversarial Training on Graph Convolutional Networks in Node Classification. Chinese Conference on Pattern Recognition and Computer Vision (PRCV) (oral presentation), 2019.

(* denotes equal contribution)

Technical Reports

Academic Service

Favorite Quotes 

Practice without theory is blind, but theory without practice is mere intellectual play. —— Immanuel Kant

There is nothing more practical than a good theory. —— Ludwig Boltzmann

The statistical community has been committed to the almost exclusive use of data models. This commitment has led to irrelevant theory, questionable conclusions, and has kept statisticians from working on a large range of interesting current problems. … The danger is that if we define the boundaries of our field in terms of familiar tools and familiar problems, we will fail to grasp the new opportunities. —— Leo Breiman