Publications
Underline indicates my research interns or students; * indicates equal contribution; # indicates the corresponding author.
Conference Proceedings:
Neural Field Classifiers via Target Encoding and Classification Loss.
Xindi Yang*, Zeke Xie*#, Xiong Zhou, Boyu Liu, Buhua Liu, Yi Liu, Haoran Wang, Yunfeng Cai, and Mingming Sun.
In International Conference on Learning Representations. [ICLR 2024]
Variance-enlarged Poisson Learning for Graph-based Semi-Supervised Learning with Extremely Sparse Labeled Data.
Xiong Zhou, Xianming Liu, Hao Yu, Jialiang Wang, Zeke Xie, Junjun Jiang, and Xiangyang Ji.
In International Conference on Learning Representations. [ICLR 2024]
On the Overlooked Structure of Stochastic Gradients.
Zeke Xie, Qian-Yuan Tang, Mingming Sun, and Ping Li.
In Neural Information Processing Systems. [NeurIPS 2023]
On the Overlooked Pitfalls of Weight Decay and How to Mitigate Them: A Gradient-Norm Perspective.
Zeke Xie, Zhiqiang Xu, Jingzhao Zhang, Issei Sato, and Masashi Sugiyama.
In Neural Information Processing Systems. [NeurIPS 2023]
S3IM: Stochastic Structural SIMilarity and Its Unreasonable Effectiveness for Neural Fields.
Zeke Xie*, Xindi Yang*, Yujie Yang, Qi Sun, Yixiang Jiang, Haoran Wang, Yi Liu, Yunfeng Cai, and Mingming Sun.
In International Conference on Computer Vision. [ICCV 2023]
Dataset Pruning: Reducing Training Data by Examining Generalization Influence.
Shuo Yang, Zeke Xie, Hanyu Peng, Min Xu, Mingming Sun, and Ping Li.
In International Conference on Learning Representations. [ICLR 2023]
Sparse Double Descent: Where Network Pruning Aggravates Overfitting.
Zheng He, Zeke Xie, Quanzhi Zhu, and Zengchang Qin.
In International Conference on Machine Learning. [ICML 2022]
Adaptive Inertia: Disentangling the Effects of Adaptive Learning Rate and Momentum.
Zeke Xie, Xinrui Wang, Huishuai Zhang, Issei Sato, and Masashi Sugiyama.
In International Conference on Machine Learning. [ICML 2022, Oral, top 2%]
Positive-Negative Momentum: Manipulating Stochastic Gradient Noise to Improve Generalization.
Zeke Xie, Li Yuan, Zhanxing Zhu, and Masashi Sugiyama.
In International Conference on Machine Learning. [ICML 2021]
A Diffusion Theory For Deep Learning Dynamics: Stochastic Gradient Descent Exponentially Favors Flat Minima.
Zeke Xie, Issei Sato, and Masashi Sugiyama.
In International Conference on Learning Representations. [ICLR 2021]
A Quantum-Inspired Ensemble Method and Quantum-Inspired Forest Regressors.
Zeke Xie and Issei Sato.
In Asian Conference on Machine Learning. [ACML 2017]
Journal Articles:
Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting.
Zeke Xie, Fengxiang He, Shaopeng Fu, Issei Sato, Dacheng Tao, and Masashi Sugiyama.
Neural Computation, MIT Press. [NECO 2021]