"Explore Learning Frontier."
Zeke Xie (谢 泽柯) Assistant Professor of Data Science and Analytics/Artificial Intelligence, HKUST (GZ) Email: zekexie at hkust-gz dot edu dot cn
Director of Xie Machine Learning Foundations (xLeaF) Lab
My mission is to find a way towards the science of AI.
Dr. Zeke Xie is an Assistant Professor at the Information Hub, Hong Kong University of Science and Technology (Guangzhou). He leads the Xie Machine Learning Foundations Lab (xLeaF Lab), which is broadly interested in understanding and solving fundamental issues of modern AI, particularly AIGC and large models, through scientific principles and methodology. He has received multiple competitive faculty research awards from ByteDance, Huawei, and CCF-Baidu. He has also served as an Area Chair for top conferences, including NeurIPS and ICLR.
Previously, he was a researcher at Baidu Research, responsible for research on large models and AIGC. He obtained his Ph.D. and M.E. degrees from The University of Tokyo, where he was fortunate to be advised by Prof. Issei Sato and Prof. Masashi Sugiyama. He was also affiliated with RIKEN AIP during his Ph.D. study. Before that, he obtained a Bachelor of Science from the University of Science and Technology of China.
He has written a number of popular science articles and tech blog posts, which have been published across multiple media outlets and attracted more than 200k followers.
I am recruiting multiple PhD students, RAs, MPhil students, and postdocs who enjoy this mission and the experience of exploring the boundary of human knowledge. Students who truly enjoy the process and share similar research interests are highly welcome to work with me.
Together, we will gain both memorable experiences and rich results, often beyond our initial expectations.
Please read this page for more information.
For details of the recruitment information, please refer to this link.
2025.09.18: Our two papers "Generative Data Augmentation" and "Channel Data Influence" on data-centric AI are accepted at NeurIPS 2025.
2025.08.27: I receive an NSFC grant.
2025.08.16: I am invited to serve as an Area Chair for ICLR 2026.
2025.08.09: I receive a computing grant equivalent to one million CNY from MiraclePlus (奇绩创坛算力项目).
2025.07.31: Our paper "Pre-trained Molecular Language Models" is accepted at npj Artificial Intelligence.
2025.06.26: Our paper "Golden Noise" is accepted at ICCV 2025.
2025.06.25: I receive the Excellent Scholar Award from Huawei (华为优秀学者).
2025.05.01: Our two papers on LLM Loss Landscape and LLM Alignment are accepted at ICML 2025.
2025.04.02: I am invited to serve as an Area Chair for NeurIPS 2025.
2025.03.09: I receive the Innovation IDEA faculty research grant sponsored by Huawei.
2025.01.23: Our paper "Mono2Stereo" is accepted at CVPR 2025.
2025.01.23: Our two papers "Z-Sampling" and "IV-mixed Sampler" for diffusion models are accepted at ICLR 2025.
2024.10.21: I receive the Pinecone faculty research award from CCF-Baidu Open Fund (CCF-百度松果基金).
2024.08.16: I receive the Doubao faculty research award on Large Models from ByteDance (字节跳动豆包大模型基金).
2024.06.13: I join HKUST(GZ) as an assistant professor.
2024.01.16: Our two papers on Neural Field Classifiers and Poisson Learning are accepted at ICLR 2024.
2023.09.22: Our two papers on Weight Decay and Gradient Structure are accepted at NeurIPS 2023.
2023.09.13: I give an invited talk, "Deep Learning Dynamics: A Scientific Approach", at DeepSeek.
2023.07.14: Our paper "S3IM: Stochastic Structural SIMilarity and Its Unreasonable Effectiveness for Neural Fields" is accepted at ICCV 2023.
2023.01.21: Our paper "Dataset Pruning: Reducing Training Data by Examining Generalization Influence" is accepted at ICLR 2023.
2022.07.20: I give a long oral presentation, "Adaptive Inertia: Disentangling the Effects of Adaptive Learning Rate and Momentum", at the ICML 2022 main conference.
2022.05.15: Our two papers are accepted at ICML 2022, with one selected for an Oral presentation (~2%).
2021.10.13: I join Baidu Research as a full-time researcher.
2021.07.19: I successfully defend my Ph.D. thesis and receive my Ph.D. degree.
Nowadays, AI is like physics in, or even before, the era of Galileo.
Researchers may observe many interesting things about AI.
However, we have no mathematical theory for most of these observations.
We need to find a road towards the era of Newton for AI.
Science not only explains what works but also predicts what will work.
Science gives quantitative and trustworthy results.
Science establishes complex principles from first principles.
We believe formulating AI Science will be the most important challenge in the future of AI.
We hope to find a road towards the scientific revolution for AI.
This is a mission in our generation.
"All models are wrong. But some are useful."