(α-β order) denotes authors listed in alphabetical order; * denotes equal contribution.
Tuning-Free Stochastic Optimization [arXiv]
Ahmed Khaled, Chi Jin
International Conference on Machine Learning (ICML) 2024.
DoWG Unleashed: An Efficient Universal Parameter-Free Gradient Descent Method [arXiv]
Ahmed Khaled, Konstantin Mishchenko, Chi Jin
Neural Information Processing Systems (NeurIPS) 2023.
Efficient displacement convex optimization with particle gradient descent [arXiv]
Hadi Daneshmand, Jason D. Lee, Chi Jin
International Conference on Machine Learning (ICML) 2023.
Faster Federated Optimization under Second-order Similarity [arXiv]
Ahmed Khaled, Chi Jin
International Conference on Learning Representations (ICLR) 2023.
Minimax Optimization with Smooth Algorithmic Adversaries [arXiv]
(α-β order) Tanner Fiez, Chi Jin, Praneeth Netrapalli, Lillian J. Ratliff
International Conference on Learning Representations (ICLR) 2022.
On Nonconvex Optimization for Machine Learning: Gradients, Stochasticity, and Saddle Points [arXiv]
Chi Jin, Praneeth Netrapalli, Rong Ge, Sham M. Kakade, Michael I. Jordan
Journal of the ACM, 2021.
Near-Optimal Algorithms for Minimax Optimization [arXiv]
Tianyi Lin, Chi Jin, Michael I. Jordan
Conference on Learning Theory (COLT) 2020.
What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization? [arXiv]
Chi Jin, Praneeth Netrapalli, Michael I. Jordan
International Conference on Machine Learning (ICML) 2020.
On Gradient Descent Ascent for Nonconvex-Concave Minimax Problems [arXiv]
Tianyi Lin, Chi Jin, Michael I. Jordan
International Conference on Machine Learning (ICML) 2020.
Sampling Can Be Faster Than Optimization [arXiv]
Yi-An Ma, Yuansi Chen, Chi Jin, Nicolas Flammarion, Michael I. Jordan
Proceedings of the National Academy of Sciences (PNAS) 2019.
Stochastic Cubic Regularization for Fast Nonconvex Optimization [arXiv]
Nilesh Tripuraneni*, Mitchell Stern*, Chi Jin, Jeffrey Regier, Michael I. Jordan
(Oral) Neural Information Processing Systems (NeurIPS) 2018.
Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent [arXiv]
Chi Jin, Praneeth Netrapalli, Michael I. Jordan
Conference on Learning Theory (COLT) 2018.
Gradient Descent Can Take Exponential Time to Escape Saddle Points [arXiv]
Simon S. Du, Chi Jin, Jason D. Lee, Michael I. Jordan, Barnabas Poczos, Aarti Singh
Neural Information Processing Systems (NIPS) 2017.
How to Escape Saddle Points Efficiently [arXiv] [blog]
Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, Michael I. Jordan
International Conference on Machine Learning (ICML) 2017.
No Spurious Local Minima in Nonconvex Low Rank Problems: A Unified Geometric Analysis [arXiv]
(α-β order) Rong Ge, Chi Jin, Yi Zheng
International Conference on Machine Learning (ICML) 2017.
Global Convergence of Non-Convex Gradient Descent for Computing Matrix Squareroot [arXiv]
(α-β order) Prateek Jain, Chi Jin, Sham M. Kakade, Praneeth Netrapalli
Artificial Intelligence and Statistics Conference (AISTATS) 2017.
Local Maxima in the Likelihood of Gaussian Mixture Models: Structural Results and Algorithmic Consequences [arXiv]
Chi Jin, Yuchen Zhang, Sivaraman Balakrishnan, Martin J. Wainwright, Michael I. Jordan
Neural Information Processing Systems (NIPS) 2016.
Provable Efficient Online Matrix Completion via Non-convex Stochastic Gradient Descent [arXiv]
(α-β order) Chi Jin, Sham M. Kakade, Praneeth Netrapalli
Neural Information Processing Systems (NIPS) 2016.
Streaming PCA: Matching Matrix Bernstein and Near-Optimal Finite Sample Guarantees for Oja's Algorithm [arXiv]
(α-β order) Prateek Jain, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, Aaron Sidford
Conference on Learning Theory (COLT) 2016.
Efficient Algorithms for Large-scale Generalized Eigenvector Computation and Canonical Correlation Analysis [arXiv]
(α-β order) Rong Ge, Chi Jin, Sham M. Kakade, Praneeth Netrapalli, Aaron Sidford
International Conference on Machine Learning (ICML) 2016.
Faster Eigenvector Computation via Shift-and-Invert Preconditioning [arXiv]
(α-β order) Dan Garber, Elad Hazan, Chi Jin, Sham M. Kakade, Cameron Musco, Praneeth Netrapalli, Aaron Sidford
International Conference on Machine Learning (ICML) 2016.
Escaping From Saddle Points --- Online Stochastic Gradient for Tensor Decomposition [arXiv]
(α-β order) Rong Ge, Furong Huang, Chi Jin, Yang Yuan
Conference on Learning Theory (COLT) 2015.