Chi Jin (金驰)
Assistant Professor of Electrical and Computer Engineering
Associated Faculty Member of Computer Science
Princeton University
Email: chij (at) princeton (dot) edu
I am currently an assistant professor of Electrical and Computer Engineering at Princeton University. I obtained my Ph.D. in Computer Science from UC Berkeley, advised by Michael I. Jordan. Prior to that, I received a B.S. in Physics from Peking University, where I did my undergraduate thesis with Liwei Wang.
My research interests lie in machine learning theory, statistics, optimization, and game theory. My research aims to develop principled and theoretically sound methodology for modern machine learning. My past research has mainly focused on nonconvex optimization and reinforcement learning (RL). In nonconvex optimization, I provided the first proof that a first-order algorithm (stochastic gradient descent) can escape saddle points efficiently. In RL, I provided the first efficient learning guarantees for the Q-learning and least-squares value iteration algorithms when exploration is necessary. My work also establishes theoretical foundations for RL with function approximation, multi-agent RL, and partially observable RL.
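To illustrate the saddle-point phenomenon mentioned above, here is a minimal sketch (not the exact algorithm from the papers): plain gradient descent on the toy function f(x, y) = x² − y², which has a strict saddle point at the origin. Initialized exactly on the stable manifold (y = 0), gradient descent stays stuck at the saddle, while adding a small random perturbation at each step lets it escape along the −y² direction. The step size, noise scale, and iteration count below are arbitrary choices for the demo.

```python
import numpy as np

def grad(p):
    """Gradient of f(x, y) = x^2 - y^2."""
    x, y = p
    return np.array([2 * x, -2 * y])

def gradient_descent(p0, eta=0.1, steps=50, perturb=False, seed=0):
    rng = np.random.default_rng(seed)
    p = np.array(p0, dtype=float)
    for _ in range(steps):
        if perturb:
            # Small isotropic noise; along y it is amplified each step,
            # so the iterate drifts off the saddle's stable manifold.
            p = p + 1e-3 * rng.standard_normal(2)
        p = p - eta * grad(p)
    return p

# Start on the stable manifold (y = 0): plain GD converges to the saddle.
stuck = gradient_descent([1.0, 0.0], perturb=False)
# With perturbations, the y-coordinate grows and the iterate escapes.
escaped = gradient_descent([1.0, 0.0], perturb=True)
print(stuck)    # y-coordinate stays exactly 0
print(escaped)  # |y| is large: the saddle has been escaped
```

This toy example only shows the qualitative behavior; the cited results give quantitative rates for how fast perturbed first-order methods escape saddle points in general nonconvex problems.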
Selected Papers
When Is Partially Observable Reinforcement Learning Not Scary? [arXiv]
Qinghua Liu, Alan Chung, Csaba Szepesvári, Chi Jin
ArXiv Preprint
Near-Optimal Learning of Extensive-Form Games with Imperfect Information [arXiv]
(α-β order) Yu Bai, Chi Jin, Song Mei, Tiancheng Yu
ArXiv Preprint
V-Learning -- A Simple, Efficient, Decentralized Algorithm for Multiagent RL [arXiv]
(α-β order) Chi Jin, Qinghua Liu, Yuanhao Wang, Tiancheng Yu
ArXiv Preprint, Best Paper in ICLR 2022 workshop “Gamification and Multiagent Solutions”
Bellman Eluder Dimension: New Rich Classes of RL Problems, and Sample-Efficient Algorithms [arXiv]
(α-β order) Chi Jin, Qinghua Liu, Sobhan Miryoosefi
Neural Information Processing Systems (NeurIPS) 2021.
Near-Optimal Algorithms for Minimax Optimization [arXiv]
Tianyi Lin, Chi Jin, Michael I. Jordan
Conference on Learning Theory (COLT) 2020.
Provably Efficient Reinforcement Learning with Linear Function Approximation [arXiv]
Chi Jin, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan
Conference on Learning Theory (COLT) 2020.
Is Q-learning Provably Efficient? [arXiv]
Chi Jin*, Zeyuan Allen-Zhu*, Sebastien Bubeck, Michael I. Jordan
Neural Information Processing Systems (NeurIPS) 2018. Best Paper in ICML 2018 workshop "Exploration in RL"
Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent [arXiv]
Chi Jin, Praneeth Netrapalli, Michael I. Jordan
Conference on Learning Theory (COLT) 2018.
How to Escape Saddle Points Efficiently [arXiv] [blog]
Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, Michael I. Jordan
International Conference on Machine Learning (ICML) 2017.
Education
2013 - 2019 University of California, Berkeley. Ph.D. in Computer Science
2012 - 2013 University of Toronto. Visiting student in Statistics
2008 - 2012 Peking University. Bachelor of Science in Physics
Experience
Summer 2016 Microsoft Research, Redmond. Research Intern with Dong Yu
Summer 2015 Microsoft Research, New England. Research Intern with Sham Kakade
Former Students and Postdocs
Hadi Daneshmand (postdoc)