Chi Jin (金驰)

Assistant Professor of Electrical and Computer Engineering

Associated Faculty Member of Computer Science

Princeton University

Email: chij (at) princeton (dot) edu

[Google Scholar]

I am currently an assistant professor of Electrical and Computer Engineering at Princeton University. I obtained my Ph.D. in Computer Science at UC Berkeley, advised by Michael I. Jordan. Prior to that, I received a B.S. in Physics from Peking University, where I did my undergraduate thesis with Liwei Wang.

My research interests lie in machine learning, statistics, and optimization. The primary goal of my research is to design better learning algorithms that are theoretically sound and efficient in sample complexity, runtime, and space. To achieve this goal, my work has focused on providing a deeper understanding of fundamental questions in nonconvex optimization and, more recently, in reinforcement learning.


Selected Papers

When Is Partially Observable Reinforcement Learning Not Scary? [arXiv]

  • Qinghua Liu, Alan Chung, Csaba Szepesvári, Chi Jin

  • arXiv preprint

Near-Optimal Learning of Extensive-Form Games with Imperfect Information. [arXiv]

  • (α-β order) Yu Bai, Chi Jin, Song Mei, Tiancheng Yu

  • arXiv preprint

V-Learning -- A Simple, Efficient, Decentralized Algorithm for Multiagent RL [arXiv]

  • (α-β order) Chi Jin, Qinghua Liu, Yuanhao Wang, Tiancheng Yu

  • arXiv preprint

Bellman Eluder Dimension: New Rich Classes of RL Problems, and Sample-Efficient Algorithms [arXiv]

  • (α-β order) Chi Jin, Qinghua Liu, Sobhan Miryoosefi

  • Neural Information Processing Systems (NeurIPS) 2021

Near-Optimal Algorithms for Minimax Optimization [arXiv]

  • Tianyi Lin, Chi Jin, Michael I. Jordan

  • Conference on Learning Theory (COLT) 2020

Provably Efficient Reinforcement Learning with Linear Function Approximation [arXiv]

  • Chi Jin, Zhuoran Yang, Zhaoran Wang, Michael I. Jordan

  • Conference on Learning Theory (COLT) 2020

Is Q-learning Provably Efficient? [arXiv]

  • Chi Jin*, Zeyuan Allen-Zhu*, Sébastien Bubeck, Michael I. Jordan

  • Neural Information Processing Systems (NeurIPS) 2018. Best Paper Award at the ICML 2018 workshop "Exploration in RL"

Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent [arXiv]

  • Chi Jin, Praneeth Netrapalli, Michael I. Jordan

  • Conference on Learning Theory (COLT) 2018

How to Escape Saddle Points Efficiently [arXiv] [blog]

  • Chi Jin, Rong Ge, Praneeth Netrapalli, Sham M. Kakade, Michael I. Jordan

  • International Conference on Machine Learning (ICML) 2017.


Education

  • 2013 - 2019 University of California, Berkeley. Ph.D. in Computer Science

  • 2012 - 2013 University of Toronto. Visiting student in Statistics

  • 2008 - 2012 Peking University. Bachelor of Science in Physics


Internships

  • Summer 2016 Microsoft Research, Redmond. Research Intern with Dong Yu

  • Summer 2015 Microsoft Research, New England. Research Intern with Sham Kakade

PhD Students

Former Visitors