Chi Jin (金驰)

Assistant Professor of Electrical and Computer Engineering

Associated Faculty Member of Computer Science

Princeton University


Research Interests

My research focuses on the decision-making aspects of machine learning. We aim to develop intelligent agents capable of complex strategy, advanced reasoning, and planning. In the past, my group has primarily worked on establishing the theoretical foundations of machine learning, including reinforcement learning, multi-agent learning, game theory, statistical learning, and optimization. Recently, we have expanded our interests to improving the reasoning abilities of LLMs and to developing LLM agents for tasks such as mathematics, coding, and complex games.

Education

University of California, Berkeley.

Advisor: Michael I. Jordan

Peking University.


I have recently been giving talks on beyond-equilibrium learning in game theory. See also my Princeton course on the foundations of reinforcement learning here, and my tutorial on multi-agent reinforcement learning at the Simons Institute here.


*I am recruiting undergraduate research interns, PhD students, and postdoctoral researchers.* To apply, please email me with your CV attached. PhD applicants should also mention my name as a faculty member of interest in their statement of purpose. Due to the high volume of inquiries, I may not be able to respond until the hiring process begins.

Awards

Selected Papers

Securing Equal Share: A Principled Approach for Learning Multiplayer Symmetric Games [arXiv]


Maximum Likelihood Estimation is All You Need for Well-Specified Covariate Shift [arXiv]


Optimistic MLE -- A Generic Model-based Algorithm for Partially Observable Sequential Decision Making [arXiv]


When Is Partially Observable Reinforcement Learning Not Scary? [arXiv]


Near-Optimal Learning of Extensive-Form Games with Imperfect Information [arXiv]


V-Learning -- A Simple, Efficient, Decentralized Algorithm for Multiagent RL [arXiv]


Bellman Eluder Dimension: New Rich Classes of RL Problems, and Sample-Efficient Algorithms [arXiv]


Near-Optimal Algorithms for Minimax Optimization [arXiv]


Provably Efficient Reinforcement Learning with Linear Function Approximation [arXiv]


Is Q-learning Provably Efficient? [arXiv]


Accelerated Gradient Descent Escapes Saddle Points Faster than Gradient Descent [arXiv]


How to Escape Saddle Points Efficiently [arXiv] [blog]