Jingqi Li
I am an EECS PhD candidate at UC Berkeley, where I am very fortunate to be co-advised by Professor Claire Tomlin and Professor Somayeh Sojoudi.
My research brings together dynamic game theory, control, and deep reinforcement learning to enable autonomous agents to think strategically and act safely. In other words, I aim to provide intelligent agents—such as household robots, self-driving cars, and generative chatbots—with the capabilities to reason, learn, and coordinate in complex environments. Currently, I am focusing on game-theoretic decision-making and learning under incomplete information:
[Strategic decision-making ] Robots perform well in structured environments but often struggle in unstructured settings, especially when other agents are not cooperative, e.g., in drone racing and autonomous driving. How can robots learn or compute strategic decisions with provable convergence and safety guarantees? (e.g., [17], [16], [15], [9], [7])
[Information asymmetry ] Partial observability is pervasive in real-world multi-agent coordination, and a primary source of information asymmetry is uncertainty about each agent's intent. For example, a chatbot needs to understand human intent, and an autonomous vehicle must understand the intent of other drivers on the road. It is therefore crucial to develop strategies that enable agents to manage information asymmetry effectively, and even to leverage it strategically to achieve their goals. (e.g., [8], [10], [12], [11])
[Game theory for safe, socially intelligent robots ] If robots are to become part of our daily lives, they must exhibit social intelligence comparable to that of humans. However, most robots currently lack this ability because they rely on classical single-agent or fully cooperative modeling assumptions. Dynamic game theory equips agents with strategic thinking and the capacity to consider others' perspectives, enabling robots to interact safely and learn effectively in changing environments. I am working on combining control theory, dynamic game theory, and generative modeling to design next-generation, learning-enabled, socially intelligent multi-agent systems. (e.g., [12], [17], [16], [14], [13], [8], [7], [3], [2])
Publications: Google Scholar, ResearchGate.
[17] J. Li, S. Sojoudi, C. Tomlin, D. Fridovich-Keil, "The computation of approximate feedback Stackelberg equilibria in multi-player nonlinear constrained dynamic games", SIAM Journal on Optimization (SIOPT), 2024.
[16] J. Li, D. Lee, J. Lee, K. Dong, S. Sojoudi, C. Tomlin, "Certifiable Deep Learning for Reachability Using a New Lipschitz Continuous Value Function", accepted by IEEE Robotics and Automation Letters (RA-L), 2025. Video presentation. [code]
[15] X. Liu, J. Li, F. Fotiadis, M. O. Karabag, J. Milzman, D. Fridovich-Keil, U. Topcu, "Policies with Sparse Inter-Agent Dependencies in Dynamic Games: A Dynamic Programming Approach", AAMAS, 2025.
[14] G. Chenevert, J. Li, S. Bae, D. Lee, "Solving Reach-Avoid-Stay Problems Using Deep Deterministic Policy Gradients", submitted to the American Control Conference (ACC), 2025.
[13] C. Chiu*, J. Li*, M. Bhatt, N. Mehr, "To what extent do open-loop and feedback Nash equilibria diverge in general-sum linear quadratic dynamic games?", IEEE Control Systems Letters (L-CSS), 2024.
[12] J. Li, A. Siththaranjan, S. Sojoudi, C. Tomlin, A. Bajcsy, "Intent Demonstration in General-Sum Dynamic Games via Iterative Linear-Quadratic Approximations", submitted to IEEE Transactions on Control Systems Technology (TCST), 2024.
[11] C. Strong, K. Stocking, J. Li, T. Zhang, J. Gallant, C. Tomlin, "A framework for evaluating human driver models using neuroimaging", L4DC, 2024.
[10] D. Papadimitriou, J. Li, "Constraint Inference in Control Tasks from Expert Demonstrations via Inverse Optimization", IEEE Conference on Decision and Control (CDC), 2023.
[9] J. Li, C. Chiu, L. Peters, F. Palafox, M. O. Karabag, J. Alonso-Mora, S. Sojoudi, C. Tomlin, D. Fridovich-Keil, "Scenario-Game ADMM: A Parallelized Scenario-Based Solver for Stochastic Noncooperative Games", IEEE Conference on Decision and Control (CDC), 2023.
[8] J. Li, C. Chiu, L. Peters, S. Sojoudi, C. Tomlin, D. Fridovich-Keil, "Cost Inference for Feedback Dynamic Games from Noisy Partial State Observations and Incomplete Trajectories", AAMAS, 2023. [code]
[7] J. Li, D. Fridovich-Keil, S. Sojoudi, C. Tomlin, "Augmented Lagrangian Method for Instantaneously Constrained Reinforcement Learning Problems", in Proceedings of the 60th IEEE Conference on Decision and Control (CDC), 2021.
[6] B. Anderson, Z. Ma, J. Li, and S. Sojoudi, "Partition-based Convex Relaxations for Certifying the Robustness of ReLU Neural Networks", submitted to Journal of Machine Learning Research (JMLR), 2020.
[5] B. Anderson, Z. Ma, J. Li, and S. Sojoudi, "Tightened Convex Relaxations for Neural Network Robustness Certification", in Proceedings of the 59th IEEE Conference on Decision and Control (CDC), 2020.
[4] J. Li, X. Chen, S. Pequito, G. J. Pappas, and V. M. Preciado, "On the structural target controllability of undirected networks", IEEE Transactions on Automatic Control, 2020.
[3] J. Li, X. Chen, S. Pequito, G. J. Pappas, and V. M. Preciado, "Resilient Structural Stabilizability of Undirected Networks", in Proceedings of the American Control Conference (ACC), 2019.
[2] J. Li, X. Chen, S. Pequito, G. J. Pappas, and V. M. Preciado, "Structural Target Controllability of Undirected Networks", in Proceedings of the 57th IEEE Conference on Decision and Control (CDC), 2018. Invited paper.
[1] L. Feng, J. Li, and J. Xiao, "Temperature effects on excited state of strong-coupling polaron in an asymmetric RbCl quantum dot", Modern Physics Letters B, Vol. 29, No. 02, 1450261, 2015.
Contact:
jingqili AT berkeley DOT edu
Office:
SDH 7th floor, E36
Awards:
EECS Departmental Fellowship, University of California, Berkeley, 2019
Outstanding Research Award, University of Pennsylvania, 2019
Outstanding Undergraduate Thesis, School of Astronautics, Beihang University, 2016
Lee Kum-Kee Astronautics Scholarship, 2015
First Prize, 24th 'Feng Ru Cup' Student Academic and Technological Works Competition, Beihang University, 2014
First Prize Scholarship, Beihang University, 2013
Model Student of Academic Records, Beihang University, 2013
First Prize of the 28th Chinese Physics Olympiad, Inner Mongolia, China, 2011
Peer reviewer:
AAAI, NeurIPS, Automatica, IEEE Transactions on Automatic Control (TAC), IEEE Transactions on Control of Network Systems (TCNS), IEEE Transactions on Network Science and Engineering (TNSE), IEEE Conference on Decision and Control (CDC), American Control Conference (ACC), European Control Conference (ECC), L4DC, ICRA, IEEE Robotics and Automation Letters (RA-L), IEEE Open Journal of Control Systems, IEEE Control Systems Letters (L-CSS), Robotics: Science and Systems (RSS).
Selected Invited Talks:
"Certifiable Reachability Learning for High-dimensional Dynamical Systems", UCSD, 2024
"Empowering Learning-enabled Autonomous Systems with Strategic Thinking: A Robust Game Theoretic Perspective", UT Austin, 2024
"Certifiable Reachability Learning for Drone Racing", DARPA Assured Neuro Symbolic Learning and Reasoning (ANSR) Seminar, 2024
"Accommodating Intention Uncertainty in Dynamic Games", UIUC, Coordinated Science Laboratory (CSL) Student Conference 2023, invited student speaker.
"Augmented Lagrangian Safe RL", Semiautonomous seminar, UC Berkeley, 2021.
Teaching Experience:
TA for ESE605 Modern Convex Optimization, UPenn, Spring 2019.
GSI for EECS 227AT Optimization Models in Engineering, UC Berkeley, Fall 2021.
GSI for EECS 227AT Optimization Models in Engineering, UC Berkeley, Fall 2024.
Academic Service:
Student co-organizer of DREAM/CPAR Seminar series, 2022 - 2023