Affiliation: University of Texas, Austin, Aerospace Engineering and Engineering Mechanics
Presentation Time: 8:50 - 9:30 CST
Title: Towards Multi-Agent Strategic Autonomy: A Differentiable Game-Theoretic Perspective
Abstract: As autonomous systems scale to decentralized multi-agent settings, agents must make decisions in the presence of others with limited and asymmetric information, across both cooperative and non-cooperative interactions. This raises a fundamental question: how can we model, compute, and learn strategic decisions in such environments? This talk approaches this question from a differentiable game-theoretic perspective. By approximating game-theoretic optimality conditions as differentiable equations, we enable efficient computation and learning of equilibria under different information structures in complex dynamic games. I will present scalable algorithms for solving nonlinear feedback dynamic games with convergence and safety guarantees, along with inverse game-theoretic methods for inferring agents’ objectives and beliefs about others from partial observations. I will then discuss how these components can be combined in a closed loop, enabling agents to align with others or exploit information asymmetry when beneficial. We demonstrate these methods in applications such as advanced air mobility, multi-robot furniture moving, and drone racing, including hardware experiments, where decentralized agents coordinate in real time without direct communication. Overall, these results suggest that differentiable game-theoretic structure enables efficient computation and learning of multi-agent strategies in complex, interactive environments.
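To make the core idea concrete, here is a minimal sketch (not the speaker's algorithms; the two cost functions are invented for illustration): in a two-player quadratic game, each player's stationarity condition can be stacked into one smooth residual, and the Nash equilibrium found by Newton's method on that differentiable system.

```python
import numpy as np

# Toy two-player game (costs invented for illustration):
#   player 1 chooses x to minimize f1(x, y) = (x - 1)^2 + x*y
#   player 2 chooses y to minimize f2(x, y) = (y + 1)^2 + x*y
# A Nash equilibrium satisfies both players' own stationarity conditions
# simultaneously: d(f1)/dx = 0 and d(f2)/dy = 0.

def residual(z):
    x, y = z
    return np.array([2.0 * (x - 1.0) + y,   # d(f1)/dx
                     2.0 * (y + 1.0) + x])  # d(f2)/dy

def jacobian(z):
    # Jacobian of the stacked conditions; having this derivative is what
    # lets Newton-type solvers (and gradient-based learning) operate
    # directly on the equilibrium map.
    return np.array([[2.0, 1.0],
                     [1.0, 2.0]])

z = np.zeros(2)
for _ in range(20):
    z = z - np.linalg.solve(jacobian(z), residual(z))

print(z)  # converges to the unique Nash equilibrium (x, y) = (2, -2)
```

The same recipe, residual plus Jacobian, is what scales to the nonlinear feedback games discussed in the talk, with the residual built from far richer optimality conditions.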
Bio: Jingqi Li is a Peter O’Donnell Jr. Postdoctoral Fellow at the Oden Institute, The University of Texas at Austin. He received his Ph.D. in EECS from the University of California, Berkeley. His research focuses on dynamic game theory, control, and reinforcement learning for multi-agent autonomy under uncertainty.
Affiliation: Georgia Institute of Technology, Electrical and Computer Engineering (ECE)
Presentation Time: 9:30 - 10:00 CST
Title: Constraint Learning in Multi-Agent Dynamic Games from Demonstrations of Local Nash Interactions
Abstract: Robots operating in crowded human-populated environments must be capable of inferring the interaction constraints, such as collision avoidance specifications, that often underlie multi-agent behaviors. To empower robots to learn interaction constraints, this talk presents an inverse dynamic game-based framework for inferring parametric constraints from multi-agent interaction demonstrations at local Nash equilibria. To recover constraints consistent with the local Nash stationarity of the given demonstrations, we encode the corresponding Karush–Kuhn–Tucker (KKT) conditions within a mixed-integer linear program (MILP). We establish theoretical guarantees that our method learns inner approximations of the true safe and unsafe sets. We also use the recovered interaction constraint information to design motion plans that robustly satisfy the true, a priori unknown constraints despite limited demonstration data. Across simulations and hardware experiments, our method accurately infers constraints from interaction demonstrations and leverages the inferred constraint information to design safe interactive motion plans. We conclude by outlining ongoing and future work on extending the proposed framework to enable active, parameterization-free, real-time, and online interaction constraint inference. The research contributions presented in this talk are the result of collaborations with Zhouyu Zhang, Zheng Qiu, and Dr. Glen Chou.
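A stripped-down, single-agent sketch of the KKT-in-MILP encoding (all numbers invented; SciPy's HiGHS-based `milp` stands in for whatever solver the authors use): a demonstrated decision x* = 1 minimizes (x - 2)^2 subject to an unknown constraint x <= b, and the demonstration's KKT conditions pin down the constraint parameter b.

```python
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

# Toy version of the idea: demonstration x* = 1 minimizes f(x) = (x - 2)^2
# subject to an unknown constraint x <= b. Encode the KKT conditions as a
# MILP over (b, lam, z): lam is the multiplier, the binary z indicates
# whether the constraint is active (big-M complementary slackness).
x_star, M = 1.0, 10.0
A = [
    [0.0, 1.0, 0.0],   # lam        = 2        (stationarity: -2 + lam = 0)
    [-1.0, 0.0, 0.0],  # -b        <= -x*      (primal feasibility)
    [0.0, 1.0, -M],    # lam - M z <= 0        (lam > 0 only if active)
    [1.0, 0.0, M],     # b + M z   <= x* + M   (active  =>  b = x*)
]
lb = [2.0, -np.inf, -np.inf, -np.inf]
ub = [2.0, -x_star, 0.0, x_star + M]
res = milp(c=[1.0, 0.0, 0.0],  # any objective works; here: smallest consistent b
           constraints=LinearConstraint(A, lb, ub),
           integrality=[0, 0, 1],
           bounds=Bounds([-M, 0.0, 0.0], [M, M, 1.0]))
b_learned = res.x[0]
print(b_learned)  # 1.0: stationarity forces lam = 2 > 0, so the
                  # constraint is active and b = x* is recovered
```

The talk's setting replaces this scalar toy with parametric constraints shared across interacting agents at a local Nash equilibrium, but the mixed-integer encoding of stationarity plus complementary slackness is the same mechanism.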
Bio: Chih-Yuan Chiu is a research engineer in the School of Electrical and Computer Engineering (ECE) at the Georgia Institute of Technology. He received his Ph.D. in Electrical Engineering and Computer Sciences from the University of California, Berkeley, in 2023. His research focuses on designing control-theoretic and optimization-based algorithms to characterize, predict, and influence multi-agent interactions between humans and machines, with applications in the transportation sector.
Affiliation: University of California, Berkeley, Mechanical Engineering
Presentation Time: 10:40 - 12:20 CST
Title: Why Multi-Agent Learning Is Hard: From Imitation to Reinforcement Learning
Abstract: To truly transform our lives, autonomous systems must operate in complex environments shared with other agents. For instance, delivery robots navigate spaces with humans, while warehouse robots must coordinate on shared factory floors. These settings require systematic methods that enable efficient and reliable interactions among multiple agents. In this talk, I will discuss the challenges of learning in such interactive multi-agent domains, with a focus on both imitation learning and reinforcement learning. I will begin with imitation learning, highlighting how the multi-agent setting fundamentally differs from the single-agent case. I will then turn to reinforcement learning and discuss the key challenges that arise when multiple agents learn simultaneously. While learning methods have seen significant success in single-agent settings, multi-agent domains introduce additional complexity due to the strong coupling between agents’ decisions. I will highlight some of these challenges and discuss approaches that help make learning tractable and effective in interactive multi-agent systems.
Bio: Negar Mehr is an assistant professor in the Department of Mechanical Engineering at the University of California, Berkeley. Previously, she was an assistant professor of Aerospace Engineering at the University of Illinois Urbana-Champaign. Before that, she was a postdoctoral scholar in the Department of Aeronautics and Astronautics at Stanford University. She received her Ph.D. in Mechanical Engineering from UC Berkeley in 2019 and her B.Sc. in Mechanical Engineering from Sharif University of Technology, Tehran, Iran, in 2013. She is a recipient of the NSF CAREER Award and the ONR Young Investigator Program (YIP) award. She was recently recognized as a rising star by the American Society of Mechanical Engineers (ASME). She was awarded the IEEE Intelligent Transportation Systems Best Ph.D. Dissertation Award in 2020.
Affiliation: Georgia Institute of Technology, Aerospace Engineering
Presentation Time: 11:20 - 12:00 CST
Title: From Centralized Game-theoretic Coordination to Commitment Breaking in Multi-Agent Aerial Traffic Systems
Abstract: This talk investigates multi-agent aerial traffic across a spectrum of coordination paradigms, from centralized air traffic planning to decentralized interaction with breakable commitments. I first discuss structured coordination through MDP congestion games and reach-avoid potential games, where agents reason strategically within a shared mathematical framework to improve traffic efficiency or enforce stronger safety objectives. These approaches illustrate how game-theoretic structure can support scalable coordination, while also revealing tradeoffs between computational tractability and guarantee strength. I then consider decentralized airspace settings, such as uncontrolled airspace, in which centralized planning is unavailable and agents coordinate only through announced intentions or commitments. In these lower-information environments, coordination depends not only on what agents announce, but also on their credibility. This leads to game-theoretic questions of belief mismatch, strategic deviation, and commitment breaking. The talk uses these examples to highlight a broader challenge in aerial autonomy: how to design coordination mechanisms that remain safe, tractable, and robust as centralized structure weakens and strategic behavior becomes more prominent.
Bio: Sarah H. Q. Li is an assistant professor in the School of Aerospace Engineering at the Georgia Institute of Technology, where she leads the Control, Coordination, and Competition under Uncertainty (C3U) Lab. She received her Ph.D. in Aeronautics and Astronautics from the University of Washington, and completed her postdoc with the Autonomous Control Lab at ETH Zurich. Her work combines optimal control, stochastic decision processes, and game theory to study how autonomous agents can coordinate safely and efficiently in uncertain environments.
Affiliation: University of Pennsylvania, Engineering and Applied Science
Presentation Time: 14:10 - 14:50 CST
Title: From Local Coordination to System-Level Design of Trustworthy Multi-Agent Systems
Abstract: Designing trustworthy autonomy for societal-scale systems requires reasoning across coupled layers of abstraction, each involving different forms of multi-agent interaction. For example, autonomous transportation systems of the future necessitate not only safe vehicle-to-vehicle interaction but also adaptive higher-level design and infrastructure decisions such as dynamic speed limits and flow management.
In this talk, we explore these challenges across layers in the context of advanced air mobility. At the highest layer of abstraction, we model on-demand airspace access as a resource allocation problem for which we design a distributed market mechanism using the Alternating Direction Method of Multipliers (ADMM). We then show how agents can safely operate under such access constraints in complex multi-vehicle interaction settings using time-to-reach guidance combined with safety filtering.
We conclude with a discussion of the open challenge of coupling these layers in a principled way. In particular, safety constraints both influence and are influenced by design decisions made for other layers, and this coupling complicates uncertainty quantification and statistical verification.
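To give a feel for how ADMM distributes a coupled allocation, here is a textbook "sharing"-style sketch with invented numbers (not the speakers' airspace mechanism): agents with quadratic preferences split a fixed capacity using only local closed-form updates plus an averaging/price step.

```python
import numpy as np

# Textbook ADMM sharing sketch (numbers invented): agent i wants d_i units,
# pays a_i * (x_i - d_i)^2, and jointly the agents must satisfy sum(x) = C.
d = np.array([3.0, 1.0, 2.0])   # desired allocations
a = np.array([1.0, 2.0, 1.0])   # cost curvatures
C, rho, N = 4.0, 1.0, 3

x, u, zbar = np.zeros(N), 0.0, C / N
for _ in range(500):
    # Local step: each agent solves its own small quadratic in closed form,
    # using only its own data plus the shared average and price.
    v = x - x.mean() + zbar - u
    x = (2.0 * a * d + rho * v) / (2.0 * a + rho)
    # Coordination step: project the average onto the capacity constraint,
    # then update the scaled price u that mediates between agents.
    zbar = C / N
    u += x.mean() - zbar

print(x, x.sum())  # allocations equalize marginal costs and sum to C = 4
```

At the fixed point the price rho*u plays the role of a market-clearing multiplier: every agent's marginal cost equals it, which is why a distributed market mechanism can reach the allocation without any agent revealing its full objective.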
Affiliation: University of Illinois, Urbana-Champaign, Industrial & Enterprise Systems Engineering
Presentation Time: 15:30 - 16:10 CST
Title: Controllability and Persuasion in Transportation Systems
Abstract: As our transportation systems grow more complex and distributed, new opportunities for improving overall system efficiency emerge alongside new vulnerabilities that malicious agents can exploit. Additionally, as the level of autonomy in transportation systems increases, we need to understand how humans perceive and respond to autonomy in our control policies. In this talk, I will discuss our recent work on formalizing these two broad problems.
Many ground transportation problems can be seen as a multi-agent system, where each driver's payoff depends on the decisions of other drivers. However, these dynamics are often too complex or uncertain to model, and solving the resulting game directly may be intractable; a common practical solution is to use regret-minimizing methods: each agent treats the other agents as an exogenously given environment and optimizes accordingly. However, there may be unintended consequences of 'ignoring' one's effect on other agents. In our work, we assume one agent knows that all other agents employ follow-the-regularized-leader dynamics, and we ask: when can this aware agent steer the system to any desired action distribution? We show how this question reduces to a classical geometric control problem: when is a nonholonomic system controllable?
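The controllability analysis is the talk's contribution; as background, the dynamic being steered is easy to exhibit. With an entropic regularizer, follow-the-regularized-leader reduces to exponential weights, and the sketch below (payoffs invented) shows a follower's distribution drifting toward the best response whenever an aware leader shifts its own mixed action.

```python
import numpy as np

# Entropic FTRL = exponential weights (toy payoffs invented for illustration):
# a follower with two actions runs FTRL against a leader's mixed action.
A = np.array([[1.0, 0.0],       # follower payoff: rows = follower actions,
              [0.0, 1.0]])      # columns = leader actions
eta, cum = 0.5, np.zeros(2)

def ftrl_strategy(cum_payoffs):
    # argmax over the simplex of <cum, p> - (1/eta) * negative entropy,
    # i.e. a softmax of cumulative payoffs (max-subtracted for stability)
    w = np.exp(eta * (cum_payoffs - cum_payoffs.max()))
    return w / w.sum()

ps = []
for leader in (np.array([0.8, 0.2]), np.array([0.1, 0.9])):
    for _ in range(2000):
        cum += A @ leader        # follower accumulates expected payoffs
    ps.append(ftrl_strategy(cum))
print(ps)  # follower's play tracks the best response to each leader phase
```

Because the follower's state (its cumulative payoffs) responds smoothly to the leader's inputs, asking which follower distributions are reachable becomes a control question about this nonlinear system, which is what connects the problem to nonholonomic controllability.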
I will also discuss some preliminary results on applying Bayesian persuasion to influence human behavior in autonomous driving. In this setting, it may be unreasonable to expect all other drivers to receive, interpret, and analyze the signaling scheme, so we consider whether or not Bayesian persuasion can still be effective when the signaling scheme is not communicated explicitly but rather implicitly through social expectations.
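For readers unfamiliar with Bayesian persuasion, the classic binary-state example (numbers invented) shows what commitment to a signaling scheme buys; the talk's question is whether this value survives when the scheme is conveyed only implicitly.

```python
import numpy as np

# Classic binary Bayesian persuasion example (numbers invented): the state
# is "good" with prior mu; the receiver takes the sender's preferred action
# only if the posterior probability of "good" is at least 0.5. The sender
# commits to a signal that says "act" with probability 1 in the good state
# and probability p in the bad state, choosing p to maximize Pr(action).
mu, threshold = 0.3, 0.5

best_p, best_value = 0.0, mu   # p = 0 reveals the state: action iff good
for p in np.linspace(0.0, 1.0, 100001):
    pr_act = mu + (1.0 - mu) * p    # total probability of the "act" signal
    posterior = mu / pr_act         # Pr(good | "act" signal)
    if posterior >= threshold and pr_act > best_value:
        best_p, best_value = p, pr_act

print(best_p, best_value)  # approx. p* = mu/(1-mu) = 3/7, value mu/threshold = 0.6
```

The sender doubles its payoff (0.6 versus the truthful 0.3) precisely because the receiver trusts the committed scheme; if drivers instead infer the scheme from social expectations, that trust, and hence the persuasion gain, is exactly what is at stake.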
Bio: Roy Dong is an Assistant Professor in the Industrial & Enterprise Systems Engineering department at the University of Illinois at Urbana-Champaign. He received a BS Honors in Computer Engineering and a BS Honors in Economics from Michigan State University in 2010. He received a PhD in Electrical Engineering and Computer Sciences at the University of California, Berkeley in 2017, where he was funded in part by the NSF Graduate Research Fellowship. Prior to his current position, he was a postdoctoral researcher in the Berkeley Energy & Climate Institute, a visiting lecturer in the Industrial Engineering and Operations Research department at UC Berkeley, and a Research Assistant Professor in the Electrical and Computer Engineering department at the University of Illinois at Urbana-Champaign. His research uses tools from control theory, economics, statistics, and optimization to understand the closed-loop effects of machine learning, with applications in cyber-physical systems such as the smart grid, modern transportation networks, and autonomous vehicles.