May 19 AM, Eastern Time
Theoretical Foundations for Markov Games (AAMAS 2025)
Detroit, Michigan, USA.
Presenters
Shuai Li (John Hopcroft Center for Computer Science, Shanghai Jiao Tong University)
Canzhe Zhao (John Hopcroft Center for Computer Science, Shanghai Jiao Tong University)
Contact email: shuaili8@sjtu.edu.cn
Description
Markov games (MGs), also known as stochastic games, underlie a broad spectrum of multi-agent reinforcement learning applications in fields such as robotics, autonomous vehicles, finance, and economics. When the system is fully known (i.e., the agents' utilities and the system dynamics are given), computing an equilibrium, a joint strategy from which no agent benefits by deviating unilaterally, is a central and extensively studied problem in game theory. In practice, however, this knowledge is often unavailable a priori. In such cases, MGs are typically addressed by learning an equilibrium through repeated online play, a problem that has recently attracted substantial research attention in sequential decision-making and game theory. The primary goal is to devise algorithms that find an approximate equilibrium with provable sample complexity. This tutorial offers a comprehensive introduction to recent advances in MGs, with particular emphasis on both algorithmic design and theoretical guarantees for this problem.
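To make the equilibrium objective concrete, one standard formalization (our notation, not necessarily that used in the tutorial materials) is the 𝜀-approximate Nash equilibrium: a joint policy \(\pi = (\pi_1, \dots, \pi_m)\) such that

\[ V_i^{\pi}(s_1) \;\ge\; \max_{\pi_i'} V_i^{(\pi_i', \pi_{-i})}(s_1) \;-\; \varepsilon \qquad \text{for every agent } i, \]

where \(V_i^{\pi}(s_1)\) denotes agent \(i\)'s expected cumulative reward from the initial state \(s_1\) under the joint policy \(\pi\), and \(\pi_{-i}\) denotes the policies of all other agents. Setting \(\varepsilon = 0\) recovers an exact Nash equilibrium; the sample-complexity question asks how many episodes of online play suffice to output such a policy with high probability.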
Schedule
Learning in Single-agent Markov Games
  Bandit Learning
    Stochastic Bandits
    Adversarial Bandits
  Reinforcement Learning
    Tabular Reinforcement Learning
    Reinforcement Learning w/ Function Approximation
Computing Equilibria in Markov Games
  Computing Nash Equilibrium (NE) in Two-player Zero-sum Games
  Computing NE beyond Two-player Zero-sum Games
  Relaxed Notions of NE (definitions sketched after this schedule)
    Correlated Equilibrium (CE)
    Coarse Correlated Equilibrium (CCE)
Learning in One-step Markov Games
  Two-agent Zero-sum Matrix Games
  Multi-agent General-sum Matrix Games
Learning in Markov Games
  Tabular Two-agent Zero-sum MGs
  The Curse of Multi-agency: Efficient Learning in Multi-agent MGs
  Function Approximation: Efficient Learning in MGs w/ Large State-action Space
Open Problems
  Optimal Sample Complexity for Learning 𝜀-CCE
  Optimal Sample Complexity for Learning 𝜀-CE
  General Function Approximation
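For reference, the relaxed equilibrium notions appearing in the schedule and the open problems can be stated, in the one-step (normal-form) case, as follows; the notation (\(A_i\) for agent \(i\)'s action set, \(u_i\) for its utility) is ours and may differ from the tutorial's conventions. A distribution \(\pi\) over joint actions is an 𝜀-approximate coarse correlated equilibrium (CCE) if

\[ \mathbb{E}_{a \sim \pi}\big[u_i(a)\big] \;\ge\; \max_{a_i' \in A_i} \mathbb{E}_{a \sim \pi}\big[u_i(a_i', a_{-i})\big] \;-\; \varepsilon \qquad \text{for every agent } i, \]

and an 𝜀-approximate correlated equilibrium (CE) if, for every agent \(i\) and every swap function \(\varphi_i : A_i \to A_i\),

\[ \mathbb{E}_{a \sim \pi}\big[u_i(a)\big] \;\ge\; \mathbb{E}_{a \sim \pi}\big[u_i(\varphi_i(a_i), a_{-i})\big] \;-\; \varepsilon. \]

Every NE is a CE and every CE is a CCE, which is why these relaxations remain meaningful learning targets in general-sum games where computing an NE is hard.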
This tutorial is designed for researchers in reinforcement learning (RL) and game theory, as well as practitioners seeking a deeper understanding of the theoretical foundations of MGs. For researchers from the RL community, it highlights the challenges and key algorithmic ideas that arise from the interactions among the agents in an MG. For those from the game theory community, it illustrates how to handle unknown agent utilities and system dynamics. Prior familiarity with the basics of multi-armed bandits (MABs), reinforcement learning, and the notion of an equilibrium will be helpful for following the material.