Iterative Schemes for Markov Perfect Equilibria
with Mathieu Laurière, H. Mete Soner and Qinxin Yan
We study Markov perfect equilibria in continuous-time dynamic games with finitely many symmetric players. The corresponding Nash system reduces to the Nash-Lasry-Lions equation for the common value function, also known as the master equation in the mean-field setting. In the finite-state space problems we consider, this equation becomes a nonlinear ordinary differential equation admitting a unique classical solution. Leveraging this uniqueness, we prove the convergence of both Picard and weighted Picard iterations, yielding efficient computational methods. Numerical experiments confirm the effectiveness of algorithms based on this approach.
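As a purely illustrative sketch of a weighted Picard scheme (the map F, the weight omega, the tolerance, and the toy example below are placeholders, not the paper's implementation), a damped fixed-point iteration in Python can look as follows.

import numpy as np

def weighted_picard(F, v0, omega=0.5, tol=1e-10, max_iter=10_000):
    # Damped (weighted) Picard iteration: v_{k+1} = (1 - omega) v_k + omega F(v_k).
    # omega = 1 recovers the plain Picard iteration; F is a generic placeholder
    # standing in for one pass of the scheme, e.g. one integration of the
    # Nash-Lasry-Lions ODE system for the common value function.
    v = np.asarray(v0, dtype=float)
    for k in range(max_iter):
        v_new = (1.0 - omega) * v + omega * F(v)
        if np.max(np.abs(v_new - v)) < tol:
            return v_new, k
        v = v_new
    return v, max_iter

# Illustrative contraction on R^2, unrelated to any specific game.
F = lambda v: 0.5 * np.tanh(v) + np.array([0.1, -0.2])
v_star, iters = weighted_picard(F, v0=np.zeros(2))

In this generic form, the weight omega trades off speed against robustness: omega = 1 is fastest when F is a strict contraction, while smaller weights can stabilize the iteration when it is not.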
Markov Perfect Equilibria in Discrete Finite-Player and Mean-Field Games
with H. Mete Soner and Atilla Yılmaz
We study dynamic finite-player and mean-field stochastic games within the framework of Markov perfect equilibria (MPE). Our focus is on discrete-time, discrete-space structures without monotonicity assumptions. Unlike their continuous-time analogues, discrete-time finite-player games generally do not admit a unique MPE. However, we show that, remarkably, uniqueness is recovered when the time steps are sufficiently small, and we provide examples demonstrating the necessity of this assumption. This result, established without relying on any monotonicity conditions, underscores the importance of inertia in dynamic games. In both the finite-player and mean-field settings, we show that MPE correspond to solutions of the Nash-Lasry-Lions equation, known as the master equation in the mean-field case. We exploit this connection to establish the convergence of discrete-time finite-player games to their mean-field counterpart in short time. Finally, we prove the convergence of finite-player games to their continuous-time counterparts on every time horizon.
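In generic notation (not taken from the paper), a Markov perfect equilibrium of a discrete-time, finite-state N-player game with step size h can be viewed as a fixed point of the coupled dynamic-programming system
\[
V^i(t,x) \;=\; \min_{a^i}\Big\{ h\, c^i\big(x, a^i, \alpha^{-i,*}(t,x)\big) \;+\; \mathbb{E}\big[\, V^i(t+h, X_{t+h}) \;\big|\; X_t = x,\; a^i,\; \alpha^{-i,*}(t,x) \,\big] \Big\}, \qquad i = 1,\dots,N,
\]
where \alpha^{-i,*} collects the other players' equilibrium feedback controls and the minimizer defines player i's own feedback \alpha^{i,*}(t,x); smallness of the step h is what drives the uniqueness result described above.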
Synchronization Games
with H. Mete Soner, forthcoming in Mathematics of Operations Research (2025)
We propose a new mean-field game model with two states to study synchronization phenomena, and we provide a comprehensive characterization of stationary and dynamic equilibria along with their stability properties. The game undergoes a phase transition with increasing interaction strength. In the subcritical regime, the uniform distribution, representing incoherence, is the unique and stable stationary equilibrium. Above the critical interaction threshold, the uniform equilibrium becomes unstable and there is a multiplicity of stationary equilibria that are self-organizing. Under a discounted cost, dynamic equilibria spiral around the uniform distribution before converging to the self-organizing equilibria. With an ergodic cost, however, unexpected periodic equilibria around the uniform distribution emerge.
Optimal Control and Potential Games in the Mean Field
with H. Mete Soner
We study a mean-field optimal control problem with general non-Markovian dynamics, including both common noise and jumps. We show that its minimizers are Nash equilibria of an associated mean-field game of controls. These types of games are necessarily potential, and the Nash equilibria derived as the minimizers of the control problem are closely connected to McKean-Vlasov equations of Langevin type. To illustrate the general theory, we present several examples, including a mean-field game of controls with interactions through a price variable, and mean-field Cucker-Smale flocking and Kuramoto models. We also establish the invariance property of the value function, a key ingredient used in our proofs.
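For orientation, and in generic notation rather than the paper's, a McKean-Vlasov equation of Langevin type reads
\[
dX_t \;=\; -\nabla V(X_t)\, dt \;-\; \big(\nabla W * \mu_t\big)(X_t)\, dt \;+\; \sigma\, dB_t, \qquad \mu_t = \mathrm{Law}(X_t),
\]
with a confinement potential V, an interaction potential W, and noise level \sigma; it is equations of this type to which the Nash equilibria obtained as minimizers of the control problem are closely connected.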