Mini Courses at LPS XIII

Deepak Dhar, IISER Pune: Stochastic evolution in the Minority Game

Abstract: In these lectures, I will discuss the stochastic evolution of a system of agents playing a variation of the Minority Game. In this game, an odd number of agents each choose one of two possible options every day, and the winning option is the one that had fewer takers. This model has been studied extensively as a primitive model of stock markets, and as a simple model of learning, adaptation and coevolution.
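As a minimal illustration of the round structure described above (not code from the lectures; the agent count and the simple coin-flip strategy are illustrative assumptions), one round of the basic game can be sketched as:

```python
import random

def play_round(n_agents=101, p=0.5):
    """Play one round of the basic Minority Game.

    Each of an odd number of agents independently picks option 0 or 1
    (here with an illustrative coin-flip strategy: probability p of
    choosing 1). The winning option is the one with fewer takers;
    returns that option and the number of winners.
    """
    choices = [1 if random.random() < p else 0 for _ in range(n_agents)]
    ones = sum(choices)
    zeros = n_agents - ones
    winner = 1 if ones < zeros else 0  # minority side wins; n_agents odd, so no tie
    return winner, min(ones, zeros)

random.seed(0)
rounds = [play_round() for _ in range(1000)]
avg_winners = sum(w for _, w in rounds) / len(rounds)
```

Since the number of agents is odd, the number of winners in a round is at most (n_agents - 1) / 2; how close the agents can get to this bound, and how fast, is one of the questions the lectures address.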

I will start with a quick summary of basic game-theory concepts, and then introduce the standard Minority Game. I will then discuss the game when the agents use probabilistic strategies. I will show that the standard Nash equilibrium does not give satisfactory strategies for this game, and discuss a different solution concept (i.e., a general guiding principle for finding optimal strategies) called the co-action equilibrium. I will also discuss the limit in which agents want to optimize their own long-time average winnings, which requires them to coordinate their choices to reach a periodic state in the least time.

Suggested readings:

Participants could start with The Minority Game: an introductory guide, by E. Moro [arXiv:cond-mat/0402651], or some equivalent review. The lectures are based on the following papers:

  • Emergent cooperation amongst competing agents in minority games: Deepak Dhar, V. Sasidevan, and Bikas K. Chakrabarti, Physica A 390 (2011) 3477–3485 [arXiv:1102.4230]
  • Strategy switches and co-action equilibria in a minority game: V. Sasidevan and Deepak Dhar, Physica A 402 (2014) 306–317 [arXiv:1212.6601]
  • Achieving Perfect Coordination amongst Agents in the Co-Action Minority Game: Hardik Rajpal and Deepak Dhar, Games 2018, 9, 27; doi:10.3390/g9020027 [arXiv:1802.06770]


Kavita Ramanan, Brown University: Scaling limits of interacting particle systems

Abstract: Random phenomena, in a variety of fields, ranging from engineering, computer science and statistical physics to biology, are modelled in terms of large collections of interacting stochastic processes (or particles), whose interactions are governed by an underlying (possibly random) graph. These processes are typically too complicated to be amenable to an exact analysis. Instead, one often looks for approximations that can provide qualitative insight into the processes, and can be rigorously justified by "scaling limit" theorems, in a suitable asymptotic regime.

In these lectures, I will introduce the interacting stochastic process models of interest, and describe theorems that characterize their limits as the number of interacting processes goes to infinity. I will first consider the more classical case, when the underlying graph is complete or dense, which leads to so-called mean-field limits, and then describe more recent work on the case when the underlying graph is sparse, which requires a different set of tools for its analysis. The course will be kept at a basic level and should be accessible at the master's level, assuming only basic notions of measure-theoretic probability and some knowledge of Markov chains.
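As a hedged toy sketch of the mean-field setting (not one of the specific models of the course; the dynamics, parameters, and Euler discretization below are illustrative assumptions), consider n particles on the complete graph, each drifting toward the empirical mean of the whole system:

```python
import math
import random

def simulate(n=500, steps=200, dt=0.01, sigma=0.5, seed=1):
    """Euler scheme for n mean-field interacting diffusions:
        dX_i = (m_n - X_i) dt + sigma dW_i,
    where m_n is the empirical mean of all n particles. Because each
    particle interacts with the others only through m_n, the empirical
    measure is expected to concentrate as n grows (a mean-field limit).
    """
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(n)]  # i.i.d. initial conditions
    for _ in range(steps):
        m = sum(x) / n  # interaction enters only through this statistic
        x = [xi + (m - xi) * dt + sigma * math.sqrt(dt) * rng.gauss(0.0, 1.0)
             for xi in x]
    return x

x = simulate()
m = sum(x) / len(x)
var = sum((xi - m) ** 2 for xi in x) / len(x)
```

The key structural feature, which the dense case exploits, is that each particle feels the others only through the empirical measure, so a single particle's evolution decouples in the limit.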