About

This tutorial will be held at the Eleventh International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2012).

Introduction

Multi-agent reinforcement learning (MARL) is an important and fundamental topic within agent-based research. After giving successful tutorials on this topic at EASSS 2004 (the European Agent Systems Summer School), ECML 2005, ICML 2006, EWRL 2008 and AAMAS 2009-2011, with different collaborators, we now offer a revised and updated tutorial covering both theoretical and practical aspects of MARL.

Participants will be taught the basics of single-agent reinforcement learning and the associated convergence guarantees for Markov Decision Processes. We will then outline why these guarantees no longer hold once multiple agents learn simultaneously. We will present practical approaches to scaling single-agent reinforcement learning to settings where multiple agents influence each other, and introduce a framework, based on game theory and evolutionary game theory, that allows a thorough analysis of the dynamics of multi-agent learning, culminating in a taxonomy of learning algorithms. The tutorial closes with a demonstration session showing the viability of reinforcement learning in several key application domains, giving participants an interactive environment in which to put the preceding discussion of the challenges and solutions of multi-agent reinforcement learning into practice.
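
As a taste of the morning material, the following minimal Python sketch shows tabular Q-learning, the kind of single-agent algorithm the tutorial starts from. The toy chain environment, the state and action counts, and the parameter values are illustrative assumptions, not part of the tutorial material.

import random

N_STATES, N_ACTIONS = 5, 2   # assumed toy problem size
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # assumed parameter values

Q = [[0.0] * N_ACTIONS for _ in range(N_STATES)]

def step(state, action):
    """Assumed toy chain: action 1 moves right, action 0 moves left;
    reaching the last state yields reward 1 and ends the episode."""
    nxt = min(state + 1, N_STATES - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if nxt == N_STATES - 1 else 0.0
    return nxt, reward, nxt == N_STATES - 1

state = 0
for _ in range(10000):
    # epsilon-greedy action selection
    if random.random() < EPSILON:
        action = random.randrange(N_ACTIONS)
    else:
        action = max(range(N_ACTIONS), key=lambda a: Q[state][a])
    nxt, reward, done = step(state, action)
    # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[nxt]) - Q[state][action])
    state = 0 if done else nxt

Under the usual assumptions (a Markovian environment and suitably decaying learning and exploration rates), such updates are guaranteed to converge to the optimal value function; the rest of the tutorial examines what happens when these assumptions break down in multi-agent settings.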

Impact and target audience

Multi-agent systems are receiving increasing attention from the research community. This can be observed from the full articles accepted at AAMAS 2010, of which about 65% deal with multi-agent systems (the numbers in this paragraph were obtained by manually labeling the articles based on their titles). Multi-agent systems are inherently complex, if not impossible, to control by design, which explains the community's keen interest in adaptive multi-agent systems. Indeed, within the topic of multi-agent systems, about 40% of the accepted full articles at AAMAS 2010 concern adaptive multi-agent systems. The tutorial is aimed at researchers who face a multi-agent setting and are considering an adaptive solution. It assumes no prior knowledge specific to MARL or (evolutionary) game theory. The Morning 2, Afternoon 1 and Afternoon 2 blocks build on the fundamental concepts covered in Morning 1; other than that, these three blocks are self-contained but complementary.

Reinforcement learning is one of the most popular approaches to single-agent learning: it is explicitly agent-centric, it is grounded in psychological models, and it provides convergence guarantees under the proper assumptions. It has been applied to multi-agent settings with promising results. However, the classical theoretical convergence guarantees are lost there, since common assumptions such as a Markovian environment are violated. This tutorial presents a new perspective and theoretical framework for analyzing and improving MARL, including convergence guarantees. Participants receive the necessary knowledge to apply this analysis to their specific multi-agent setting in order to devise and refine solutions based on MARL (Morning sessions). In addition, participants will get an introduction to existing taxonomies of MARL algorithms (Afternoon 1) and will see how MARL can be applied to practical problems (Afternoon 2).
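
To make the violation of the Markov assumption concrete, here is a minimal Python sketch, with assumed payoffs and parameter values, of two independent Q-learners repeatedly playing matching pennies. Each agent's effective environment contains the other, constantly adapting learner, so the stationarity that single-agent convergence proofs rely on is lost.

import random

ALPHA, EPSILON = 0.1, 0.1  # assumed parameter values
# Matching pennies: payoff for the row player; the column player
# receives the negative (zero-sum game).
PAYOFF = [[1, -1], [-1, 1]]

q_row = [0.0, 0.0]
q_col = [0.0, 0.0]

def choose(q):
    # epsilon-greedy action selection over the two actions
    if random.random() < EPSILON:
        return random.randrange(2)
    return max(range(2), key=lambda a: q[a])

for _ in range(10000):
    a_row, a_col = choose(q_row), choose(q_col)
    r = PAYOFF[a_row][a_col]
    # Stateless (bandit-style) Q-updates: each agent ignores the other,
    # so the reward it observes for the same action drifts as the
    # opponent adapts, and neither agent's value estimates settle.
    q_row[a_row] += ALPHA * (r - q_row[a_row])
    q_col[a_col] += ALPHA * (-r - q_col[a_col])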