Games, Approachability and Learning

Introduction

Dear Friends and Colleagues,

We invite you to the Games, Approachability and Learning Workshop, which will take place online on January 18-19, 2021.

Originally planned to take place at IHP in Paris, the workshop has been moved online following the recent tightening of health restrictions across Europe.

The goal of the workshop is to highlight recent connections between Game Theory, Approachability and Learning, and to foster collaboration across these areas. To achieve this goal, the workshop will include three tutorials given by experts in the field. It will not include paper presentations by participants.

The three tutorials will be given by Edouard Pauwels (Toulouse 3 Paul Sabatier), Vianney Perchet (ENSAE & Criteo AI Lab) and Marc Quincampoix (Université de Brest).

The workshop is sponsored by GameNet.

Venue

ONLINE

The presentations will take place on Zoom, at the address obtained by combining

https://us02web.zoom.us/j/

and

82378898953

Registration

To be kept informed about the workshop, you can register by filling out this form.

Planning

Monday 18 January 2021

9h45 : Introduction

10h-11h : Vianney Perchet

11h-11h15 : Break

11h15-12h15 : Vianney Perchet

14h-14h45 : Marc Quincampoix

14h45-15h00 : Break

15h00-15h45 : Marc Quincampoix

15h45-16h : Open discussions

Tuesday 19 January 2021

10h-10h45 : Edouard Pauwels

10h45-11h : Break

11h-11h45 : Edouard Pauwels

11h45-12h : Conclusion

Abstracts

Vianney Perchet (ENSAE & Criteo AI Lab)

Approachability, regret and online learning: theory and open questions

In this tutorial, I will briefly present the concepts of approachability and regret minimization and how they are related. Roughly speaking, the latter is a variant of an optimisation problem in which the loss function changes at every time step, possibly adversarially and/or chosen by some other player; the former is the online variant of multi-criteria optimisation. Interestingly, both concepts have practical applications in game theory, which I will also review. It is by now well understood that regret minimization can be achieved through approachability (and vice versa), so the two problems are essentially equivalent; I will also illustrate these reductions.

On the other hand, many questions remain open, and will be discussed, in particular when the feedback is partial (the bandit case) and/or the data are stochastic.
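
For reference, one standard way to formalize the two notions discussed in this tutorial is the following; the notation below (losses $\ell_t$, actions $a_t$, vector payoff $g$, target set $C$) is ours and not necessarily the speaker's.

\[
R_T \;=\; \max_{a \in \mathcal{A}} \sum_{t=1}^{T} \bigl( \ell_t(a_t) - \ell_t(a) \bigr),
\qquad \text{regret minimization means } \frac{R_T}{T} \longrightarrow 0 .
\]

\[
C \subset \mathbb{R}^d \text{ (closed) is approachable if the player has a strategy guaranteeing }
d\Bigl( \frac{1}{T} \sum_{t=1}^{T} g(a_t, b_t),\, C \Bigr) \longrightarrow 0
\text{ against any opponent.}
\]

For closed convex sets, Blackwell's theorem states that $C$ is approachable if and only if, for every mixed action $q$ of the opponent, there exists a mixed action $p$ of the player with $g(p,q) \in C$.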


Marc Quincampoix (Université de Brest)

Approachability and differential games

We investigate (weak and strong) approachability for repeated games through suitable differential games. We study approachability with partial monitoring via differential games on the Wasserstein space of probability measures. We also present a specific class of strategies for differential games that is particularly well suited to making the link with strategies of the associated repeated games.
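
For readers unfamiliar with the weak/strong distinction, the standard definitions are roughly as follows (our notation, with $g_t$ the stage vector payoff, $\sigma$ a strategy of the player and $\tau$ a strategy of the opponent); the tutorial may use a different but equivalent formalization.

\[
\text{Strong approachability of } C:\quad
\exists\,\sigma\ \ \forall \varepsilon>0\ \ \exists N\ \ \forall n \ge N\ \ \forall \tau:\quad
\mathbb{E}_{\sigma,\tau}\Bigl[\, d\Bigl( \tfrac{1}{n}\sum_{t=1}^{n} g_t,\; C \Bigr) \Bigr] \le \varepsilon .
\]

\[
\text{Weak approachability of } C:\quad
\forall \varepsilon>0\ \ \exists N\ \ \forall n \ge N\ \ \exists\,\sigma_n\ \ \forall \tau:\quad
\mathbb{E}_{\sigma_n,\tau}\Bigl[\, d\Bigl( \tfrac{1}{n}\sum_{t=1}^{n} g_t,\; C \Bigr) \Bigr] \le \varepsilon .
\]

The only difference is the order of quantifiers: in the weak version the strategy may depend on the horizon $n$, and this fixed-horizon viewpoint is one reason differential games enter the picture.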


Edouard Pauwels (IRIT)

An introduction to Generative Adversarial Networks (GANs)

The purpose of this presentation is to introduce a popular class of models called Generative Adversarial Networks (GANs). The main building blocks of these generative models are artificial neural networks, a class of machine learning tools used to reproduce observed phenomena from examples. The way GAN models are expressed and trained is formulated as a zero-sum game between two competing neural networks. The presentation will remain at a tutorial level; in particular, no prior knowledge of neural networks is expected.

The presentation will start with a short history of the emergence of neural networks and GANs, together with some recent applications. A condensed overview of the main technical mechanisms behind neural network training will be given. We will then present the foundational work that introduced GANs and discuss the difficulties that arise when training such models, which are mostly due to the zero-sum game formulation. It turns out that GANs can be viewed as a generic algorithmic way to enforce equality between probability distributions. This point of view gave rise to variations on the original concept, such as Wasserstein GANs, and provides insights for stabilizing training algorithms. The last part of the presentation will focus on algorithmic solutions that have been proposed to circumvent the difficulties of GAN training, and we will conclude with some open questions related to GANs.
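
As a concrete illustration of the zero-sum game mentioned in the abstract, here are the standard objectives from the literature (with $G$ the generator, $D$ the discriminator or critic, $p_{\mathrm{data}}$ the data distribution and $p_z$ the noise distribution; the tutorial's notation may differ).

\[
\text{GAN (Goodfellow et al., 2014):}\qquad
\min_{G}\ \max_{D}\ \;
\mathbb{E}_{x \sim p_{\mathrm{data}}}\bigl[ \log D(x) \bigr]
\;+\;
\mathbb{E}_{z \sim p_{z}}\bigl[ \log\bigl( 1 - D(G(z)) \bigr) \bigr]
\]

\[
\text{Wasserstein GAN:}\qquad
\min_{G}\ \max_{\|D\|_{\mathrm{Lip}} \le 1}\ \;
\mathbb{E}_{x \sim p_{\mathrm{data}}}\bigl[ D(x) \bigr]
\;-\;
\mathbb{E}_{z \sim p_{z}}\bigl[ D(G(z)) \bigr]
\]

At the optimum of the inner maximization, the first objective measures (up to constants) a Jensen-Shannon divergence and the second a Wasserstein-1 distance between the generated and data distributions, which is the sense in which GANs "enforce equality between probability distributions".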



Organizing Committee

Xavier Venel

Bruno Ziliotto

Eilon Solan


Sponsors