Slides_Talks

Vianney Perchet (ENSAE & CriteoAI Lab)

Approachability, regret and online learning. Theory and open questions

In this tutorial, I will quickly present the concepts of approachability and regret minimization and how they are related. Roughly speaking, the latter is a variant of an optimisation problem where the loss function changes at every time step, possibly adversarially and/or chosen by another player; the former is the online variant of a multi-criteria optimisation problem. Quite interestingly, both concepts have practical applications in game theory, which I will also review. It is by now well understood that regret minimization can be achieved through approachability (and vice versa), hence the two are more or less equivalent; I will also illustrate these reductions.
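As a concrete illustration of regret minimization (not part of the abstract itself), here is a minimal sketch of the classical multiplicative-weights (Hedge) forecaster, assuming losses lie in [0, 1] and a fixed learning rate; the function name and the toy loss sequence are illustrative choices, not anything from the talk.

```python
import numpy as np

def hedge(loss_rounds, eta=0.5):
    """Multiplicative-weights (Hedge) forecaster.

    loss_rounds: sequence of loss vectors, one per round, entries in [0, 1]
    (possibly chosen adversarially). Returns the forecaster's cumulative
    expected loss and its regret against the best fixed action in hindsight.
    """
    K = len(loss_rounds[0])
    weights = np.ones(K)
    total_loss = 0.0
    cum_losses = np.zeros(K)                 # per-action cumulative losses
    for losses in loss_rounds:
        losses = np.asarray(losses, dtype=float)
        p = weights / weights.sum()          # play the normalized weights
        total_loss += p @ losses             # expected loss this round
        cum_losses += losses
        weights *= np.exp(-eta * losses)     # exponential reweighting
    regret = total_loss - cum_losses.min()   # vs. best fixed action
    return total_loss, regret

# Toy adversarial sequence over two actions: the loss alternates,
# so no fixed action is good, yet Hedge keeps the regret bounded.
rounds = [[1.0, 0.0] if t % 2 == 0 else [0.0, 1.0] for t in range(100)]
total, regret = hedge(rounds)
```

With this alternating sequence, either fixed action accumulates a loss of 50 over 100 rounds, and the regret stays below the standard bound ln(K)/eta + eta*T/8.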

On the other hand, many questions remain open, and will be discussed, when the feedback is partial (the bandit case) and/or the data are stochastic.

Marc Quincampoix (Université de Brest)

Approachability and differential games

We investigate (weak and strong) approachability for repeated games through suitable differential games. We study approachability with partial monitoring via differential games on the Wasserstein space of probability measures. We also present a specific class of strategies for differential games that is well suited to making the link with strategies of the associated repeated games.

Edouard Pauwels (IRIT)



An introduction to Generative Adversarial Networks (GANs)

The purpose of this presentation is to introduce a class of popular models called Generative Adversarial Networks (GANs). The main building blocks of these generative models are artificial neural networks, a class of machine learning tools used to reproduce observed phenomena based on examples. The way GAN models are expressed and trained is formulated as a zero-sum game between two competing neural networks. The presentation will remain at a tutorial level; in particular, no prior knowledge of neural networks is expected.

The presentation will start with a short history of the emergence of neural networks and GANs, together with some recent applications. A condensed overview of the main technical mechanisms behind neural network training will be given. We will then present the foundational work which introduced GANs and discuss the difficulties that arise when training such models; these are mostly due to the zero-sum game formulation. It turns out that GANs can be viewed as a generic algorithmic way to enforce equality between probability distributions. This point of view gave rise to variations around the original concept of GANs, such as Wasserstein GANs, and provides insights into stabilizing training algorithms. The last part of the presentation will focus on algorithmic solutions that were proposed to circumvent the difficulties of GAN training; we will conclude with some open questions related to GANs.
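The training instability caused by the zero-sum formulation can be seen in a toy example (an illustration of the general phenomenon, not an example from the talk): running simultaneous gradient descent-ascent on the bilinear game f(x, y) = x*y, whose unique equilibrium is the origin, makes the iterates spiral outward instead of converging.

```python
import numpy as np

def gda(steps=200, lr=0.1):
    """Simultaneous gradient descent-ascent on f(x, y) = x * y.

    The min player updates x by descent, the max player updates y by
    ascent, both from the same iterate. Each step multiplies the distance
    to the equilibrium (0, 0) by sqrt(1 + lr**2), so the iterates rotate
    and slowly diverge -- a caricature of GAN training instability.
    """
    x, y = 1.0, 1.0
    radii = []
    for _ in range(steps):
        gx, gy = y, x                      # grad_x f = y, grad_y f = x
        x, y = x - lr * gx, y + lr * gy    # simultaneous updates
        radii.append(np.hypot(x, y))       # distance to the equilibrium
    return radii

radii = gda()
# The distance to the equilibrium grows monotonically instead of shrinking.
```

Averaging the iterates, alternating the updates, or adding regularization (as in some of the stabilization schemes mentioned above) changes this picture; the plain simultaneous scheme shown here is the unstable baseline.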