The workshop will take place in Dan David Building, Room 205.
Wednesday 29-January-2020
- 9:30 - 11:00: Tutorial: David Levine. Title: Learning Nash Equilibrium
- 11:00 - 11:15: Coffee Break
- 11:15 - 12:45: Tutorial: Johannes Horner. Title: Dynamic Allocation Problems
- 12:45 - 14:15: Lunch
- 14:15 - 15:05: Open Problem Session. Chair: Miklos Pinter. Presenters: David Levine, Johannes Horner, Abraham Neyman, Steffen Eibelshäuser, Bruno Ziliotto
- 15:05 - 16:30: Working Session. Chair: Agnieszka Wiszniewska-Matyszkiel
- 16:30 - 17:30: Transport to Walking Tour
- 17:30 - 20:00: Walking Tour
- 20:00: Dinner
Thursday 30-January-2020
- 10:30 - 12:00: Tutorial: Nicolas Vieille. Title: Social Learning
- 12:00 - 13:00: Lunch
- 13:00 - 14:30: Tutorial: Eran Shmaya. Title: Worst Case Regret
- 14:30 - 14:45: Coffee Break
- 14:45 - 15:30: Open Problem Session. Chair: Dario Bauso. Presenters: Eran Shmaya, Rann Smorodinsky, David Lagziel, Catherine Rainer, Agnieszka Wiszniewska-Matyszkiel, Fedor Sandomirskiy
- 15:30 - 17:00: Working Session. Chair: Yevgeny Tsodikovich
Tutorials:
- David Levine: Learning Nash Equilibrium: The tutorial discusses learning processes that converge globally to Nash equilibrium. How do they work? Do they make sense? What sort of information assumptions are required? Are certain types of Nash equilibria more likely to arise than others? The tutorial will cover the basic ideas and recent results.
- Johannes Horner: Dynamic Allocation Problems: The tutorial discusses dynamic allocation rules, or "trades of favors". These are classes of strategies in repeated games whose structure involves adverse selection or moral hazard. Such rules may involve keeping track of the number of favors exchanged, or introduce "chips" as a fictitious currency. The tutorial will review the literature and cover the basic ideas and open problems.
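As a toy illustration of the "chips" idea in the abstract above, here is a minimal, hypothetical sketch of a chip-based favor-trading rule. The class name, the two-player setup, the initial chip counts, and the grant/deny rule are all illustrative assumptions, not taken from any specific paper:

```python
class ChipMechanism:
    """Hypothetical sketch of a chip-based favor-trading rule:
    a player may request a favor only while holding a chip, and
    granting a favor transfers one chip from requester to granter."""

    def __init__(self, chips_per_player=2):
        # Both players start with the same endowment of chips.
        self.chips = {"A": chips_per_player, "B": chips_per_player}

    def request(self, requester, granter):
        """Requester asks granter for a favor; returns True if granted."""
        if self.chips[requester] == 0:
            return False  # out of chips: the favor is denied
        # Pay one chip for the favor.
        self.chips[requester] -= 1
        self.chips[granter] += 1
        return True
```

The point of the chip bookkeeping is that it caps how many net favors a player can extract, which is what keeps cooperation incentive-compatible in the repeated game.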
- Nicolas Vieille: Social Learning: The central theme of the tutorial is (Bayesian) social learning. In the canonical version of such incomplete-information models, a sequence of short-lived agents make decisions in turn, learning from previous agents' behavior. The tutorial will cover seminal and recent results.
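The canonical sequential model sketched in the abstract can be illustrated with a minimal simulation in the spirit of the Bikhchandani-Hirshleifer-Welch herding model. The parameter names, the symmetric binary-signal structure, and the convention of breaking ties toward one's own signal are illustrative assumptions:

```python
import random

def social_learning(theta, p, n, seed=0):
    """Simulate n short-lived agents guessing a binary state theta.

    Each agent receives a private signal equal to theta with
    probability p > 1/2 and observes all predecessors' actions.
    'lead' tracks the net number of publicly inferred signals
    favoring state 1; once |lead| >= 2, public information
    outweighs any single private signal and a cascade starts,
    after which actions stop revealing private signals.
    """
    rng = random.Random(seed)
    lead = 0
    actions = []
    for _ in range(n):
        signal = theta if rng.random() < p else 1 - theta
        if lead >= 2:          # cascade on action 1: signal ignored
            action = 1
        elif lead <= -2:       # cascade on action 0: signal ignored
            action = 0
        else:                  # action reveals the private signal
            action = signal    # (ties broken toward one's own signal)
            lead += 1 if action == 1 else -1
        actions.append(action)
    return actions
```

Once a cascade starts, all later agents imitate regardless of their signals, so learning stops; with positive probability the cascade is on the wrong action, which is the classic inefficiency result.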
- Eran Shmaya: Worst Case Regret: The worst-case regret approach to uncertainty goes back at least to Savage (1954) and Hannan (1957). Under this approach, when a decision maker has to choose an action and his payoff depends on both the action and an unknown state, he chooses the action that minimizes the worst-case regret across all possible states. The regret is defined as the difference between what the decision maker could achieve if he knew the state and what he achieves under the chosen action. I will present some applications of the worst-case regret approach in game theory, online learning, and mechanism design.
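The minimax-regret rule defined in the abstract can be computed directly for a small finite decision problem. The payoff matrix below is a made-up example, not from the tutorial:

```python
# Hypothetical payoff matrix: rows are actions, columns are states.
payoffs = {
    "a1": {"s1": 10, "s2": 2},
    "a2": {"s1": 6, "s2": 6},
    "a3": {"s1": 1, "s2": 9},
}
states = ["s1", "s2"]

# Best achievable payoff in each state: what a decision maker
# who knew the state would obtain.
best_in_state = {s: max(payoffs[a][s] for a in payoffs) for s in states}

def worst_case_regret(action):
    """Largest shortfall of this action relative to the informed
    optimum, taken over all possible states."""
    return max(best_in_state[s] - payoffs[action][s] for s in states)

# The minimax-regret choice minimizes that worst-case shortfall.
choice = min(payoffs, key=worst_case_regret)
```

Here "a2" is the minimax-regret choice (worst-case regret 4), even though it is never the best action in any single state; this hedging across states is the signature of the criterion.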
Confirmed Participants: