**TITLE**
BILEVEL OPTIMIZATION: THE GOOD AND THE LESS GOOD, ILLUSTRATED THROUGH FOUR APPLICATIONS

**ABSTRACT**

Optimization problems constrained to the solution set of another problem can be traced back to Stackelberg, or even further, and are at the heart of fields in economics such as principal-agent theory and product line design. However, it was only much later that the term 'bilevel programming' (among many others) was coined to describe the situation where a 'leader' integrates the rational reaction of a partially controlled 'follower' into its own optimization process. The paradigm is now firmly entrenched in many areas, such as energy modelling, where it is routinely used to address user behaviour in a competitive environment. In this talk, I will argue that the main issue associated with the numerical resolution of bilevel programs is combinatorial, and that efficient algorithms must rely on the structure underlying the specific model considered. This will be illustrated through four applications. The first is concerned with the determination of **revenue-maximizing tolls** on a transportation network, under various assumptions concerning the selection of paths by 'selfish' users. The second is a **continuous network design problem** involving user-optimized ('Wardropian') flows, for which heuristic procedures with guaranteed worst-case performance exist. The third is an energy model cast within the framework of a **smart grid** that acts as a middleperson between a revenue-maximizing provider and its customers. The fourth is a discrete-continuous **location-queueing** model in which the leader firm seeks, within a competitive environment, to maximize its market share through location and service-level decisions. In all cases, we focus on the theoretical complexity, which may differ greatly from the practical complexity, as well as on the design of exact or 'semi-exact' algorithms.

**Bio:** Patrice Marcotte holds a PhD in computer science (operations research) from the University of Montreal. His research initially focused on continuous bilevel programming. It then shifted to optimization approaches for the solution of monotone variational inequalities in general, and the network equilibrium problem in particular. Later, he developed a bilevel pricing framework well suited to network-based revenue management, as well as to various situations that occur in the energy sector and that involve a 'smart grid'. He has also published a paper on badminton, and is the co-author of two cycling guides.

**Title:**

"Smooth Calibration, Leaky Forecasts, Finite Recall, and Nash Dynamics"

(joint work with Dean Foster)

**Abstract:**

How good is a forecaster? Assume for concreteness that every day the forecaster issues a forecast of the type "the chance of rain tomorrow is 30%." A simple test one may conduct is to calculate the proportion of rainy days out of those days on which the forecast was 30%, and compare it to 30%; and do the same for all other forecasts. A forecaster is said to be calibrated if, in the long run, the differences between the actual proportions of rainy days and the forecasts are small, no matter what the weather really was. We start from the classical result that calibration can always be guaranteed by randomized forecasting procedures (a simple proof will be provided). We propose to smooth out the calibration score, which measures how good a forecaster is, by combining nearby forecasts. While regular calibration can be guaranteed only by randomized forecasting procedures, we show that smooth calibration can be guaranteed by deterministic procedures. As a consequence, it does not matter if the forecasts are leaked, i.e., made known in advance: smooth calibration can nevertheless be guaranteed (while regular calibration cannot). Moreover, our procedure has finite recall, is stationary, and all forecasts lie on a finite grid. We then consider smooth calibrated learning in n-person games, and show that in the long run it is close to Nash equilibria most of the time.
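The simple test described at the start of the abstract can be sketched in a few lines of code. This is an illustrative example only (the function name and weighting are my own choices, not the talk's definitions): it groups days by the forecast issued, compares each forecast value with the empirical frequency of rain on those days, and averages the gaps weighted by how often each forecast was used.

```python
from collections import defaultdict

def calibration_error(forecasts, outcomes):
    """Frequency-weighted average gap between each forecast value and
    the empirical proportion of rainy days on which it was issued.
    forecasts: list of probabilities in [0, 1]; outcomes: 0/1 per day."""
    days_by_forecast = defaultdict(list)
    for p, rained in zip(forecasts, outcomes):
        days_by_forecast[p].append(rained)
    n = len(forecasts)
    return sum(
        (len(ys) / n) * abs(sum(ys) / len(ys) - p)
        for p, ys in days_by_forecast.items()
    )

# A forecaster who says 30% every day, over a stretch where it rained
# on 3 days out of 10, is perfectly calibrated on this record:
f = [0.3] * 10
y = [1, 0, 0, 1, 0, 0, 0, 1, 0, 0]
print(calibration_error(f, y))  # 0.0
```

A forecaster can score well here for trivial reasons (e.g. always forecasting the long-run rain frequency); the abstract's smooth variant instead combines nearby forecast values rather than binning them exactly, which is what makes deterministic guarantees possible.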

**Bio:** Sergiu Hart is the Kusiel-Vorreuter University Professor, Professor of Mathematics, Professor of Economics, and Member of the Center for the Study of Rationality, at the Hebrew University of Jerusalem. He received his Ph.D. in mathematics from Tel-Aviv University in 1976. His previous academic appointments were at Stanford, Tel-Aviv, and Harvard Universities. Sergiu Hart's main area of research is game theory and economic theory, with additional contributions in mathematics, computer science, probability, and statistics. From 1991 to 1998 he was the founding director of the Center for the Study of Rationality. He served as President of the Israel Mathematical Union in 2005-2006, and as President of the Game Theory Society in 2008-2010. In 1998 he received the Rothschild Prize. Sergiu Hart was elected Fellow of the Econometric Society in 1985, Member of the Israel Academy of Sciences and Humanities in 2006, Member of Academia Europaea in 2012, and Foreign Member of the American Academy of Arts and Sciences in 2016.