This year's schedule is here.
20-June-2023 Shiri Alon Eron, Bar Ilan University
13-June-2023 Isaac Sonin, UNC Charlotte
23-May-2023 Dima Shaiderman, Technion
16-May-2023 Ran Snitkovsky, Tel Aviv University
9-May-2023 Guy Avni, Haifa University
2-May-2023 Itai Arieli, Technion
28-Mar-2023 Cancelled due to the protest against the changes in the legal system
21-Mar-2023 David Lagziel, Ben Gurion University
14-Mar-2023 Yevgeni Tsodikovich, Bar Ilan University
17-Jan-2023 Konstantin Zabarnyi, Technion
3-Jan-2023 Ziv Hellman, Bar Ilan University
27-Dec-2022 Isaac Meilijson, Tel Aviv University
20-Dec-2022 Igal Milchtaich, Bar Ilan University
13-Dec-2022 Ron Peretz, Bar Ilan University
6-Dec-2022 János Flesch, Maastricht University
29-Nov-2022 Abraham Neyman, Hebrew University of Jerusalem
22-Nov-2022 Ronen Gradwohl, Ariel University
8-Nov-2022 Catherine Rainer, Université de Brest
20-Apr-2020 Bernhard von Stengel, LSE (Online)
6-Apr-2020 John Levy, The University of Glasgow (Online)
21-Jan-2020 Ehud Kalai, Northwestern University
7-Jan-2020 Bar Light
31-Dec-2019 Adam Lampert, Arizona State University
24-Dec-2019 Marco Scarsini, Luiss University
17-Dec-2019 Robert Simon, London School of Economics
10-Dec-2019 Oren Dean, Technion
3-Dec-2019 Itai Arieli, Technion
26-Nov-2019 Aditya Aradhye, Maastricht University
20-Apr-2020 Bernhard von Stengel, LSE
Algorithms for Rank-1 Bimatrix Games
Zoom Link: https://zoom.us/j/241150956?pwd=clc4L1I5M3BYTXEvQnErOVBtRFI0UT09
Abstract: The rank of a bimatrix game is the matrix rank of the sum of the two payoff matrices. For a game of rank r, the set of its Nash equilibria is the intersection of a one-dimensional set of equilibria of parameterized games of rank r-1 with a hyperplane. We comprehensively analyze games of rank one. They are economically more interesting than zero-sum games (which have rank zero), but are nearly as easy to solve. We show the following results: (1) One equilibrium of a rank-1 game can be found in polynomial time by binary search. (2) All equilibria of a rank-1 game can be found by a pivoting method that follows a piecewise linear path. In contrast, such a path-following method finds only one equilibrium of a general bimatrix game. (3) The number of equilibria of a rank-1 game may be exponential. (4) We present a new rank-preserving homeomorphism between bimatrix games and their equilibrium correspondence. It is a variation of the Kohlberg-Mertens homeomorphism used for the concept of strategic stability of an equilibrium component.
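The rank condition in the abstract is easy to check numerically. Below is a minimal sketch (our illustration, not code from the talk; the matrices are invented examples) confirming that a zero-sum game has rank 0 and that payoff matrices summing to an outer product give a rank-1 game:

```python
import numpy as np

# Invented example matrices (not from the talk). The rank of a bimatrix
# game (A, B) is the matrix rank of A + B: zero-sum games (B = -A) have
# rank 0, and choosing B so that A + B = u v^T gives a rank-1 game.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
u = np.array([[1.0], [2.0]])
v = np.array([[1.0, 3.0]])
B = u @ v - A  # then A + B = u v^T, an outer product of rank one

rank_zero_sum = np.linalg.matrix_rank(A + (-A))  # B = -A
rank_one_game = np.linalg.matrix_rank(A + B)
print(rank_zero_sum, rank_one_game)  # 0 1
```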
6-Apr-2020 John Levy, The University of Glasgow
Uniformly Supported Approximate Equilibria in Families of Games
Zoom Link: https://zoom.us/j/556474896?pwd=cEhWT0J6ZzhMVHhjSGZCY2txYW5JZz09
Abstract: This paper considers uniformly bounded classes of non-zero-sum strategic-form games with compact continuous action spaces. The class of games is assumed to be defined via a semi-algebraic condition. We show that for each ε>0, the support size required for ε-equilibrium can be taken to be uniform over the entire class. As a corollary, the value in the zero-sum case, as a function of a single variable, is well behaved in the limit. More generally, the result only requires that the collection of payoff functions considered, as a function of the other players' actions, have finite Vapnik–Chervonenkis dimension.
21-Jan-2020 Ehud Kalai, Northwestern University
Viable Nash Equilibria: Formation and Sustainability
Abstract: Viable Nash equilibria are observed in functioning social systems and lab experiments, whereas unviable Nash equilibria are contrived and have the appearance of useless theory. I study simple dual indices to assess equilibrium viability. Surprisingly, despite their simplicity these indices:
(1) uncover new equilibrium properties and stability considerations that escape standard game theory refinements, and
(2) provide insights into the viability of Nash equilibria in functioning social systems and in laboratory experiments.
7-Jan-2020 Bar Light
Mean field equilibrium: uniqueness, existence and comparative statics
Abstract: The standard solution concept for stochastic games is Markov perfect equilibrium (MPE); however, its computation becomes intractable as the number of players increases. Instead, we consider mean field equilibrium (MFE), which has been popularized in the recent literature. MFE takes advantage of averaging effects in models with a large number of players. We make three main contributions. First, our main result provides conditions that ensure the uniqueness of an MFE. We believe this uniqueness result is the first of its kind in the class of models we study. Second, we generalize previous MFE existence results. Third, we provide general comparative statics results. We apply our results to dynamic oligopoly models and to heterogeneous agent macroeconomic models commonly used in previous work in economics and operations.
31-Dec-2019 Adam Lampert, Arizona State University
Should multiple agents work together or split their job to control populations of harmful species?
Abstract: The management of harmful species, including invasive species, pests, parasites, and diseases, is a major global challenge. Harmful species cause severe damage to ecosystems, biodiversity, agriculture, and human health. In particular, the management of harmful species often requires cooperation among multiple agents, such as landowners, agencies, and countries. Each agent may have incentives to contribute less to the treatment, leaving more work for other agents, which may result in inefficient treatment. A central question is, therefore, how should a policymaker allocate the treatment duties among the agents? Specifically, should the agents work together in the same area, or should each agent work only in a smaller area designated just for her/him? I consider a dynamic game-theoretic model, where a Markovian Nash equilibrium corresponds to a possible set of contributions that the agents could adopt over time. In turn, the allocation by the policymaker determines which of the Nash equilibria could be adopted, which allows us to compare the outcome of various allocations. My results show that fewer agents abate the harmful species population faster, but multiple agents can better control the population to keep its density lower. This is proven in a general theorem and demonstrated numerically for two case studies. Therefore, following an outbreak, the better policy would be to split and assign one or a few agents to treat the species in a given location; but if the control/containment of the harmful species population at a low density is needed, the agents should work together in all the locations.
24-Dec-2019 Marco Scarsini, Luiss University
The Buck-Passing Game
Abstract: We consider situations where a finite number of agents want to transfer the responsibility of doing a job (the buck) to their neighbors in a social network. This can be seen as network variation of the public good model. The goal of each agent is to see the buck coming back as rarely as possible. We frame this situation as a game where players are the vertices of a directed graph and the strategy space of each player is the set of her out-neighbors. Nature assigns the buck to a random player according to a given initial distribution. Each player pays a cost that corresponds to the asymptotic expected frequency of times that she gets the buck. We consider two versions of the game. In the deterministic one each player chooses one of her out-neighbors once and for all at the beginning of the game. In the stochastic version a player chooses a probability distribution that determines which of her out-neighbors will be chosen when she passes the buck. We show that in both cases the game admits a generalized ordinal potential whose minimizers provide equilibria in pure strategies, even when the strategy set of each player is uncountable. We also show the existence of equilibria that are prior-free, in the sense that they do not depend on the initial distribution used to initially assign the buck. We provide different characterizations for the potential, we analyze fairness of equilibria, and, finally, we discuss a buck holding variant in which players want to maximize the frequency of times they hold the buck. As an application of the latter we briefly discuss the PageRank game.
A joint work with Roberto Cominetti and Matteo Quattropani
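In the stochastic version, a strategy profile is a row-stochastic matrix supported on the graph's out-neighbourhoods, and each player's cost is her weight in the stationary distribution of the resulting Markov chain. A toy sketch (our own, with an invented 3-player graph and strategies):

```python
import numpy as np

# Invented 3-player example: row i is player i's distribution over her
# out-neighbours; the buck then follows the Markov chain with matrix P,
# and player i's cost is the stationary probability pi[i].
P = np.array([
    [0.0, 0.5, 0.5],   # player 0 passes to 1 or 2 with equal probability
    [1.0, 0.0, 0.0],   # player 1 always passes back to 0
    [0.5, 0.5, 0.0],   # player 2 splits between 0 and 1
])

# Stationary distribution: the left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.argmax(np.real(evals))])
pi /= pi.sum()
print(pi)  # ≈ [0.444, 0.333, 0.222]: player 0 bears the largest cost
```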
17-Dec-2019 Robert Simon, London School of Economics
A Bayesian Game without ε-equilibria
Abstract: We present a three-player Bayesian game for which there are no ε-equilibria in Borel measurable strategies for small enough positive ε; however, there are non-measurable equilibria. The structure of the game employs a non-amenable semigroup action corresponding to the knowledge of the players. The equilibrium property is related to the proper colouring of graphs and the Borel chromatic number; but rather than keeping adjacent vertices coloured differently, there are algebraic conditions relating to the topology of the space and some ergodic operators.
A joint work with Grzegorz Tomkowicz
10-Dec-2019 Oren Dean, Technion
Incentive-Compatible Multi-Level Marketing
Abstract: In multi-level marketing, users are encouraged to recruit new members by offering prizes to those who recruited the most. As the campaign proceeds, new members become recruiters themselves, and a referral forest, which represents the relations between members and the members who recruited them, emerges. The challenge in multi-level marketing stems from the desire to reward a user not only for his direct recruits, but for the whole subtree which resulted from those recruits (i.e., his progeny). In this paper, we look at the problem of offering a scheme that distributes fixed-budget payments to the participants while being incentive-compatible (IC, strategy-proof). The goal is to find an IC scheme that awards as much of the budget as possible to members with high progeny. We offer a measure of the quality of a scheme that varies between 0 and 1. A quality of 1 means that the scheme awards all the budget to the best recruiters; a quality of 0 means that the scheme pays no one. We present two payment schemes, both IC with a quality of at least 1/3 in the worst case. The first, simpler scheme has the nice feature of distributing payments which are progeny-proportional. The downside of this scheme is that it is not budget-balanced. Our second, more involved scheme is budget-balanced but not progeny-proportional. We also prove that no IC scheme with positive quality can be both budget-balanced and progeny-proportional. Our payment schemes, when considered as IC selection mechanisms, give a significant improvement over the results of Babichenko et al. (WWW 2018).
A joint work with Yakov Babichenko and Moshe Tennenholtz
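The progeny notion in the abstract is simple to compute. A hedged sketch (ours, not the paper's schemes; the forest is a made-up example) that counts each member's progeny in a referral forest, the quantity a progeny-proportional payment would split the budget by:

```python
from collections import defaultdict

# Made-up referral forest: child -> recruiter. Member "a" recruited "b"
# and "c"; "b" recruited "d" and "e".
recruiter = {"b": "a", "c": "a", "d": "b", "e": "b"}

children = defaultdict(list)
for child, parent in recruiter.items():
    children[parent].append(child)

def progeny(member):
    """Size of the subtree recruited, directly or indirectly, by `member`."""
    return sum(1 + progeny(c) for c in children[member])

print({m: progeny(m) for m in "abcde"})
# {'a': 4, 'b': 2, 'c': 0, 'd': 0, 'e': 0}
```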
3-Dec-2019 Itai Arieli, Technion
Optimal Persuasion by Bi-Pooling Intervals
Abstract: TBD
A joint work with Yakov Babichenko and Rann Smorodinsky
26-Nov-2019 Aditya Aradhye, Maastricht University
Sender-receiver stopping games
Abstract: We introduce a model of sender-receiver stopping games, where the state of the world follows an i.i.d. process throughout the game. At each period, the sender observes the current state and sends a message to the receiver, suggesting either to stop or to continue. The receiver, seeing only the message but not the state, decides either to stop the game or to continue, which takes the game to the next period. The payoff to each player is a function of the state when the receiver quits, with higher states leading to better payoffs. The horizon of the game can be finite or infinite. We characterize the set of Perfect Bayesian Equilibria (PBE) of these games when the players are sufficiently patient; the payoffs are either undiscounted or discounted with a large discount factor. We also derive existence and uniqueness results for PBE. As we show, in all our results the central role is played by a specific strategy profile in which the sender uses a threshold strategy whereas the receiver blindly follows the recommendations of the sender. This implies that in PBE the sender plays the decisive role and, surprisingly, regardless of the payoff function of the receiver, the sender obtains the best possible payoff for himself. We also consider the extension in which there are multiple senders and just one receiver.
A joint work with János Flesch, Mathias Staudigl and Dries Vermeulen
11-Jun-2019 David Lagziel, Ben-Gurion University
04-Jun-2019 Xavier Venel, Paris School of Economics and the Centre d'Économie de la Sorbonne (CES), Université Paris 1
28-May-2019 Ziv Hellman, Bar-Ilan University
21-May-2019 Dominik Karos, Maastricht University
14-May-2019 Galit Ashkenazi-Golan, Tel-Aviv University
07-May-2019 Dima Shaiderman, Tel-Aviv University
30-Apr-2019 Alexander Nesterov, Higher School of Economics, St. Petersburg
26-Mar-2019 Yuval Heller, Bar-Ilan University
19-Mar-2019 Chang Zhao, Tel-Aviv University
12-Mar-2019 Ella Segev, Ben Gurion University of the Negev
05-Mar-2019 Fedor Sandomirskiy, Technion / HSE St. Petersburg
17-Jan-2019 Miquel Oliu Barton, Paris Dauphine University
09-Jan-2019 Steffen Eibelshäuser, Goethe University Frankfurt
08-Jan-2019 Rida Laraki, CNRS (Dauphine, France) and University of Liverpool (UK)
03-Jan-2019 János Flesch, Maastricht University
01-Jan-2019 Edi Karni, Johns Hopkins University
25-Dec-2018 Olga Gorelkina, University of Liverpool Management School
18-Dec-2018 Omer Edhan, University of Manchester
11-Dec-2018 Yakov Babichenko, Technion
04-Dec-2018 Igal Milchtaich, Bar Ilan University
27-Nov-2018 Omer Ben-Porat, The Technion
20-Nov-2018 Arieh Gavious, Ben Gurion University and Ono Academic College
13-Nov-2018 Matthias Blonski, Goethe University Frankfurt
06-Nov-2018 Catherine Rainer, Brest University
23-Oct-2018 Tao Wang, Tel-Aviv University
11-Jun-2019 David Lagziel, Ben-Gurion University
Abstract: We define and characterize the notion of strong robustness to incomplete information, whereby a Nash equilibrium in a game u is strongly robust if, given that each player knows that his payoffs are those in u with high probability, all Bayesian-Nash equilibria in the corresponding incomplete-information game are close – in terms of action distribution – to that equilibrium of u. We prove, under some continuity requirements on payoffs, that a Nash equilibrium is strongly robust if and only if it is the unique correlated equilibrium. We then review and extend the conditions that guarantee the existence of a unique correlated equilibrium in games with a continuum of actions. The existence of a strongly robust Nash equilibrium is thereby established for several domains of games, including those that arise in economic environments as diverse as Tullock contests, Cournot and Bertrand competitions, network games, patent races, voting problems, and location games.
Joint with Ezra Einy and Ori Haimanko
04-Jun-2019 Xavier Venel, Paris School of Economics and the Centre d'Économie de la Sorbonne (CES), Université Paris 1
Abstract: In a partially observable Markov decision process, the decision maker traditionally generates an infinite stream of stage payoffs. This stream of payoffs can be aggregated in many different ways: mean average, discounted average, inferior limit of the average. It has been known since Rosenberg et al. (1998) that the limit of the n-stage mean average as n goes to infinity and the limit of the discounted average as the decision maker becomes patient coincide. Renault and Venel (2017) showed that the result extends to sequences of non-decreasing evaluations by introducing a proper measure of the regularity of an evaluation. Simultaneously, Venel and Ziliotto (2016) proved that this value also coincides with the value under the liminf evaluation. In this article, we introduce a more general family of evaluations and an adapted measure of irregularity. We then prove that the result of Renault and Venel extends to this new family.
Joint with Bruno Ziliotto
28-May-2019 Ziv Hellman, Bar-Ilan University
Abstract: I will present a survey of several published and working papers from recent years (nearly all jointly composed with Yehuda John Levy). The focus will be on a common thread running through all of these papers, namely the surprising effectiveness of concepts from descriptive set theory in attaining results on measurable equilibrium existence in Bayesian games, stochastic games, graphical games, and more. In particular, the countable Borel equivalence relation hierarchy, which relates to matters of measurability, plays a central role here. A new result on the existence of measurable approximate Harsanyi (i.e., ex ante) equilibria in Bayesian games will also be presented.
21-May-2019 Dominik Karos, Maastricht University
Abstract: A set of agents decides whether or not to start or join a protest. Each individual is endowed with a threshold so that she will participate if and only if the overall number of activists exceeds this threshold. Chwe (1999) provides a static model in which individuals do not know all thresholds; instead, agents are connected by an undirected graph and know the thresholds of their neighbours. The present paper provides a dynamic version of this model: at the beginning of each period, all agents observe the past behaviour of all their neighbours. They use this information to update their beliefs about the number of (potentially unobserved) active agents before they revise their decision. The model leads to two key observations: (i) denser networks lead to more active agents, and (ii) an increase of thresholds might lead to fewer or more active agents in the long run. The second, somewhat surprising, observation originates from the two-fold role of thresholds. An agent with a high threshold will become active only very late (if at all); but once she is active, all her neighbours know that her high threshold has been crossed, that is, that the number of active agents must be high even if the latter cannot be observed. In a variation of the model, we allow random agents to coordinate their behaviour. Then an increase of thresholds has the effect that small protests will be observed less frequently, while the probability that a protest (once it has started) turns into a full-grown revolution increases. The findings are illustrated using global data on mass protests, revolutions, and the political terror scale in a time window from 1976 to 2014.
14-May-2019 Galit Ashkenazi-Golan, Tel-Aviv University
Abstract: We consider value functions of Markov decision processes with constraints, as a function of the discount factor. We characterize the set of all functions that can be obtained in this way: the set of constrained Markov decision process value functions is the set of all continuous piecewise-rational functions that satisfy a condition on the behavior of the rational pieces at intersection points.
Joint with Eilon Solan.
07-May-2019 Dima Shaiderman, Tel-Aviv University
Abstract: Two agents with a common prior on the possible states of the world participate in a process of information transmission, consisting of sharing posterior probabilities of an event of interest. Aumann's Agreement Theorem implies that such a process must end with both players having the same posterior probability. We show that the sequence of posteriors obtained along this process cannot vary too much, and provide an upper bound for its variation.
30-Apr-2019 Alexander Nesterov, Higher School of Economics, St. Petersburg
Abstract: We study ex-post fairness in the object allocation problem where objects are valuable and commonly owned. A matching is fair from the individual perspective if it has only inevitable envy towards agents who received the most preferred objects (a minimal envy matching). A matching is fair from the social perspective if it is supported by a majority against any other matching (a popular matching). Surprisingly, the two perspectives give the same outcome: when a popular matching exists, it is equivalent to a minimal envy matching.
We show the equivalence between global and local popularity: a matching is popular if and only if there is no group of up to 3 agents that decides, by majority, to exchange their objects, keeping the remaining matching fixed. We show algorithmically that an arbitrary matching is path-connected to a popular matching, where along the path groups of up to 3 agents exchange their objects by majority. A market where random groups exchange objects by majority converges to a popular matching, given that such a matching exists.
When a popular matching might not exist, we define a most popular matching as a matching that is popular among the largest subset of agents. We show that each minimal envy matching is a most popular matching and propose a polynomial-time algorithm to find one.
Joint with Aleksei Y. Kondratev.
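Popularity is straightforward to verify by brute force on small instances. A toy check (ours, not the paper's algorithm; the preferences are invented) that declares a matching popular if no rival matching beats it by a strict majority:

```python
from itertools import permutations

# Invented preferences: agent -> objects listed from best to worst.
prefs = {0: [0, 1, 2],
         1: [0, 2, 1],
         2: [1, 0, 2]}
agents = list(prefs)

def margin(M, N):
    """Agents strictly preferring M over N, minus those preferring N over M."""
    s = 0
    for a in agents:
        rank = prefs[a].index
        s += (rank(M[a]) < rank(N[a])) - (rank(N[a]) < rank(M[a]))
    return s

def is_popular(M):
    # M is popular iff no rival matching wins a majority comparison.
    return all(margin(M, dict(zip(agents, objs))) >= 0
               for objs in permutations(range(3)))

print(is_popular({0: 0, 1: 2, 2: 1}))  # True: no rival beats it
```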
26-Mar-2019 Yuval Heller, Bar-Ilan University
Abstract: We study coordination games with pre-play communication in which agents have private preferences over the feasible coordinated outcomes. We present a novel intuitive equilibrium strategy with the following properties: (1) each agent reports his preferred outcome (and nothing else), (2) agents never miscoordinate, (3) if the agents have the same preferred outcome, then they coordinate on this outcome, and (4) otherwise, there is a “fallback norm” that determines the coordinated outcome. We show that this behavior is essentially the unique renegotiation-proof strategy, and that it satisfies appealing properties: independence of the distribution of private preferences, Pareto optimality, high ex-ante expected payoff, and evolutionary stability.
Joint with Christoph Kuzmics.
19-Mar-2019 Chang Zhao, Tel-Aviv University
Abstract: We study finitely repeated inspection games with one inspector and two agents, where the inspector has commitment power, and we compare two monitoring structures. Under public monitoring, both agents observe the past actions of the inspector; under private monitoring, each agent observes only whether he himself was inspected or not. We show that private monitoring is at least as good as public monitoring from the perspective of the inspector. Moreover, private monitoring is strictly better when the game lasts for a relatively short time, or when the agents are relatively patient.
Joint with Eilon Solan.
12-Mar-2019 Ella Segev, Ben Gurion University of the Negev
Abstract: Two main challenges that service providers face when managing service systems are how to generate value and how to control congestion at the same time. To this end, classical queueing models suggest that service providers charge a per-use fee and invest in capacity to speed up the service. However, in discretionary services, where customers value time in service and choose how long to stay, per-use fees result in suboptimal performance and speeding up does not apply. We consider two alternative mechanisms: price rates, which are duration-based fees, and time limits. We derive revenue and welfare implications and highlight operational consequences of these mechanisms. We employ a queueing model of a service provider and rational consumers who are heterogeneous in their requirements for service durations, though they may be flexible in deviating from their time needs. Customers incur disutility from waiting and choose whether to join and how long to stay in service. Service providers can charge per-use fees or price rates and may also decide to limit customers' time in service. We show that price rates and time limits benefit providers in different ways. Price rates are effective because of their superiority in extracting rents from heterogeneous customers, while time limits successfully regulate congestion. Revenue-maximizing firms benefit from implementing both. Social planners who seek to maximize consumer welfare, however, should focus on regulating congestion and should therefore offer the service for free but implement time limits.
05-Mar-2019 Fedor Sandomirskiy, Technion / HSE St. Petersburg
Abstract: A fixed set of n agents share a random object: the distribution D of the profile of utilities is IID across periods, but arbitrary across agents. Our online division rules learn the realized utility profile, and only know from D the individual expected utilities; they have no record of past realized utilities and do not know how many new objects may appear.
A rule is fair if each agent, ex ante, expects at least 1/n-th of his utility for the object if it is a good, and at most 1/n-th of his disutility for it if it is a bad. Among fair rules to divide goods (bads), we uncover those collecting the largest (lowest) total expected (dis)utility. There is exactly one fair rule for bads that is optimal in this sense; for goods, by contrast, the set of optimal fair rules is one-dimensional.
Both in the worst case and in the asymptotic sense, our optimal rules perform much better than the natural Proportional rule (for goods or for bads), and not much worse than the optimal fair offline rule, that knows the full distribution D in addition to realized utilities.
Joint work with Anna Bogomolnaia (Glasgow University / HSE St. Petersburg) and Hervé Moulin (Glasgow University / HSE St. Petersburg).
17-Jan-2019 Miquel Oliu Barton, Paris Dauphine University
Abstract: Stochastic games are two-player repeated games in which a state variable follows a Markov chain controlled by both players. The existence of the discounted values goes back to Shapley (1953), their convergence as the discount factor vanishes is due to Bewley and Kohlberg (1976), while Mertens and Neyman (1981) proved that the limit is in fact the uniform value. The problem of characterizing the limit has remained open since then. In this paper, we provide a characterization.
09-Jan-2019 Steffen Eibelshäuser, Goethe University Frankfurt
Abstract: We formally define Markov quantal response equilibrium (QRE) and prove existence for all finite dynamic stochastic games. The special case of logit Markov QRE constitutes a mapping from precision parameter λ to sets of logit Markov QRE. The limiting points of this correspondence are shown to be Markov perfect equilibria. Furthermore, the logit Markov QRE correspondence can be given a homotopy interpretation. We prove that for all games, this homotopy contains a branch connecting the unique solution at λ = 0 to a unique limiting Markov perfect equilibrium. This result can be leveraged both for the computation of Markov perfect equilibria, and also as a selection criterion.
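As a minimal illustration of the λ-parameterized logit correspondence (our sketch, restricted to a one-shot 2x2 coordination game rather than a stochastic game): each player mixes in proportion to exp(λ · expected payoff), so λ = 0 gives uniform play while large λ approaches a Nash equilibrium.

```python
import numpy as np

# Invented 2x2 coordination game: both players want to match actions,
# with action 0 paying 2 and action 1 paying 1.
A = np.array([[2.0, 0.0],
              [0.0, 1.0]])   # row player's payoffs
B = A.T                      # column player's payoffs

def logit_qre(lam, iters=500):
    """Fixed-point iteration on logit responses (converges here; not guaranteed in general)."""
    p = np.array([0.5, 0.5])  # row player's mixed strategy
    q = np.array([0.5, 0.5])  # column player's mixed strategy
    for _ in range(iters):
        wp = np.exp(lam * (A @ q));   p = wp / wp.sum()
        wq = np.exp(lam * (B.T @ p)); q = wq / wq.sum()
    return p, q

print(logit_qre(0.0)[0])   # λ = 0: uniform [0.5 0.5]
print(logit_qre(20.0)[0])  # large λ: nearly pure play of action 0
```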
08-Jan-2019 Rida Laraki, CNRS (Dauphine, France) and University of Liverpool (UK)
Abstract: We consider 2-player zero-sum stochastic games where each player controls his own state variable living in a compact metric space. The terminology comes from gambling problems, where the state of a player represents his wealth in a casino. Under natural assumptions (such as continuous running payoff and non-expansive transitions), we consider for each discount factor the value vλ of the λ-discounted stochastic game and investigate its limit when λ goes to 0 (players become more and more patient). We show that under a new acyclicity condition, the limit exists and is characterized as the unique solution of a system of functional equations: the limit is the unique continuous excessive and depressive function such that each player, if his opponent does not move, can reach the zone where the current payoff is at least as good as the limit value, without degrading the limit value. The approach generalizes and provides a new viewpoint on the Mertens-Zamir system coming from the study of zero-sum repeated games with lack of information on both sides. A counterexample shows that under a slightly weaker notion of acyclicity, convergence of (vλ) may fail.
Paper link: https://bfi.uchicago.edu/sites/default/files/research/WP_2018-34.pdf
Joint work with Jérôme Renault (TSE, France)
03-Jan-2019 János Flesch, Maastricht University
Abstract: Perfect information games are known to admit a subgame-perfect epsilon-equilibrium, for every positive error-term epsilon, under the condition that every player's payoff function is bounded and continuous everywhere. In this paper, we address the question of on which subsets of plays the condition of payoff continuity can be dropped without losing existence. Our main result is that if payoff continuity only fails on a sigma-discrete set (a countable union of discrete sets) of plays, then a subgame-perfect epsilon-equilibrium, for every epsilon > 0, still exists. For a partial converse, given any subset of plays that is not sigma-discrete, we construct a game in which the payoff functions are continuous outside this set but the game admits no subgame-perfect epsilon-equilibrium for small epsilon > 0.
01-Jan-2019 Edi Karni, Johns Hopkins University
Abstract: This is a study of the nature and prevalence of persistent fraud in competitive markets for credence-quality goods. We model the market as a game in which the players are customers and suppliers, and analyze their equilibrium behavior. Customers' characteristics, namely search cost and discount rate, are private information. Customers do not possess the expertise necessary to assess the service they need, either ex ante or ex post. We show that there exists no fraud-free equilibrium in markets for credence-quality goods and that fraud is a prevalent and persistent equilibrium phenomenon.
25-Dec-2018 Olga Gorelkina, University of Liverpool Management School
Abstract: This paper studies collusion via information sharing in the context of auctions. The model of collusion via information sharing builds on Aumann’s (1976) description of knowledge. Robustness of auction mechanisms to collusion via information sharing is defined as the impossibility of an agreement to collude. A cartel can agree to collude on a contract if it is common knowledge within that cartel that the contract is incentive compatible and individually rational. Robust mechanisms are characterized in a number of settings where some, all, or no bidders are bound by limited liability. Finally, the characterization is used in a simple IPV setting to design a mechanism that is both optimal and robust to collusion.
18-Dec-2018 Omer Edhan, University of Manchester
Abstract: We offer a general and comprehensive way to analyse exchange markets with indivisible goods and general concave utilities. Each market is associated with an Arrow-Debreu market in which agents consume 'mixtures' of bundles. With unimodular 'demand type', every market with indivisible goods and its associated Arrow-Debreu market have the same set of equilibrium prices. An immediate consequence is a "Unimodularity Theorem" for the existence of equilibrium - a necessary and sufficient condition on price elasticities of demand (or 'demand type') assuring the existence of equilibrium in all markets for any relevant market supply. Another immediate consequence is welfare theory - with unimodular 'demand types', we develop welfare theory for markets with indivisible goods and express it in terms of the associated Arrow-Debreu market. If time allows, the incorporation of budget constraints and reduced form frictions will also be discussed.
11-Dec-2018 Yakov Babichenko, Technion
Abstract: We show that computation of a Nash equilibrium in potential games requires high (i.e., polynomial in the game size) communication between the players, when players hold their own utility functions as private input. This result implies the non-existence of an uncoupled dynamic that converges reasonably fast to an equilibrium in potential games.
In the talk I will discuss this communication hardness result for pure Nash equilibria and for Nash equilibria (possibly mixed).
Based on joint works with Shahar Dobzinski, Noam Nisan, and Aviad Rubinstein.
4-Dec-2018 Igal Milchtaich, Bar Ilan University
Abstract: Polyequilibrium (Milchtaich, Games and Economic Behavior, 2018) is a set-valued generalization of Nash equilibrium that differs from it in specifying strategies that players do not choose, such that for each excluded strategy there is a non-excluded one that responds at least as well as the first strategy does to every profile of non-excluded strategies. This paper introduces a corresponding generalization of correlated equilibrium. Correlated polyequilibrium is defined as a polyequilibrium in an “augmented” game where players first receive private signals from some correlation device, or mechanism. The result is a set of distributions of strategy profiles, which may not include any correlated equilibrium distribution. Thus, some results that do not hold in any correlated equilibrium are obtained in a correlated polyequilibrium.
27-Nov-2018 Omer Ben-Porat, The Technion
Abstract: Recommendation systems are extremely popular tools for matching users and content. However, when content providers are strategic, the basic principle of matching users to the closest content, where both users and content items are modeled as points in some semantic space, may yield low social welfare: providers optimize their offered content so as to be recommended to as many users as possible. Motivated by modern applications, we adopt the widely studied framework of facility location games to study recommendation systems with strategic content providers. Our conceptual contribution is the introduction of a mediator into facility location models, in pursuit of better social welfare. We aim at designing mediators that a) induce a game with high social welfare in equilibrium, and b) intervene as little as possible.
Based on joint work with Gregory Goren, Itay Rosenberg and Moshe Tennenholtz.
Short Bio: Omer Ben-Porat is a fourth-year PhD student (direct track) in the IE&M faculty of the Technion, advised by Prof. Moshe Tennenholtz. Omer's research lies at the intersection of Game Theory and Machine Learning, with applications in E-Commerce.
20-Nov-2018 Arieh Gavious, Ben Gurion University and Ono Academic College
Abstract: Developing effective screening processes at border crossings, in order to identify violators within large groups of mostly innocent people, is an important and difficult task, as it is neither possible nor effective to screen every passenger with the intensity required to detect a violator. Profiling has been applied for several decades as a tool for this task, but there is still no proof of its effectiveness. Our main motivation is to study whether profiling is indeed helpful, and if so, how it should be used. We therefore consider an interaction that takes place at some crowded border crossing, where passengers are affiliated with different groups. We offer a sequential game model with three players: a defender, who acts first and decides on a screening process; an attacker, who acts second and may recruit a passenger as a violator; and the recruited violator, who acts last and may choose not to violate, since he has his own private motivation to violate. We study different variants of the base game, which differ in the choice of screening policy (announced profiling, unannounced profiling, or no profiling), the attacker's knowledge of the screening policy, the defender's and the attacker's received signals regarding the violators' private motivations, the recruiting costs of passengers of different groups, and how to manage the trade-off between security and congestion. These variants help us understand the extremely challenging social and strategic questions surrounding the controversy over the need for profiling.
13-Nov-2018 Matthias Blonski, Goethe University Frankfurt
Abstract: An aggregated form game is an alternative, coarser description of a game in which players' preferences are specified in a statistical way. In this article I characterize equilibria of aggregated form games in terms of compositions. A composition describes aggregated behavior and tells us how many players of which type play what. Kalai (2004) has shown that equilibria of extensive form incomplete information games are approximately extensively robust and ex-post Nash, and therefore are Nash equilibria of the corresponding normal form game, if the game is large and players' preferences are semi-anonymous. The present characterization result is therefore all the more relevant, since it applies to this more complicated but more general and realistic class of large games. The nature of the result is illustrated by a non-existence example of a migration equilibrium.
6-Nov-2018 Catherine Rainer, Brest University
Abstract: We are interested in the following model: A principal privately observes the evolution of a continuous-time Markov chain and sends messages over time to an agent.
The current payoff of the agent depends only on his action and on the current state of the process, while the payoff of the principal depends only on the action of the agent.
The analogous discrete-time model was studied by Renault-Solan-Vieille (2017) (see also Ely (2017)). They focus on the problem in small dimensions and show that there is no simple universally optimal strategy for the principal. The same holds in continuous time. However, it is possible to characterize the optimal payoff of the principal as the solution of a PDE, as well as the solution of a control problem over pure jump Markov processes. A verification theorem allows us to solve several examples explicitly.
The talk is based on a joint work with Fabien Gensbittel from ESC Toulouse.
23-Oct-2018 Tao Wang, Tel-Aviv University
Abstract: In a dynamic entry problem a Bayesian decision maker (DM) faces an unknown payoff-relevant state of nature. At every period he decides whether to enter and collect a payoff, or wait. In case he waits, the DM expects to receive new partial information about the prevailing state. This information is given by a noisy channel whose stochastic structure is known. We show that the optimal entry strategy is characterized by a threshold: enter if and only if the probability of the favorable state exceeds a pre-specified level. Furthermore, it is shown that the optimal entry threshold gets higher as the signal received gets more precise. In this sense, decision makers with higher-quality information sources tend to be more patient.
Joint work with Ehud Lehrer.
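To illustrate the threshold logic, here is a minimal sketch, not the paper's model: Bayesian belief updating from a binary signal of accuracy q, together with a threshold entry rule. The prior, accuracy, and threshold values are all illustrative.

```python
# Toy sketch of belief updating and a threshold entry rule.
# All parameter values are illustrative, not from the paper.

def update(p, signal_high, q):
    """Posterior that the state is favorable after one signal of accuracy q."""
    like_high = q if signal_high else 1 - q       # P(signal | favorable)
    like_low = (1 - q) if signal_high else q      # P(signal | unfavorable)
    return p * like_high / (p * like_high + (1 - p) * like_low)

def enter(p, threshold):
    """Threshold rule: enter iff the belief in the favorable state exceeds it."""
    return p > threshold

p = 0.5
p_after_high = update(p, True, 0.8)   # a favorable signal raises the belief to 0.8
p_after_low = update(p, False, 0.8)   # an unfavorable one lowers it to 0.2
```

With accuracy q = 0.8, a single favorable signal moves a flat prior of 0.5 up to 0.8; a more precise signal would move it further, which is the sense in which better information interacts with the entry threshold.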
(June-05 2018) Itay Kavaler
(May-29 2018) David Lagziel
(May-15 2018) Galit Ashkenazi-Golan
(April-24 2018) Chang Zhao
(March-20 2018) Ron Peretz
(March-13 2018) Eilon Solan
(March-06 2018) Abraham Neyman
(January-16 2018) Mathias Staudigl
(January-09 2018) Sergiu Hart
(December-26 2017) Sophie Bade
(December-19 2017) Dima Shaiderman
(December-12 2017) Yevgeny Tsodikovich
(November-28 2017) Kristoffer Arnsfelt Hansen
(November-21 2017) Oren Yuval
(November-07 2017) Avishay Weinbaum
(October-24 2017) Shakhar Smorodinsky
(June-05 2018) Itay Kavaler, Technion
Title: On Comparison of Experts - Finite Time
Abstract: A policy maker faces a sequence of unknown outcomes. At each stage two (self-proclaimed) experts provide probabilistic forecasts on the outcome in the next stage. A comparison test is a protocol for the policy maker to (eventually) decide which of the two experts is better informed. The protocol takes as input the sequence of pairs of forecasts and actual realizations and (weakly) ranks the two experts. We propose two natural properties that such a comparison test must adhere to and show that these essentially uniquely determine the comparison test. This test is a function of the derivative of the induced pair of measures at the realization.
Joint work with Rann Smorodinsky
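For intuition, a finite-time comparison in the spirit of the abstract's "derivative of the induced pair of measures at the realization" amounts to a likelihood-ratio test. The sketch below is only an illustration of that flavor, not the paper's exact protocol; the forecasts, outcomes, and tie-breaking are illustrative.

```python
import math

# Hedged sketch: rank two forecasters by the likelihood each assigned
# to the realized binary sequence (a finite-time likelihood ratio).

def log_likelihood(forecasts, outcomes):
    """Sum of log-probabilities the forecaster assigned to what happened.
    forecasts[t] is the announced probability that outcomes[t] == 1."""
    total = 0.0
    for p, x in zip(forecasts, outcomes):
        total += math.log(p if x == 1 else 1 - p)
    return total

def compare(forecasts_a, forecasts_b, outcomes):
    """Return 'A', 'B', or 'tie' according to the likelihood ratio."""
    la = log_likelihood(forecasts_a, outcomes)
    lb = log_likelihood(forecasts_b, outcomes)
    if la > lb:
        return "A"
    if lb > la:
        return "B"
    return "tie"

outcomes = [1, 1, 0, 1]
sharp = [0.9, 0.8, 0.2, 0.9]   # confident forecasts that track the outcomes
vague = [0.5, 0.5, 0.5, 0.5]   # uninformative forecasts
```

On this toy data the confident, well-aligned forecaster is ranked above the uninformative one.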
(May-29 2018) David Lagziel, Ben-Gurion University
Title: A Failure of Screening & the Rise of Manipulation
Abstract: This paper offers two related insights into the problem of competitive and non-competitive screening. The first concerns a decision maker who screens elements, from a general set, based on noisy unbiased assessments. We show that stricter screening not only reduces the number of accepted elements, but may also reduce their expected value, leading to a screening failure. The second insight extends the first to competitive scenarios where valuations and noise are strategic. We prove that competitors tend toward manipulation, disregarding extensive costs, in their drive for victory. Various applications are presented, including credit ratings, sports, the news media, political races, trade, and auctions.
Joint work with Ehud Lehrer.
(May-15 2018) Galit Ashkenazi-Golan, Tel Aviv University
Title: Markov Games with a Single Transition and Incomplete Information on One Side
Abstract: We provide a finite-stage algorithm for calculating the value and an optimal strategy of the informed player in Markov games with incomplete information on one side with two states, where one state is absorbing.
Joint work with Eilon Solan and Catherine Rainer.
(April-24 2018) Chang Zhao, Tel Aviv University
Title: Can Public Monitoring Help the Regulator?
Abstract: We analyze a discounted repeated inspection game with two agents and one regulator. Both agents may profit by violating certain rules, while the regulator can inspect at most one agent in each period, inflicting a punishment on an agent who is caught violating the rules. Suppose that the regulator, whose goal is to minimize the discounted number of violations, has no commitment power, and he decides whether to make the past inspection outcomes public or not. We show that even though each agent's stage payoff depends only on his action and whether he is inspected or not at that stage, the public disclosure of inspection histories better disciplines the agents and it significantly benefits the regulator.
Joint work with Eilon Solan.
(March-20 2018) Ron Peretz, Bar Ilan University
Title: The Rate of Innovation Diffusion in Social Networks
Abstract: New technologies often take a long time to gain general acceptance after they have been invented. This phenomenon is called "diffusion of innovation." It has been studied in various social science disciplines since at least the mid-20th century. Previous theoretical investigations used martingales to study the rate at which innovations diffuse. We apply a different approach, known as the coalescing duality of linear voter models. Our approach allows us to derive uniform upper bounds on the rate of innovation diffusion that were unachievable with the previous martingale techniques.
As a teaser I will present a solution to an old puzzle: n individuals hold n different opinions. Each day a random individual approaches another random individual, and the former convinces the latter of his opinion. What is the expected time until they all hold the same opinion? A solution can be found here: A Colorful Urn. The solution is based on the coalescing duality approach, as will be shown.
Joint work with Itai Arieli, Yakov Babichenko, and H. Peyton Young.
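The teaser puzzle is easy to explore by simulation. A minimal Monte Carlo sketch (the population size and number of runs are arbitrary choices for illustration):

```python
import random

# Monte Carlo sketch of the puzzle: n individuals hold n distinct opinions;
# each day a uniformly random individual convinces another uniformly random
# individual of his opinion. Simulate until everyone agrees.
# (Illustrative only; the talk derives the expectation analytically.)

def days_to_consensus(n, rng):
    opinions = list(range(n))
    days = 0
    while len(set(opinions)) > 1:
        speaker, listener = rng.sample(range(n), 2)  # two distinct individuals
        opinions[listener] = opinions[speaker]
        days += 1
    return days

rng = random.Random(0)
avg = sum(days_to_consensus(8, rng) for _ in range(200)) / 200
```

Since each convincing event removes at most one distinct opinion, a run with n individuals always takes at least n - 1 days; the simulated average can then be compared with the closed-form answer from the talk.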
(March-13 2018) Eilon Solan, Tel Aviv University
Title: Sunspot Equilibrium in General Quitting Games
Abstract: We prove that general quitting games, which are quitting games in which each player may have more than one continue action, admit a sunspot ε-equilibrium, for every ε > 0.
The proof that will be provided in the talk concerns the case in which one player has two continue actions and the other players have a single continue action, and it uses Browder's theorem.
Joint work with Omri Solan.
(March-06 2018) Abraham Neyman, Hebrew University
Title: The Big Match with a Clock and a bit of Memory
Abstract: The Big Match is a multi-stage two-player game. In each stage Player 1 hides one or two pebbles in his hand, and his opponent has to guess that number; Player 1 loses a point if Player 2 is correct, and otherwise he wins a point. As soon as Player 1 hides one pebble, the players cannot change their choices in any future stage.
Blackwell and Ferguson (1968) give an ε-optimal strategy for Player 1 that hides, in each stage, one pebble with a probability that depends on the entire past history. Any strategy that depends just on the clock or on a finite memory is worthless. The long-standing natural open problem has been whether every strategy that depends just on the clock and a finite memory is worthless.
The present paper resolves this problem: there is a strategy that depends just on the clock and a finite memory and is ε-optimal. In fact, we show that just two states of memory are sufficient.
Joint work with Kristoffer Arnsfelt Hansen and Rasmus Ibsen-Jensen.
(January-16 2018) Mathias Staudigl, Maastricht University
Title: Stochastic Mirror Descent dynamics
Abstract: In view of solving convex optimization problems with noisy gradient input, we analyze the asymptotic behavior of gradient-like flows under stochastic disturbances. Specifically, we focus on the widely studied class of Mirror Descent schemes for convex programs with compact feasible regions, and we examine the dynamics' convergence and concentration properties in the presence of noise. In the vanishing noise limit, we show that the dynamics converge to the solution set of the underlying problem a.s. Otherwise, when the noise is persistent, we show that the dynamics are concentrated around interior solutions in the long run, and that they converge to boundary solutions that are sufficiently "sharp". Finally, we show that a suitably rectified variant of the method converges irrespective of the magnitude of the noise (or the structure of the underlying convex program), and we derive an explicit estimate for its rate of convergence. Extensions to monotone variational inequalities and non-convex problems are discussed.
Papers related to the talk:
· P. Mertikopoulos and M. Staudigl: On the Convergence of Gradient-like Flows with Noisy Gradient Input: https://arxiv.org/abs/1611.06730 (forthcoming SIOPT)
· P. Mertikopoulos and M. Staudigl: Stochastic Mirror Descent Dynamics and their convergence in monotone Variational Inequalities: https://arxiv.org/abs/1710.01551 (Submitted)
· P. Mertikopoulos and M. Staudigl: Convergence to Nash equilibrium in continuous games with noisy first-order feedback (http://mescal.imag.fr/membres/panayotis.mertikopoulos/files/ContinuousGames-CDC.pdf) CDC '17: Proceedings of the 56th IEEE Annual Conference on Decision and Control.
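For readers who want to experiment, here is a minimal sketch of entropic (stochastic) mirror descent on the probability simplex. The quadratic objective, noise level, and step-size schedule are illustrative choices, not taken from the papers above.

```python
import math
import random

# Hedged sketch: entropic stochastic mirror descent on the simplex,
# minimizing f(x) = ||x - x_star||^2 from noisy gradient observations.
# All parameters are illustrative.

def smd_on_simplex(x_star, steps=500, sigma=0.01, seed=0):
    rng = random.Random(seed)
    d = len(x_star)
    x = [1.0 / d] * d                 # start at the barycenter
    avg = [0.0] * d                   # ergodic average of the iterates
    for t in range(1, steps + 1):
        grad = [2 * (x[i] - x_star[i]) + rng.gauss(0, sigma) for i in range(d)]
        eta = 0.5 / math.sqrt(t)      # decreasing step size
        w = [x[i] * math.exp(-eta * grad[i]) for i in range(d)]  # mirror step
        s = sum(w)
        x = [wi / s for wi in w]      # entropic projection back to the simplex
        avg = [(avg[i] * (t - 1) + x[i]) / t for i in range(d)]
    return avg

x_star = [0.7, 0.2, 0.1]
x_avg = smd_on_simplex(x_star)
```

The multiplicative-weights update followed by normalization is exactly the mirror step induced by the entropy regularizer; the averaged iterate ends up closer to the target than the uniform starting point despite the gradient noise.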
(January-09 2018) Sergiu Hart, Hebrew University
Title: Evidence Games: Truth and Commitment
Abstract: An "evidence game" is a strategic disclosure game in which an informed agent who has some pieces of verifiable evidence decides which ones to disclose to an uninformed principal who chooses a reward. The agent, regardless of his information, prefers the reward to be as high as possible. We compare the setup in which the principal chooses the reward after the evidence is disclosed to the mechanism design setup where he can commit in advance to a reward policy, and show that under natural conditions related to the evidence structure and the inherent prominence of truth, the two setups yield the same outcome.
Joint work with Ilan Kremer and Motty Perry
(December-26 2017) Sophie Bade, Royal Holloway College, University of London
Title: Gibbard-Satterthwaite Success Stories and Obvious Strategyproofness
Abstract: The Gibbard-Satterthwaite Impossibility Theorem (Gibbard, 1973; Satterthwaite, 1975) holds that dictatorship is the only Pareto optimal and strategyproof social choice function on the full domain of preferences. Much of the work in mechanism design aims at getting around this impossibility theorem. Three grand success stories stand out. On the domains of single peaked preferences, of house matching, and of quasilinear preferences, there are appealing Pareto optimal and strategyproof social choice functions. We investigate whether these success stories are robust to strengthening strategyproofness to obvious strategyproofness, recently introduced by Li (2015). A social choice function is obviously strategyproof (OSP) implementable if even cognitively limited agents can recognize their strategies as weakly dominant.
For single peaked preferences, we characterize the class of OSP-implementable and unanimous social choice functions as dictatorships with safeguards against extremism — mechanisms (which turn out to also be Pareto optimal) in which the dictator can choose the outcome, but other agents may prevent the dictator from choosing an outcome that is too extreme. Median voting is consequently not OSP-implementable. Indeed, the only OSP-implementable quantile rules choose either the minimal or the maximal ideal point. For house matching, we characterize the class of OSP-implementable and Pareto optimal matching rules as sequential barter with lurkers — a significant generalization over bossy variants of bipolar serially dictatorial rules. While Li (2015) shows that second-price auctions are OSP-implementable when only one good is sold, we show that this positive result does not extend to the case of multiple goods. Even when all agents’ preferences over goods are quasilinear and additive, no welfare-maximizing auction where losers pay nothing is OSP-implementable when more than one good is sold. Our analysis makes use of a gradual revelation principle, an analog of the (direct) revelation principle for OSP mechanisms that we present and prove.
A joint work with Yannai A. Gonczarowski
(December-19 2017) Dima Shaiderman, Tel Aviv University
Title: Exchangeable Processes: de Finetti Theorem Revisited
Abstract: A sequence of random variables is exchangeable if the distribution of any finite set of variables is invariant to permutations. de Finetti's theorem states that every exchangeable sequence is a convex combination of i.i.d. processes. We explore the relationship between exchangeability and frequency-dependent posteriors. We show that any real-valued stationary process that has the following two properties is exchangeable: (a) the posteriors depend only on the empirical frequency of the history, and (b) a permutation of any history having positive probability also has positive probability.
Joint work with Ehud Lehrer.
(December-12 2017) Yevgeny Tsodikovich, Tel Aviv University
Title: Tacit Collusion using Asynchronous Play
Abstract: We deal with n-firm competitive markets with possibilities of cooperation. The firms would rather coordinate their prices to increase the revenue, but in many countries cartels are illegal and explicit coordination is impossible. Instead, the firms might tacitly collude on a cooperative result, by coordinating their actions implicitly, without a binding agreement or communication.
We discuss how such tacit collusion occurs when actions are asynchronous, so that some firms cannot change their price at certain instances in time. We compare asynchronous play to synchronous play and derive conditions for the former to be preferred by all the firms.
(November-28 2017) Kristoffer Arnsfelt Hansen, Aarhus University
Title: The Real Computational Complexity of Nash Equilibrium Refinements
Abstract: We show that for n>=3, the computational task of verifying the conditions of several standard Nash equilibrium refinements (e.g., trembling hand perfect equilibrium and proper equilibrium) in n-player games is computationally equivalent to the decision problem of the existential theory of the reals. In particular the tasks are all computationally equivalent to each other. We shall also discuss the same questions for 2-player games.
Based on work published at SAGT 2017 and recent unpublished joint work with Troels Bjerre Lund.
(November-21 2017) Oren Yuval, Tel Aviv University
Title: The Team Assignment Problem
Abstract: We study the team assignment problem, which involves the creation of n teams out of a community composed of d genders, where every team should contain a_i ≥ 1 participants of each gender i, 1 ≤ i ≤ d. We define the problem as a variant, or in some cases a generalization, of the classical stable matching problem. The innovation in our model is the flexibility with regard to the number of participants of each gender that should be assigned to every team. We study several variations of this problem; for each variation we define the concept of a stable assignment and either provide an efficient algorithm that finds such an assignment or prove that the problem is NP-complete.
The main result of this work is a generalization of the multidimensional stable matching problem, that is, the problem of matching n teams of d genders such that every team contains one member of each gender. This problem has been shown to be solvable in polynomial time under a specific structure of preference relations and a specific definition of stable assignment. We show that under this problem structure, the same result is obtained even if each team contains more than one participant of some genders. Another contribution of this work is the proof of NP-completeness of several variations of the team assignment problem.
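As background, the classical one-to-one stable matching problem that this work generalizes is solved by Gale-Shapley deferred acceptance. A minimal sketch with illustrative preference lists:

```python
# Background sketch: Gale-Shapley deferred acceptance for the classical
# one-to-one stable matching problem. The preference lists are illustrative.

def deferred_acceptance(prop_prefs, recv_prefs):
    """prop_prefs[p] is p's ranked list of receivers; likewise recv_prefs.
    Returns a dict mapping each receiver to its final proposer."""
    rank = {r: {p: i for i, p in enumerate(prefs)}
            for r, prefs in recv_prefs.items()}
    next_choice = {p: 0 for p in prop_prefs}   # next receiver to propose to
    free = list(prop_prefs)
    match = {}                                 # receiver -> proposer
    while free:
        p = free.pop()
        r = prop_prefs[p][next_choice[p]]
        next_choice[p] += 1
        if r not in match:
            match[r] = p                       # r was unmatched: accept
        elif rank[r][p] < rank[r][match[r]]:   # r prefers the new proposer
            free.append(match[r])
            match[r] = p
        else:
            free.append(p)                     # r rejects p
    return match

prop_prefs = {"a": ["x", "y"], "b": ["x", "y"]}
recv_prefs = {"x": ["b", "a"], "y": ["a", "b"]}
matching = deferred_acceptance(prop_prefs, recv_prefs)
```

Here both proposers want "x" first; "x" keeps its preferred proposer "b", and the rejected "a" settles for "y", yielding a stable matching with no blocking pair.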
(November-07 2017) Avishay Weinbaum, Tel Aviv University
Title: Approachability with Constraints
Abstract: We study approachability theory in the presence of constraints. Given a repeated game with vector payoffs, we characterize the pairs of sets (A,D) in the payoff space such that Player 1 can guarantee that the long-run average payoff converges to the set A, while the average payoff always remains in D.
Joint work with: Gaëtan Fournier, Eden Kuperwasser, Orin Munk, Eilon Solan
(October-24 2017) Shakhar Smorodinsky, Ben-Gurion University
Title: The Price of Anarchy in Hypergraph Coloring Games
Abstract: A 'hypergraph coloring game' is defined as follows. Let V be a set of players and E some collection of subsets of V (also known as coalitions in game theory, or hyperedges in combinatorics). The pair H = (V, E) is called a hypergraph. Let k be some positive integer. Each player in V must choose one of k given colors, and for each coalition it is a member of, it receives some utility as a function of the coloring profile of that coalition. Its overall utility is the sum over the coalitions it is a member of. For example, its utility may be the number of non-monochromatic coalitions containing it.
In this paper we motivate the study of hypergraph coloring games and turn to study the price of anarchy for some variants of this class of games.
Joint work with Rann Smorodinsky
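For the example utility above (the number of non-monochromatic coalitions containing a player), the total number of non-monochromatic hyperedges is an exact potential, so best-response dynamics reach a pure equilibrium. A minimal sketch with an illustrative hypergraph and k = 2:

```python
# Hedged sketch: best-response dynamics in a hypergraph coloring game where
# each player's utility is the number of non-monochromatic hyperedges
# containing it. The hypergraph and number of colors are illustrative.

def utility(v, coloring, edges):
    return sum(1 for e in edges
               if v in e and len({coloring[u] for u in e}) > 1)

def best_response_dynamics(n, edges, k, coloring):
    improved = True
    while improved:                       # terminates: exact potential game
        improved = False
        for v in range(n):
            best_c, best_u = coloring[v], utility(v, coloring, edges)
            for c in range(k):
                coloring[v] = c
                u = utility(v, coloring, edges)
                if u > best_u:
                    best_c, best_u = c, u
                    improved = True
            coloring[v] = best_c          # keep v's best color
    return coloring

edges = [{0, 1}, {1, 2}, {0, 2, 3}]
coloring = best_response_dynamics(4, edges, k=2, coloring=[0, 0, 0, 0])
```

On this small instance the dynamics end at a pure equilibrium in which every hyperedge is non-monochromatic, i.e. optimal welfare; the price-of-anarchy question in the talk asks how bad such equilibria can be in general.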
(June-26 2017) Omer Tamuz, Caltech
Title: Optimal schedules for security games
Abstract: A defender protects a set of targets from an attacker who chooses the location, time and duration of the attack. We study Stackelberg equilibria in which the defender commits to a mixed strategy and the attacker responds optimally. We find an optimal defender strategy, and show that an ergodic-theoretic approach yields a simple, easy-to-implement strategy that is close to optimal.
Joint with David Kempe and Leonard J. Schulman
(June-20 2017) Assaf Romm, The Hebrew University
Title: The law of one price for heterogeneous goods
Abstract: The law of one price fails to hold in the presence of heterogeneity. Employing Shapley and Shubik's (1971) assignment games to model markets in which buyers have different valuations for different goods, and under the assumption of bounded heterogeneity, we recover a tight connection between competitive prices. This connection holds for an increasingly large fraction of markets. This approximate version of the law of one price implies an extremely uneven surplus distribution even for slightly unbalanced markets, as is the case with homogeneous goods. The results hold even in the presence of goods of different qualities and buyers with different sensitivities to quality.
(June-13 2017) Chang Zhao, Tel Aviv University
Title: Inspection games
Abstract: Consider a discounted repeated inspection game with two agents and one principal. Both agents may profit by violating certain rules, while the principal can inspect at most one agent in each period, inflicting a punishment on an agent who is caught violating the rules. Suppose the principal, whose sole aim is to deter violations, has a Stackelberg leader advantage. We attempt to characterize the principal's optimal inspection strategy.
(June-11 2017) Eilon Solan, Tel Aviv University
Title: Public correlated equilibrium in quitting games
Abstract: I will present (very) recent results on the existence of public correlated equilibrium in quitting games, and relate this problem to linear complementarity problems.
(June-6 2017) Shmuel Zamir, The Hebrew University
Title: Sequential Aggregation of Judgments
Abstract: We consider a standard model of judgment aggregation as presented, for example, in Dietrich (2015). For this model we introduce a sequential aggregation procedure (SAP) which uses the majority rule as much as possible. The ordering of the issues is assumed to be exogenous. The exact definition of SAP is given in Section 3. In Section 4 we construct an intuitive relevance relation for our model, closely related to conditional entailment. Unlike Dietrich (2015), where the relevance relation is given exogenously as part of the model, we require that the relevance relation be derived from the agenda. We prove that SAP has the property of independence of irrelevant issues (III) with respect to (the transitive closure of) our relevance relation. As III is weaker than the property of proposition-wise independence (PI), we do not run into impossibility results as does List (2004), who incorporates PI in some parts of his analysis. We proceed to characterize SAP by anonymity, restricted monotonicity, local neutrality, restricted agenda property, and independence of past deliberations (see Section 5 for the precise details). Also, we use this occasion to show that Roberts's (1991) characterization of choice by plurality voting can be adapted to our model.
Joint work with Bezalel Peleg
(April-25 2017) Amir Ban, Tel Aviv University
Title: When Should an Expert Make Predictions?
Abstract: We investigate the behavior of experts who seek to make predictions with maximum impact on an audience. At a known future time, a certain continuous random variable will be realized. A public prediction gradually converges to the outcome, and an expert has access to a more accurate prediction. We study when the expert should reveal his information, when his reward is based on a proper scoring rule (i.e., is proportional to the change in log-likelihood of the outcome).
We consider two cases: The expert may make a single prediction, or the expert is allowed to revise previous predictions. In the first, we demonstrate that a threshold-based policy is called for, and show a tight connection to the Law of the Iterated Logarithm. When multiple predictions are allowed, it is optimal for the expert to always tell the truth, and to make a new prediction whenever he has a new signal.
Joint work with Yossi Azar and Yishay Mansour
(March-21 2017) Svetlana Boyarchenko, University of Texas
Title: Introspective Unawareness and Observable Choice
Abstract: Risks related to events that arrive randomly play an important role in many real-life decisions. Models of learning and experimentation based on two-armed Poisson bandits have addressed several important aspects of strategic and motivational learning when events arrive at the jump times of a standard Poisson process. At the same time, these models remain mostly abstract theoretical models with few direct economic applications. We suggest a new class of models of strategic experimentation which are almost as tractable as exponential models, but which incorporate such realistic features as dependence of the expected rate of news arrival on the time elapsed since the start of the experiment, and judgment about the quality of the "risky" arm based on the evidence of a series of trials, as opposed to a single success or failure as in exponential models with conclusive experiments.
We demonstrate that, unlike in the exponential models, players may stop experimentation before the first breakdown happens. Moreover, ceteris paribus, experimentation in a model with breakthroughs may last longer than experimentation in the corresponding model with breakdowns.
(March-14 2017) Evan Piermont, Pittsburgh University
Title: Introspective Unawareness and Observable Choice
Abstract: This paper considers a framework in which the decision maker's (DM) knowledge and awareness are explicitly modeled, as is her ability to reason about her own (un)awareness. The DM has a ranking over consumption alternatives that is informed by her epistemic state (i.e., what she knows and what she is aware of), which can serve as a foundation for well-known models. The main result is a characterization, via observable choice, of introspective unawareness: a DM who is both unaware of some information and aware that she is unaware. In static environments, or when the DM is blind to her own ignorance, the presence of unawareness does not produce any observable choice patterns. However, under dynamic introspective unawareness, the DM will be unwilling to commit to making future choices, even when given the ability to write a contingent plan that executes a choice conditional on the realization of uncertain events. This is behavior that cannot be explained by uncertainty alone (i.e., without appealing to unawareness).
(January-17 2017) Eilon Solan, Tel Aviv University
Title: The Modified Stochastic Game: Application
Abstract: In the first talk of the series I presented the modified stochastic game and studied some of its properties. In this talk I will use this tool to prove the existence of a uniform equilibrium in a certain class of stochastic games.
This talk will be self-contained and will use the modified game as a black box. In particular, people who were not present in the first talk, or who were present but did not follow all the details, will be able to enjoy this talk.
(January-3 2017) Eilon Solan, Tel Aviv University
Title: The Modified Stochastic Game: Theory
Abstract: This talk is the first of a series of two talks, in which I will present the concept of the modified stochastic game and demonstrate its usefulness in studying the uniform equilibrium of multiplayer stochastic games. In the first talk I will present the theory that underlies this approach: I will define a modified game for every multiplayer stochastic game, and study its equilibria, max-min value, and min-max value.
(December-27 2016) Eran Shmaya, Northwestern University
Abstract: We consider a decision maker with recursive utility, as formalized by Epstein and Zin (1989). We show that, as this decision maker becomes more patient, his ranking of conditionally i.i.d. processes is approximately that of an expected utility decision maker. The utility function we derive has a simple closed form that makes explicit the connection between the parameters of the Epstein-Zin utility and the distinction between risk (variability of outcomes with known probability) and structural uncertainty about the process. Our results also imply that such a decision maker is indifferent to the timing of resolution of uncertainty, yet may display different attitudes towards risk and inter-temporal substitution. Joint with Nabil Al-Najjar
(December-20 2016) Roee Teper, University of Pittsburgh
Title: Plans of Action
Abstract: We study the extent to which contemporaneous correlations across actions affect an agent’s preferences over the different strategies in exploration problems. We show that such correlations carry no economic content and do not affect the agent’s preferences and, in particular, her optimal strategy. We argue that for similar reasons there is an inherent partial identification of the beliefs in exploration problems. Nevertheless, even under the partial identification, we show there are meaningful behavioral restrictions allowing the modeler to test whether the agent is acting according to some Bayesian model.
Joint work with Evan Piermont
(December-13 2016) Ilan Nehama, Tel Aviv University
Title: Analyzing Games with Ambiguous Player Types Using the MINthenMAX Decision Model
Abstract: In many common interactive scenarios, participants lack information about other participants, and specifically about the preferences of other participants. In this work, we model an extreme case of incomplete information, which we term games with type ambiguity, where a participant lacks even information enabling him to form a belief on the preferences of others. Under type ambiguity, one cannot analyze the scenario using the commonly used Bayesian framework, and therefore one needs to model the participants using a different decision model.
To this end, we present the MINthenMAX decision model under ambiguity. This model is a refinement of Wald's MiniMax principle, which we show to be too coarse for games with type ambiguity. We characterize MINthenMAX as the finest refinement of the MiniMax principle that satisfies three properties we claim are necessary for games with type ambiguity. This prior-less approach we present here also follows the common practice in computer science of worst-case analysis.
Finally, we define and analyze the corresponding equilibrium concept, when all players follow MINthenMAX. We demonstrate this equilibrium by applying it to two common economic scenarios: coordination games and bilateral trade. We show that in both scenarios, an equilibrium in pure strategies always exists, and we analyze the equilibria.
(November-22 2016) Yevgeny Tsodikovich, Tel Aviv University
Title: Pricing with stochastic horizon
Abstract: In this paper we model sticky prices - a situation where prices are set and remain constant for a random amount of time, despite possible changes in demand. We present two different methods to model this phenomenon and deduce a domination relation between the random variables that govern the stickiness of the prices.
(November-15 2016) Kristoffer Arnsfelt Hansen, Aarhus University
Title: The computational complexity of trembling hand perfection
Abstract: In this talk we consider the computational complexity of trembling hand perfect equilibria in n-player games in strategic form. We show the following results for n at least 3: 1. It is NP-hard and SQRT-SUM-hard to decide if a given pure strategy Nash equilibrium is trembling hand perfect. 2. The task of computing an approximation of a trembling hand perfect equilibrium is FIXP_a-complete. This shows that the task of computing an approximation of a trembling hand perfect equilibrium is equivalent to the task of computing an approximation of a Nash equilibrium!
Based on joint work with Peter Bro Miltersen and Troels Bjerre Sørensen published at SAGT'2010, and with Kousha Etessami, Peter Bro Miltersen, and Troels Bjerre Sørensen published at SAGT'2014.
(November-8 2016) Dudu Lagziel, Tel Aviv University.
Title: The Tried-Stone Scheme and a Million-Dollar Bet.
Abstract: We design incentive schemes for portfolio managers that filter out suboptimal portfolio managers: only the best portfolio managers, in terms of expected payoffs, agree to participate in the single-period investment. The results hold in general financial markets, where uninformed investors face managers of different capabilities and can only observe their one-shot realized returns. Policy implications are derived accordingly.
(November-1 2016) Xavier Venel, Paris School of Economics and the Centre d'Économie de la Sorbonne, Université Paris 1 Panthéon-Sorbonne.
Title: Dynamical strategic influence in a social network.
Abstract: We consider a dynamical model of influence with a set of non-strategic agents and two strategic agents. The non-strategic agents are organized in a network along which they communicate. The strategic agents hold two opposite opinions, 0 and 1. At each stage, the opinion of the non-strategic agents changes depending on the opinions of their neighbors at the previous stage and of the strategic agents, if they are under their influence. We study this model with the tools of stochastic games and prove the existence of the uniform value.
(October-27 2015) Eilon Solan.
(November-3 2015) Yaron Azrieli.
(November-10 2015) David Lagziel.
(November-17 2015) Gaetan Fournier.
(November-24 2015) Igal Milchtaich.
(December-8 2015) Moshe Tennenholtz.
(December-15 2015) Ron Holzman.
(December-22 2015) Rann Smorodinsky.
(December-29 2015) Omer Tamuz.
(January-5 2016) Shiri Alon-Eron.
(January-12 2016) Francis Bloch.
(March-1 2016) Sergiu Hart.
(March-8 2016) Ron Peretz.
(March-15 2016) Simina Brânzei.
(March-22 2016) Itai Arieli.
(March-29 2016) Yakov Babichenko.
(April-5 2016) Galit Ashkenazi-Golan.
(April-12 2016) Yaron Azrieli.
(May-10 2016) Niv Buchbinder.
(May-24 2016) Eilon Solan.
(May-31 2016) Yannai Gonczarowski.
(June-7 2016) Adam Lampert.
(June-7 2016) Adam Lampert, Arizona State University.
Title: Dynamic game theory approaches to analyze management of ecological systems by multiple agents.
Abstract: A major threat to human well-being is the accelerated rate of environmental degradation. Restoration of degraded ecological systems entails cooperation among multiple agents, such as landowners, agencies, and sometimes countries. These agents may have incentives to invest less and free-ride on others' investments, which may lead to inefficient treatment. In this talk, I will introduce a dynamic game approach to study how multiple agents can establish a common environmental project, taking into account the costs of both treatment and degradation over time. As a first step, I will use optimal control to show some general principles governing dynamic investment. Next, I will introduce a dynamic game model where each agent decides how and how much to invest over time, given some state feedback. I will show that, under certain conditions, there exists a subgame perfect Nash equilibrium where all agents invest simultaneously, albeit slowly enough, and no agent has an incentive to reduce his/her investment. This solution, which exists in the “dynamic” model, differs from solutions to “static” games of public good provision. Finally, I will introduce a model with incomplete information where each agent may either suffer or not from the environmental degradation, and the agent's type is initially unknown to the other agents. I will examine different mechanisms to establish the project despite the lack of information, which often reduces efficiency. I will demonstrate that if the agents adopt the simultaneous-investment Nash equilibrium proposed here, the lack of information may actually increase efficiency in some cases. This study suggests that utilizing the course of time may reduce free-riding and facilitate cooperation.
(May-31 2016) Yannai Gonczarowski, Hebrew University of Jerusalem and Microsoft Research.
Title: The Menu-Size Complexity of Revenue Approximation (with Moshe Babaioff and Noam Nisan).
Abstract: We consider a monopolist that is selling n items to a single additive buyer, where the buyer's values for the items are drawn according to independent distributions F1, F2, ..., Fn that possibly have unbounded support. It is well known that - unlike in the single-item case - the revenue-optimal auction (a pricing scheme) may be complex, sometimes requiring a continuum of menu entries. It is also known that simple auctions can extract a constant fraction of the optimal revenue. Nonetheless, whether an arbitrarily high fraction of the optimal revenue can be extracted via a finite menu size remained an open question.
In this paper, we give an affirmative answer to this open question, showing that for every n and for every epsilon>0, there exists a complexity bound C=C(n,epsilon) such that auctions of menu size at most C suffice for obtaining a (1-epsilon) fraction of the optimal revenue. We prove upper and lower bounds on the revenue approximation complexity C(n,epsilon).
Link: http://arxiv.org/abs/1604.06580
(May-24 2016) Eilon Solan, Tel Aviv University
Title: Acceptable Strategy Profiles in Stochastic Games.
Abstract: In this paper we present a new solution concept for multiplayer stochastic games, namely, acceptable strategy profiles. For each player $i$ and state $s$ in a stochastic game, let $w_i(s)$ be a real number. A strategy profile is \emph{$w$-acceptable}, where $w=(w_i(s))$, if the discounted payoff to each player $i$ is at least $w_i(s)$ when the initial state is $s$, provided the discount factor of the players is sufficiently close to 1. Our goal is to provide simple strategy profiles that are $w$-acceptable for payoff vectors $w$ in which all coordinates are high.
(May-10 2016) Niv Buchbinder, Tel Aviv University
Title: A Primal-Dual Approach to Online Optimization Problems.
Abstract: The primal-dual method is one of the fundamental design methodologies in the areas of linear programming, combinatorial optimization, and approximation algorithms. In this talk I will provide an introduction to the area of online combinatorial optimization. I will then discuss the primal-dual method as a unified framework to the design and analysis of online algorithms. Examples for which this approach is applicable include central online problems such as paging, routing, scheduling, online set cover, and graph connectivity problems. Finally, I will show some connections between online combinatorial optimization and online learning.
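The primal-dual framework surveyed above is usually introduced through the ski-rental problem. As a minimal warm-up sketch (not taken from the talk; all names below are hypothetical), the deterministic break-even rule is 2-competitive:

```python
# Illustrative only: the break-even rule for ski rental, the simplest
# online problem used to motivate primal-dual analysis of online algorithms.

def ski_rental_cost(buy_price: int, days_skied: int) -> int:
    """Online rule: rent until cumulative rent would reach the buy price,
    then buy. The cost is at most twice the offline optimum."""
    cost = 0
    for day in range(1, days_skied + 1):
        if day < buy_price:
            cost += 1          # rent one more day
        else:
            cost += buy_price  # break-even point reached: buy
            break
    return cost

def offline_optimum(buy_price: int, days_skied: int) -> int:
    """With hindsight: either rent every day or buy on day one."""
    return min(days_skied, buy_price)
```

The primal-dual view interprets the rent/buy decision as incrementally raising dual variables until a primal constraint becomes tight; the same template underlies the set-cover and connectivity results the abstract mentions.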
(April-12 2016) Yaron Azrieli, Ohio State University
Title: Symmetric mechanism design (joint with Ritesh Jain).
Abstract: Designers of economic mechanisms often have an incentive to bias the rules of the mechanism in favor of certain groups of agents. This paper studies the extent to which a policy prohibiting biased mechanisms is effective in achieving fair outcomes. Our main result is a characterization of the class of social choice functions that can be implemented by symmetric mechanisms. When the solution concept used is Bayes-Nash equilibrium, symmetry is typically not very restrictive and discriminatory social choice functions can be implemented by symmetric mechanisms. Our characterization in this case is based on a ‘revelation principle’ type of result, where we show that a social choice function can be symmetrically implemented if and only if a particular kind of (indirect) symmetric mechanism implements it. When implementation in dominant strategies is considered, only symmetric social choice functions can be implemented by symmetric mechanisms. We illustrate our results in environments of voting with private values, voting with a common value, and assignment of indivisible goods.
link: http://yaronazrieli.weebly.com/uploads/6/5/6/8/65683525/symmetric-design-web.pdf
(April-5 2016) Galit Ashkenazi-Golan, Seminar Hakibutzim and Tel Aviv University
Title: What You Get is What You See: Repeated Games with Observable Payoffs (joint with Ehud Lehrer).
Abstract: We consider two-player repeated games where the players observe their own payoffs. We prove that any strictly efficient payoff can be obtained as a sequential equilibrium payoff when costly communication is available and the players are sufficiently patient.
(March-29 2016) Yakov Babichenko, Technion
Title: Random Extensive form games (joint with Itai Arieli).
Abstract: We consider two-player random extensive form games where the payoffs at the leaves are independently drawn uniformly at random from a given feasible set C. We study the asymptotic distribution of the subgame perfect equilibrium outcome for binary trees of increasing depth under various random (or deterministic) assignments of players to nodes. We characterize the assignments under which the asymptotic distribution concentrates around a point. Our analysis provides a natural way to derive from the asymptotic distribution a novel solution concept for two-player bargaining problems with a solid strategic justification.
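As a hypothetical illustration of the object studied (not the paper's analysis), backward induction computes the subgame perfect equilibrium outcome of one such random tree; here the feasible set is taken to be C = [0, 1]^2 and players alternate by depth:

```python
import random

# Illustrative sketch only: SPE outcome of a random binary tree via
# backward induction. All names are hypothetical.

def spe_outcome(depth, player=0, rng=random):
    """Return the subgame perfect equilibrium payoff pair of a random
    binary tree of the given depth with alternating movers."""
    if depth == 0:
        # leaf payoffs drawn i.i.d. uniformly from C = [0, 1]^2
        return (rng.random(), rng.random())
    left = spe_outcome(depth - 1, 1 - player, rng)
    right = spe_outcome(depth - 1, 1 - player, rng)
    # the player moving at this node picks the subtree maximizing her payoff
    return left if left[player] >= right[player] else right
```

Sampling spe_outcome for growing depth gives an empirical view of the asymptotic outcome distribution the abstract refers to.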
(March-22 2016) Itai Arieli, Technion.
Title: Private Bayesian Persuasion.
Abstract: We consider a multi-agent Bayesian persuasion problem where an informed sender tries to persuade a group of agents to adopt a certain product. The sender is allowed to commit to a signalling policy where she sends a private signal to every agent. The payoff to the sender is a function of the subset of adopters. We characterize an optimal signalling policy and the maximal revenue to the sender for three different types of payoff functions: supermodular, symmetric submodular, and a supermajority function. Moreover, using tools from cooperative game theory we provide a necessary and sufficient condition under which public signalling policy is optimal.
(March-15 2016) Simina Brânzei, The Hebrew University of Jerusalem.
Title: Recent developments in fair division for strategic agents
Abstract: I will talk about fair division for strategic agents and will start with the cake cutting problem, which models the division of a heterogeneous divisible good among players with different preferences. Here I will talk about a dictatorship theorem, where under mild technical conditions, we show that any deterministic strategy-proof protocol for two agents in the standard Robertson-Webb query model is dictatorial, that is, there is a fixed agent to which the protocol allocates the entire cake. For n > 2 agents, a similar impossibility holds, namely there always exists an agent that gets the empty piece (i.e. no cake). In contrast, we exhibit randomized protocols that are truthful in expectation and compute approximately fair allocations. I will also talk about an algorithmic framework for strategic agents, where fair outcomes can be implemented in the equilibria of protocols.
If time permits, I will also mention the problem of allocating multiple divisible goods under the fairness solution concept of Nash social welfare. When the preferences are known, this solution is achieved in the Fisher market equilibrium outcomes. However, with strategic agents the strong fairness guarantees of market equilibria can be destroyed. We find that for additive valuations, the Fisher market approximates the NSW within a factor of 2, but for Leontief utilities the approximation degrades linearly with the number of players. Surprisingly, a mechanism known as Trading Post not only retains the constant 2-approximation of Fisher for additive utilities, but approximately implements the NSW objective for Leontief valuations: its price of anarchy is 1 + epsilon for every epsilon > 0.
These are joint works with several people: Peter Bro Miltersen, Ioannis Caragiannis, David Kurokawa, Ariel Procaccia, Vasilis Gkatzelis, and Ruta Mehta.
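For background (this example is not from the talk; names are hypothetical), the two-agent baseline against which the dictatorship theorem stands out is cut-and-choose: it is proportional but manipulable by the cutter, which is exactly the tension the impossibility result formalizes. A discrete-atom sketch:

```python
# Cut-and-choose over a cake of discrete atoms. Each list entry is the
# value an agent assigns to that atom. Hypothetical illustration only.

def cut_and_choose(cutter_vals, chooser_vals):
    total = sum(cutter_vals)
    cut = len(cutter_vals)
    prefix = 0
    for i, v in enumerate(cutter_vals):
        prefix += v
        if prefix >= total / 2:  # cutter cuts at (roughly) her half-value point
            cut = i + 1
            break
    # chooser takes whichever side she values more
    chooser_takes_left = sum(chooser_vals[:cut]) >= sum(chooser_vals[cut:])
    cutter_value = sum(cutter_vals[cut:]) if chooser_takes_left else sum(cutter_vals[:cut])
    chooser_value = sum(chooser_vals[:cut]) if chooser_takes_left else sum(chooser_vals[cut:])
    return cutter_value, chooser_value
```

With discrete atoms an exact half may be unattainable for the cutter, but the chooser is always guaranteed at least half of her own total value; truthful reporting by the cutter, however, is not incentive-compatible.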
(March-8 2016) Ron Peretz, Bar Ilan University.
Title: Limits of Correlation with Bounded Complexity (joint with Gilad Bavly).
Abstract: While Peretz (2013) showed that, perhaps surprisingly, players whose recall is bounded can correlate in a long repeated game against a player of greater recall capacity, we show that correlation is already impossible against an opponent whose recall capacity is only linearly larger. This result closes a gap in the characterisation of minmax levels, and hence also equilibrium payoffs, of repeated games with bounded recall.
(March-1 2016) Sergiu Hart, the Hebrew University of Jerusalem.
Title: Smooth Calibration, Leaky Forecasts, Finite Recall, and Nash Dynamics.
Abstract: We propose to smooth out the calibration score, which measures how good a forecaster is, by combining nearby forecasts. While regular calibration can be guaranteed only by randomized forecasting procedures, we show that smooth calibration can be guaranteed by deterministic procedures. As a consequence, it does not matter if the forecasts are leaked, i.e., made known in advance: smooth calibration can nevertheless be guaranteed (while regular calibration cannot). Moreover, our procedure has finite recall, is stationary, and all forecasts lie on a finite grid. We also consider related problems: online linear regression, weak calibration, and uncoupled Nash dynamics in n-person games.
(January-12 2016) Francis Bloch, Universite Paris 1.
Title: Dynamic allocation of objects to queueing agents (joint with David Cantala).
Abstract: This paper analyzes the optimal assignment of objects which arrive sequentially to agents organized in a waiting list. Applications include the assignment of social housing and organs for transplants. We analyze the optimal design of probabilistic queuing disciplines, punishment schemes, the optimal timing of applications, and information releases. We consider three efficiency criteria: the vector of values of agents in the queue, the probability of misallocation, and the expected waste. Under private values, we show that the first-come first-served mechanism dominates a lottery according to the first two criteria, but that the lottery dominates first-come first-served according to the last criterion. Under common values, the first-come first-served mechanism dominates all other mechanisms according to the first two criteria, while the lottery dominates all other mechanisms according to the last one. Punishment schemes accelerate turnover in the queue at the expense of agents currently in the waiting list, application schemes with commitment dominate sequential offers, and information release always increases the value of agents at the top of the waiting list.
(January-5 2016) Shiri Alon-Eron, Bar Ilan University.
Title: Competitive equilibrium as a bargaining solution: an axiomatic approach.
Abstract: The paper introduces an axiomatic characterization of a solution to bargaining problems. The bargaining problems addressed are specified by: (a) the preference relations of the bargaining parties, (b) resources that are the subject of bargaining, and (c) a pre-specified disagreement bundle for each party that would result in case bargaining fails. The approach is ordinal, meaning that the solution is independent of the specific utility indices chosen to represent parties' preferences. We propose axioms that characterize a solution that matches each bargaining problem with an exchange economy whose parameters are derived from the problem, and returns the set of equilibrium allocations corresponding to one equilibrium price vector of that economy. The axioms describe a solution that is the result of an impartial arbitration process, expressing the view that arbitration is a natural method to settle disputes in which agents have conflicting interests, but can all gain from a compromise.
(December-29 2015) Omer Tamuz, CalTech.
Title: The Speed of Social Learning (Joint with Matan Harel, Elchanan Mossel, and Philipp Strack).
Abstract: We consider Bayesian agents who learn from exogenously provided private signals, as well as the actions of the others. We show that learning from actions is slower than learning from signals, that increased interaction between agents can lower the speed of learning, and that very large groups do not learn very quickly.
(December-22 2015) Rann Smorodinsky, Technion.
Title: Perception Games and Privacy (joint with Ronen Gradwohl).
Abstract: Players (people, firms, states, etc.) have privacy concerns that may affect their choice of actions in strategic settings. We use a variant of signaling games to model this effect and study its relation to pooling behavior, misrepresentation of information, and inefficiency. We discuss these issues and show that common intuitions may lead to inaccurate conclusions about the implications of privacy.
(December-15 2015) Ron Holzman, Technion.
Title: Strong equilibrium in network congestion games (joint work with Dov Monderer).
Abstract: A network congestion game is played on a directed, two-terminal network. Every player chooses a route from his origin to his destination. The cost of a route is the sum of the costs of the arcs on it. The arc cost is a function of the number of players who use it. Rosenthal proved that such a game always has a Nash equilibrium in pure strategies. Here we pursue a systematic study of the classes of networks for which a strong equilibrium is guaranteed to exist, under two opposite monotonicity assumptions on the arc cost functions. Our main results are: (a) If costs are increasing, strong equilibrium is guaranteed on extension-parallel networks, regardless of whether the players' origins and destinations are the same or may differ. (b) If costs are decreasing, and the players have the same origin but possibly different destinations, strong equilibrium is guaranteed on series-parallel networks. (c) If costs are decreasing, and both origins and destinations may differ, strong equilibrium is guaranteed on multiextension-parallel networks. In each case, the network condition is not only sufficient but also necessary in order to guarantee strong equilibrium. These results extend and improve earlier ones by Holzman and Law-Yone in the increasing case, and by Epstein et al. in the decreasing case.
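Rosenthal's existence result cited above is constructive. As an illustrative sketch (not the paper's strong-equilibrium analysis; names are hypothetical), best-response dynamics on a parallel-link network must terminate in a pure Nash equilibrium, because each improving move strictly decreases Rosenthal's potential Phi(profile) = sum over arcs a of cost_a(1) + ... + cost_a(load(a)):

```python
# Hypothetical illustration of Rosenthal's potential argument on a
# parallel-link network (a special case of the networks in the talk).

def potential(profile, costs):
    """Rosenthal's potential: sum over arcs of cost_a(1) + ... + cost_a(load)."""
    return sum(
        sum(costs[a](k) for k in range(1, profile.count(a) + 1))
        for a in range(len(costs))
    )

def best_response_dynamics(n_players, costs):
    """Iterate best responses; terminates because each move lowers the potential."""
    profile = [0] * n_players  # all players start on link 0
    improved = True
    while improved:
        improved = False
        for i in range(n_players):
            loads = [profile.count(a) for a in range(len(costs))]
            def switch_cost(a):
                # load on arc a increases by one unless player i is already there
                return costs[a](loads[a] + (0 if a == profile[i] else 1))
            best = min(range(len(costs)), key=switch_cost)
            if switch_cost(best) < switch_cost(profile[i]):
                profile[i] = best
                improved = True
    return profile

# Two links with increasing per-user costs c0(k) = k and c1(k) = 2k:
costs = [lambda k: k, lambda k: 2 * k]
equilibrium = best_response_dynamics(3, costs)
```

With three players the dynamics settles on two players using the cheap link and one using the expensive one, and the potential is lower at the equilibrium than at the all-on-one-link start.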
(December-8 2015) Moshe Tennenholtz, Technion.
Title: Sequential Commitment Games (joint work with Itai Arieli and Yakov Babichenko).
Abstract: A main social puzzle is how a set of participants finds its way to a Pareto-efficient outcome while each participant maximizes his own utility. In his seminal book, Schelling introduced several concepts that may serve this purpose. Two of the main issues discussed in Schelling's book that led to central developments in game theory, and that one may further exploit, are credible commitments and legitimate authority. Commitments can be used to promise cooperation or to threaten against deviation from cooperation by other participants, but they must subscribe to sequential rationality. A legitimate authority, aka a mediator, can be used to enforce commitments and to facilitate moving the society from one equilibrium to another. In this work we extend the above foundational literature by introducing the use of sequential unconditional voluntary commitments in extensive form games, where the main novel aspect is the design of a proper ordering of commitments that will guarantee a Pareto-optimal outcome, where the ordering is determined without knowing the utility structure. Our first main result demonstrates a family of depth-first-search commitment protocols for which Pareto efficiency is implementable in every two-player extensive form game. We further show that, in general, efficiency is not implementable if the number of players in the game is at least four. Lastly, we consider the class of quitting games and show that within this class Pareto efficiency is implementable for every number of players.
The work on sequential commitment games is joint work with Itai Arieli and Yakov Babichenko. Our study of sequential commitment games is part of a general agenda of mediator design. If time permits we will briefly discuss some additional recent results under this agenda.
(November-24 2015) Igal Milchtaich, Bar Ilan University.
Title: Perfect Bayesian Polyequilibrium.
Abstract: Polyequilibrium is a generalization of Nash equilibrium that is applicable to any strategic game, whether finite or otherwise, and to dynamic games, with perfect or imperfect information. It differs from equilibrium in specifying strategies that players do not choose and by requiring an after-the-fact justification for the exclusion of these strategies rather than the retainment of the non-excluded ones. Specifically, for each excluded strategy of each player there must be a non-excluded one that responds to every profile of non-excluded strategies of the other players at least as well as the first strategy does. A polyequilibrium’s description of the outcome of the game may be more or less specific, depending on the number and the identities of the non-excluded strategy profiles. A particular property of the outcome is said to hold in a polyequilibrium if it holds for all non-excluded profiles. Such a property does not necessarily hold in any Nash equilibrium in the game. In this sense, the generalization proposed in this work extends the set of justifiable predictions concerning a game’s results.
(November-17 2015) Gaetan Fournier, Tel-Aviv University.
Title: Market games.
Abstract: A market with asymmetric information can be viewed as a repeated game between informed and uninformed agents. We discuss several results in this literature, and in particular the case of a risk-averse market (De Meyer and Fournier 2015) and the case of a multi-asset market (Fournier 2015). The main results of these papers are that the price processes in such markets should be, after a change of probability, a particular kind of Brownian martingale, called CMMV.
(November-10 2015) Dudu Lagziel, Tel-Aviv University.
Title: Reward Schemes (joint with Ehud Lehrer).
Abstract: An investor has some funds invested through investment firms. She also has additional funds to allocate among these investment firms according to the firms' performance. While the investor tries to maximize her total expected earnings, each investment firm tries to maximize the overall amount of funds it will be allocated to manage. A reward scheme is a rule that determines how funds are to be allocated among the investment firms based on their past performance. A reward scheme is optimal if it induces the (self-interested) firms to act in accordance with the interests of the investor. We show that an optimal reward scheme exists under quite general conditions.
(November-3 2015) Yaron Azrieli, Ohio State University.
Title: On the self-(in)stability of weighted majority rules (joint work with Semin Kim).
Abstract: A voting rule f is self-stable (Barberà and Jackson [4]) if any alternative rule g does not have sufficient support in the society to replace f, where the decision between f and g is based on the rule f itself. While Barberà and Jackson focused on anonymous rules in which all agents have the same voting power, we consider here the larger class of weighted majority rules. Our main result is a characterization of self-stability in this setup, which shows that only few rules of a very particular form satisfy this criterion. This result provides a possible explanation for the tendency of societies to use more conservative rules when it comes to changing the voting rule. We discuss self-stability in this latter case, where a different rule F may be used to decide between f and g.
Link to the paper: http://web.econ.ohio-state.edu/azrieli/papers/self-stable-web.pdf
(October-27 2015) Eilon Solan, TAU.
Title: Overtaking Optimality in Markov Decision Problems (Joint work with Janos Flesch and Arkadi Predtetchinski).
Abstract: We consider the notion of overtaking optimality as introduced in Meder et al. (2012) for Markov decision problems (MDP). For the class of deterministic MDPs, we prove the existence of pure stationary overtaking optimal strategies under both the discounted and the average payoff evaluations. Moreover, we examine logical connections between overtaking optimality and Blackwell optimality. In the class of non-deterministic MDPs, we give examples that admit no overtaking optimal strategy and discuss a number of alternative definitions of overtaking optimality.
2014 - 2015
(November-4 2014) Janos Flesch.
(November-11 2014) Yakov Babichenko.
(November-18 2014) Yannai Gonczarowski.
(November-25 2014) Ran Spiegler.
(December-2 2014) Rann Smorodinsky.
(December-9 2014) Eilon Solan.
(December-16 2014) David Lagziel.
(December-30 2014) Dov Samet.
(January-6 2015) Edi Karni.
(January-13 2015) Gaetan Fournier.
(March-10 2015) Nabil Al-Najjar.
(March-15 2015) Wolfgang Kuhle.
(March-24 2015) Yaron Azrieli.
(April-14 2015) Marco Scarsini.
(April-21 2015) Abraham Neyman.
(April-28 2015) Steven Kou.
(May-5 2015) Avinatan Hassidim.
(May-19 2015) Artyom Jelnov.
(May-26 2015) Niv Buchbinder.
(June-9 2015) Rafi Hassin.
(June-9 2015) Rafi Hassin, TAU.
Title: Non-price control of queueing systems.
Abstract: We present old and new results on controlling the equilibrium behavior in simple queueing systems. Replacing the common first-come first-served order by last-come first-served order can often increase welfare and profits and also enables simpler computation of the optimal control in the original first-come first-served queue.
(May-26 2015) Niv Buchbinder, TAU.
Title: A Primal-Dual Approach to Online Optimization Problems.
Abstract: The primal-dual method is one of the fundamental design methodologies in the areas of linear programming, combinatorial optimization, and approximation algorithms. In this talk I will provide an introduction to the area of online combinatorial optimization. I will then discuss the primal-dual method as a unified framework to the design and analysis of online algorithms. Examples for which this approach is applicable include central online problems such as paging, routing, scheduling, online set cover, and graph connectivity problems. Finally, I will show some connections between online combinatorial optimization and online learning.
(May-19 2015) Artyom Jelnov, Department of Economics and Business Administration, Ariel.
Title: Attacking the Unknown Weapons of a Possible Provocateur: How Intelligence Affects the Strategic Interaction (joint with Y.Tauman and R.Zeckhauser).
Abstract: We consider the interaction of two enemy nations. Nation 1 wants to develop a nuclear bomb (or other weapons of mass destruction). Nation 2 wants to prevent such a development through the deterrence of a threatened attack, or an actual attack if it thought the bomb was produced. Nation 2 has an intelligence system that imperfectly indicates the presence of a bomb. Nation 1, if lacking the bomb, can open its facilities to prevent an attack. A further uncertainty is that nation 2 does not know nation 1's type. He could be a Deterrer, whose prime goal is to avoid an attack, or he could be a Provocateur, who prefers an unjustified attack if he does not possess the bomb, so as to build support from inside his nation and the outside world. The game has a unique sequential equilibrium. The qualitative nature of that equilibrium depends on the parameters, the preferences, and the information conditions.
A number of initially counterintuitive results emerge. For example, it may sometimes be rational (an equilibrium strategy) for 2 to attack even though 1 does not have a bomb, and even though 2's high quality intelligence system indicates that a bomb is not present. Fortunately, intuitive explanations can be provided for all such results.
Illustrations of the model's implications are provided from the experiences of the West (as nation 2) with Saddam Hussein (as nation 1).
(May-5 2015) Avinatan Hassidim, Bar Ilan University and Google Israel.
Title: Submodular maximization under noise (joint with Yaron Singer).
Abstract: For the past decade there has been incredible progress in the theory and application of submodular optimization in computer science. In many applications, however, we do not have access to the submodular function we aim to optimize, but rather some erroneous or noisy version of it. This raises the question of whether provable guarantees are obtainable in the presence of error and noise. We provide initial answers by focusing on the question of maximizing a submodular function under noise, and show that: 1. If the noise is adversarial, no non-trivial approximation guarantee can be given. 2. For general matroids, there is a constant-factor approximation. We flesh out a special case of this result for uniform matroids, and explain how to extend it. 3. For uniform matroids with cardinality $k = \omega(\log n)$ there is a $1 - 1/e - \epsilon$ approximation algorithm. We show that no such algorithm can be obtained for the case $k = 1$.
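For context (this is the noiseless benchmark, not the talk's noisy-access algorithm; names are hypothetical), the classic greedy rule achieves the 1 - 1/e guarantee for monotone submodular maximization under a cardinality constraint, illustrated here with a coverage function:

```python
# Hypothetical sketch: greedy maximization of a monotone submodular
# function (set coverage) under a cardinality constraint k.

def coverage(chosen):
    """Number of elements covered by the union of the chosen sets."""
    covered = set()
    for s in chosen:
        covered |= s
    return len(covered)

def greedy(candidate_sets, k):
    """Repeatedly add the set with the largest marginal coverage gain."""
    chosen = []
    for _ in range(k):
        best = max(
            (s for s in candidate_sets if s not in chosen),
            key=lambda s: coverage(chosen + [s]) - coverage(chosen),
        )
        chosen.append(best)
    return chosen

sets = [{1, 2, 3}, {3, 4}, {4, 5, 6}, {1, 6}]
picked = greedy(sets, 2)  # covers all six elements here
```

Submodularity (diminishing marginal gains) is what makes this myopic rule near-optimal; the talk asks how much of that guarantee survives when each evaluation of the objective is noisy.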
(April-28 2015) Steven Kou, Columbia University.
Title: Separating Skilled and Unskilled Fund Managers by Contract Design (joint with Xuedong He and Sang Hu).
Abstract: Foster and Young (2010, Quarterly Journal of Economics) show that it is very difficult to design performance fee contracts that reward skilled fund managers while screening out unskilled fund managers. In particular, none of the standard practices, such as postponing bonuses and claw-back provisions, can separate skilled and unskilled managers. We show that if: (1) there is a liquidation boundary, meaning that the fund investors will close the fund if the fund return is bad enough to hit the boundary, and (2) the fund manager has to use his/her own money to set up a deposit to offset the potential losses of the fund investors, then skilled and unskilled fund managers can be separated. The deposit can be a combination of cash or an equity stake in the fund.
(April-21 2015) Abraham Neyman, Hebrew University.
Title: Discounting-robust equilibria of Markov decision processes and stochastic games.
Abstract: A player's discounting valuation of a stream of stage payoffs is a weighted average of his stage payoffs, where the weights are nonincreasing in the stage. A stochastic game $\Gamma$ together with a profile of discounting valuations $w$ defines a game $\Gamma_w$. We introduce the concept of discounting-robust equilibria of a discrete-time stochastic game. Its existence means that there exist approximate equilibria that are robust to small changes in the discounting valuations. It is proved that a stochastic game has a discounting-robust equilibrium if and only if it has a uniform equilibrium payoff. We show that if a stochastic game $\Gamma$ has discounting-robust equilibria then for every $\varepsilon>0$ there is a finite list of strategies and payoffs such that for every profile $w$ of discounting valuations there is a member of the list that is an $\varepsilon$-equilibrium of $\Gamma_w$.
(April-14 2015) Marco Scarsini (Luiss Guido Carli).
Title: On Information Distortions in Online Ratings.
Abstract: Consumer reviews and ratings of products and services have become ubiquitous on the Internet. This paper analyzes, given the sequential nature of reviews and the limited feedback of such past reviews, the information content they communicate to future customers. We focus on the informational setting in which customers only observe the sample mean of past reviews, and ask if customers can recover the true quality of the product based on the feedback they observe. We first analyze the benchmark setting, in which customers interpret the mean as a proxy for quality. In such a case, we show that in the long run the sample mean of reviews stabilizes, and two cases may arise. If customers are relatively homogeneous, then social learning takes place. If customers are sufficiently heterogeneous, then they consistently overestimate the underlying quality of the product in the long run. This bias stems from the selection effect associated with observing only the reviews of customers who purchase. We show, however, that if customers are sophisticated, then there exists a simple quality inference and purchasing rule that corrects for the selection bias and leads to social learning. In addition, we show that the cumulative consumer surplus losses scale with the square root of the number of customers who have considered a purchase to date, which is of the same order as when customers observe the reviews of all preceding customers. In this framework, we also analyze the externality of sophisticated customers on more naïve ones and quantify the impact that manipulated reviews may have.
(March-24 2015) Yaron Azrieli (Ohio State University).
Title: The price of `One Person, One Vote'.
Abstract: The principle of `One Person, One Vote' is viewed by many as one of the cornerstones of Democracy. But in environments where agents have heterogeneous stakes, a utilitarian social planner would typically prefer non-anonymous rules, i.e., rules that violate this principle. We consider a simple Bayesian voting environment in which agents have private valuations for the alternatives and a voting mechanism is used to determine the outcome. The price of `One Person, One Vote' is defined as the fractional reduction in expected welfare when the optimal anonymous mechanism is used relative to the optimal (not necessarily anonymous) mechanism. We analyze how this price behaves as a function of the environment (distribution of preferences). In particular, it is shown that if the inequality of stakes in the population increases, in a well-defined sense, then the price of `One Person, One Vote' is higher. Similar comparative statics hold if the uncertainty about agents' preferences increases. Our results may be useful for social planners who weigh the moral argument (or other arguments) in favor of `One Person, One Vote' against the loss of efficiency that may result from sticking to this principle.
(March-15 2015) Wolfgang Kuhle (Max Planck Institute).
Title: A Global Game with Heterogeneous Priors.
(March-10 2015) Nabil Al-Najjar (Northwestern University).
Title: Uncertainty and Hedging in Large Populations (joint work with Luciano Pomatto).
(January-13 2015) Gaëtan Fournier (Paris-Sorbonne University).
Title: Efficiency of Equilibria in Hotelling Games. (joint work with Marco Scarsini)
Abstract: We consider a Hotelling game where a finite number of retailers choose a location, given that their potential customers are distributed on a network. Retailers do not compete on price but only on location. We show that when the number of retailers is large enough, the game admits a pure Nash equilibrium, and we construct it. We then compare the equilibrium cost borne by the consumers with the cost that could be achieved if the retailers followed the dictates of a benevolent planner. We study this efficiency of equilibrium asymptotically in the number of retailers.
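To make the planner-vs-equilibrium cost comparison concrete in the simplest possible special case (the talk's model is on a general network; this sketch assumes consumers distributed uniformly on the segment [0,1], and the two location profiles below are illustrative, not the constructed equilibrium):

```python
# Average distance traveled by a uniform consumer on [0,1] to the nearest
# retailer, approximated by a midpoint grid. Illustrative sketch only.
def transport_cost(locs, grid=100_000):
    total = 0.0
    for j in range(grid):
        x = (j + 0.5) / grid                    # midpoint consumer location
        total += min(abs(x - l) for l in locs)  # buy from the nearest retailer
    return total / grid

n = 4
planner = [(2 * i - 1) / (2 * n) for i in range(1, n + 1)]  # evenly spread
clustered = [0.5] * n                                       # all at the center
print(transport_cost(planner))    # ≈ 1/(4n) = 0.0625
print(transport_cost(clustered))  # ≈ 0.25
```

Spreading the retailers evenly at the midpoints of n equal intervals minimizes total travel cost (1/(4n) here), which gives the planner benchmark against which equilibrium cost is measured.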
(January-6 2015) Edi Karni (Johns Hopkins University).
Title: Awareness Of Unawareness.
Abstract: In the wake of growing awareness, decision makers anticipate that they might acquire knowledge that, in their current state of ignorance, is unimaginable. Supposedly, this anticipation manifests itself in the decision makers' choice behavior. In this paper we model the anticipation of growing awareness, lay choice-based axiomatic foundations for a subjective expected utility representation of beliefs about the likelihood of discovering unknown consequences, and assign utility to consequences that are not only unimaginable but may also be nonexistent. In so doing, we maintain the flavor of reverse Bayesianism of Karni and Vierø (2013, 2014).
(December-30 2014) Dov Samet (Tel-Aviv University).
Title: Cracking the mystery of weak dominance or Elementary game theory simplified.
Abstract: A common informal argument shows that if it is common knowledge that players do not play strongly dominated strategies, then players can play only profiles that survive the iterative elimination of strongly dominated strategies. By providing a simple setup that formalizes this argument, we are able to state analogous results concerning weakly dominated strategies. We characterize the profiles that can be played when it is common knowledge that players do not play weakly dominated strategies. These characterizations lead us to study the notion of non-probabilistic correlated equilibrium, which is a probability-free version of correlated equilibrium.
(December-16 2014) David Lagziel (Tel-Aviv University).
Title: Failures in Second-Price Auctions.
Abstract: We examine sealed-bid second-price auctions where, with a certain probability, the bidders fail to pay their debt to the seller. In such auctions, both the seller and the bidders might gain if the bidders lower their bids relative to the equilibrium bids. Given some assumptions on the probability of default, we show that an upper bound on the bids, induced by the seller, increases the expected payoff of all the parties involved.
(December-9 2014) Eilon Solan (Tel-Aviv University).
Title: Acceptable Strategy Profiles in Stochastic Games.
Abstract: I will present a new solution concept for multiplayer stochastic games, namely, $\varepsilon$-acceptable strategy profiles. These are strategy profiles that, for every initial state, yield to each player at least his discounted max-min value up to $\varepsilon$, simultaneously for every discount factor, provided the players' discount factors are sufficiently close to 1. We prove the existence of a stationary $\varepsilon$-acceptable strategy profile in every multiplayer stochastic game. Such strategy profiles have two desirable features: they are simple (stationary) and yield to each player a reasonable payoff (though not the best payoff he can obtain).
(December-2 2014) Rann Smorodinsky (Technion).
Title: Job Security, Stability and Production Efficiency.
Abstract: We study a 2-sided matching market with a set of heterogeneous firms and workers in an environment where jobs are secured by regulation. Without job security, Kelso and Crawford have shown that stable outcomes and efficiency prevail when all workers are (weak) gross substitutes to each firm, in the sense that increases in other workers’ salaries can never cause a firm to withdraw an offer from a worker whose salary has not risen. It turns out that when job security is introduced, stability and efficiency may still prevail, even for a significantly broader class of production functions.
(November-25 2014) Ran Spiegler (Tel-Aviv University).
Title: Behavioral imputation.
Abstract: A decision maker (DM) wishes to learn a joint probability distribution p over n variables. He obtains a large number of independent observations. However, in each observation the values of some of the variables are missing, according to some independent random process. The DM wishes to extend his incomplete dataset into a fully specified, "rectangular" dataset. To do so, he employs an intuitive iterative imputation procedure, which is somewhat related to the "stochastic regression" method from the literature on statistical inference with missing data. The frequencies in the completed dataset constitute the DM's subjective belief. I characterize the subjective beliefs that emerge from this imputation procedure. When the support of the missing-values process satisfies the so-called "running intersection property", the imputation procedure always terminates successfully and produces a subjective belief with a "Bayesian network" representation, which factorizes the objective distribution p according to a perfect directed acyclic graph. When the running intersection property is violated, the procedure is aborted for some p. This result provides a foundation for various models of systematic belief distortions in the literature.
(November-18 2014) Yannai A. Gonczarowski (Hebrew University and Microsoft Research).
Title: Cascading to Equilibrium: Hydraulic Computation of Equilibria in Resource Selection Games (Joint work with Moshe Tennenholtz).
Abstract: Drawing intuition from a (physical) hydraulic system, we present a novel framework, constructively showing the existence of a strong Nash equilibrium in resource selection games with nonatomic players, the coincidence of strong equilibria and Nash equilibria in such games, and the invariance of the cost of each given resource across all Nash equilibria. Our proofs allow for explicit calculation of Nash equilibrium and for explicit and direct calculation of the resulting (invariant) costs of resources, and do not hinge on any fixed-point theorem, on the Minimax theorem or any equivalent result, on the existence of a potential, or on linear programming. A generalization of resource selection games, called resource selection games with I.D.-dependent weighting, is defined, and the results are extended to this family, showing that while resource costs are no longer invariant across Nash equilibria in games of this family, they are nonetheless invariant across all strong Nash equilibria, drawing a novel fundamental connection between group deviation and I.D.-congestion. A natural application of the resulting machinery to a large class of constraint-satisfaction problems is also described.
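The hydraulic intuition can be mimicked numerically in the simplest setting of nonatomic players splitting a unit mass over parallel resources with strictly increasing cost functions: at equilibrium, all used resources sit at a common cost "water level", which bisection can locate. This is only an illustrative sketch under those assumptions, not the paper's construction:

```python
# Find the common equilibrium cost level L for nonatomic congestion on
# parallel resources: resource i absorbs load inv_costs[i](L) when its cost
# equals L (zero if L is below its cost at zero load). Bisect on L until the
# absorbed loads sum to the total mass. Illustrative sketch only.
def equilibrium_loads(inv_costs, mass=1.0, lo=0.0, hi=100.0, iters=60):
    for _ in range(iters):
        mid = (lo + hi) / 2
        if sum(f(mid) for f in inv_costs) < mass:
            lo = mid          # water level too low: raise it
        else:
            hi = mid          # too much mass absorbed: lower it
    L = (lo + hi) / 2
    return L, [f(L) for f in inv_costs]

# Example: costs c1(x) = 1 + x and c2(x) = 2x, with inverses max(L-1, 0), L/2.
L, loads = equilibrium_loads([lambda L: max(L - 1, 0.0), lambda L: L / 2])
print(round(L, 4), [round(x, 4) for x in loads])  # level 4/3, loads 1/3 and 2/3
```

The invariance result in the abstract corresponds to the fact that this water level L (and hence each resource's cost) is pinned down uniquely once the cost functions are increasing.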
(November-11 2014) Yakov Babichenko (Technion).
Title: Query complexity of approximate Nash equilibria.
Abstract: We study the query complexity of approximate notions of Nash equilibrium in games with a large number of players. Our main result states that for n-player binary-action games and for constant ε, the query complexity of an ε-well-supported Nash equilibrium is exponential in n. As a consequence of this result, we get an exponential lower bound on the rate of convergence of adaptive dynamics to approximate Nash equilibria.
(November-4 2014) Janos Flesch (Maastricht University).
Title: Subgame-perfect equilibrium in a class of perfect information games, where each player controls one state (Joint work with Jeroen Kuipers, Gijs Schoenmakers and Koos Vrieze).
Abstract: We consider a class of multi-player games that are played as follows. Player 1 starts and he has two options: he can decide to stop the game or he can choose another player, possibly only from a subset of his opponents. In the latter case, the chosen player faces a similar decision: he can stop the game, or choose one of his opponents, maybe even player 1 again. This process is repeated until a player stops the game. A payoff to each player is determined as follows: If no one ever stops the game, then the payoff to each player is zero, and otherwise the payoff to each player only depends on the identity of the player who stopped the game.
We construct approximate subgame-perfect equilibria for these games. This is particularly challenging when the payoffs at stopping can be negative, because it makes the strategic situation fairly complex. The proof is combinatorial, and uses an iterative elimination procedure that terminates in a finite number of steps. We also discuss possible consequences of this result for the existence of approximate subgame-perfect equilibria in more general classes of games.
(June-24 2014) Paul Milgrom (Stanford University).
This week's seminar will be held in Room 309, Schreiber building, at 12:00 PM.
Title: Deferred-Acceptance Auctions and Radio Spectrum Reallocation.
Abstract: Deferred-acceptance auctions choose allocations by an iterative process of rejecting the least attractive bid. These auctions have distinctive computational and incentive properties that make them suitable for application in some challenging environments, such as the planned US auction to repurchase television broadcast rights. For any set of values, any deferred-acceptance auction with “threshold pricing” is weakly group strategy-proof, can be implemented using a clock auction, and leads to the same outcome as the complete-information Nash equilibrium of the corresponding paid-as-bid auction. A paid-as-bid auction with a non-bossy bid-selection rule is dominance solvable if and only if it is a deferred-acceptance auction.
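As a concrete toy instance of the iterative-rejection idea (a reverse auction to purchase k items; the framework in the talk is far more general, and the bid data below is hypothetical):

```python
# Deferred-acceptance reverse auction for buying k items: iteratively reject
# the least attractive (highest) ask until only k remain. With threshold
# pricing, each winner is paid the largest ask it could have submitted and
# still won, which here is the lowest rejected ask. Illustrative sketch;
# assumes more than k bids are submitted.
def deferred_acceptance_reverse(bids, k):
    active = dict(bids)                        # bidder -> ask
    while len(active) > k:
        reject = max(active, key=active.get)   # least attractive remaining bid
        del active[reject]
    threshold = min(b for i, b in bids.items() if i not in active)
    return set(active), threshold              # every winner is paid threshold

bids = {"A": 10, "B": 40, "C": 25, "D": 15}
winners, price = deferred_acceptance_reverse(bids, 2)
print(winners, price)  # winners A and D, each paid 25
```

Paying the rejection threshold rather than the winners' own asks is what makes truthful reporting safe: a winner cannot change its payment by bidding lower.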
(June-8 2014) Aloisio Araujo (IMPA, Rio de Janeiro, Brazil).
Title: General Equilibrium, Risk Loving, Ambiguity, and Volatility.
Abstract: What is the role of risk loving in "smooth" consumption at equilibrium? Can risk sharing be characterized at equilibrium? What is the role of rank-dependency and ambiguity loving? The difficulty in addressing these questions lies in the lack of convexity, which may prevent the existence of equilibrium. Related questions are whether volatility tends to increase and whether regulation affects welfare and volatility.
We prove that with sufficient aggregate risk, equilibrium exists, even with a finite number of agents, for a very large class of preferences. For rank-dependent preferences, there is risk-sharing for these equilibria.
We also provide robust examples in which:
1. Risk-taking decreases the volatility and improves welfare.
2. Regulation increases volatility and reduces welfare.
(May-29 2014) Fabien Gensbittel (Toulouse).
Title: Continuous-time limit of dynamic games with incomplete information and a more informed player.
Abstract: We study a model of a two-player zero-sum dynamic game with incomplete information on one side, in which the players receive exogenous information about the payoff-relevant state variable during the play. We assume that one of the players is always more informed than his opponent and that signals observed during the play correspond to the observation of some continuous-time Markov process at some fixed times on a finite grid. We show the existence of a limit value as the players play more and more frequently, and we provide two different characterizations for this value: through a stochastic optimization problem and through a variational inequality, related to some second-order Hamilton-Jacobi equation in some particular cases.
(May-20 2014) Roee Teper (University of Pittsburgh).
Title: Subjective independence and concave expected utility (joint with Ehud Lehrer).
Abstract: When a potential hedge between alternatives does not reduce the exposure to uncertainty as perceived by the decision maker, we say that she considers these alternatives as similar. While similarity in the current context seems like a subjective notion and should differ across decision makers, thus far it has been treated in an objective manner. We take the former, subjective, approach. Similarity of alternatives can be recovered through a property (axiom) of the individual’s preferences referred to as subjective codecomposable independence. This axiom characterizes a class of event-separable models and provides a behavioral foundation to Concave Expected Utility preferences. Stemming from subjective similarity, the flexibility of Concave Expected Utility allows it to accommodate recent paradoxes posed by Machina to Choquet expected utility, which is characterized by objective similarity.
(May-13 2014) Michal Feldman (Tel-Aviv University).
Title: A Unifying Hierarchy of Valuations with Complements and Substitutes (Joint work with Uriel Feige, Rani Izsak, Nicole Immorlica, Brendan Lucier, and Vasilis Syrgkanis).
Abstract: We introduce a new hierarchy over monotone set functions, that we refer to as MPH (Maximum over Positive Hypergraphs), and study its applications to valuation functions for agents in multi-item markets. The highest level of the hierarchy, MPH-$m$ (where $m$ is the total number of items), captures all monotone functions. The lowest level, MPH-$1$, captures all monotone submodular functions, and more generally, the class of functions known as XOS. Every monotone function that has a positive hypergraph representation of rank $k$ (in the sense defined by Abraham, Babaioff, Dughmi and Roughgarden [EC 2012]) is in MPH-$k$. Every monotone function that has supermodular degree $k$ (in the sense defined by Feige and Izsak [ITCS 2013]) is in MPH-$(k+1)$. In both cases, the converse direction does not hold, even in an approximate sense. We present additional results for the expressiveness of MPH-$k$, some of which can be regarded as surprising. We also present intriguing open questions in this respect.
One can obtain good approximation ratios for some natural optimization problems, provided that the functions are required to lie in low levels of the MPH hierarchy. We present two such applications. One shows that the maximum welfare problem can be approximated within a ratio of $k+1$ if all players hold valuation functions in MPH-$k$. The other is an upper bound of $2k$ on the price of anarchy of simultaneous first-price auctions.
Being in MPH-$k$ can be shown to involve two requirements: one is monotonicity and the other is a certain requirement that we refer to as PLE (Positive Lower Envelope). Removing the monotonicity requirement, one obtains the PLE hierarchy over all non-negative set functions (whether monotone or not), which can be fertile ground for further research.
(April-1 2014) Yannai A. Gonczarowski (Hebrew University and Microsoft Research).
Title: Manipulation of Stable Matchings using Minimal Blacklists.
Abstract: Gale and Sotomayor [1985] have shown that in the Gale-Shapley matching algorithm [1962], the proposed-to side W (referred to as women there) can strategically force the W-optimal stable matching as the M-optimal one by truncating their preference lists, each woman possibly blacklisting all but one man. As Gusfield and Irving have already noted in 1989, no results are known regarding achieving this feat by means other than such preference-list truncation, i.e. by also permuting preference lists.
We answer Gusfield and Irving's open question by providing tight upper bounds on the number of blacklists and their combined size that are required by the women to force a given matching as the M-optimal stable matching, or, more generally, as the unique stable matching. Our results show that the coalition of all women can strategically force any matching as the unique stable matching, using preference lists in which at most half of the women have nonempty blacklists, and in which the average blacklist size is less than 1. This allows the women to manipulate the market in a manner that is far more inconspicuous, in a sense, than previously realized. When there are fewer women than men, we show that in the absence of blacklists for men, the women can force any matching as the unique stable matching without blacklisting anyone, while when there are more women than men, each to-be-unmatched woman may have to blacklist as many as all men. Together, these results shed light on the question of how much, if at all, given preferences for one side a priori impose limitations on the set of stable matchings under various conditions. All of the results in this paper are constructive, providing efficient algorithms for calculating the desired strategies.
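For reference, a minimal sketch of the men-proposing Gale-Shapley algorithm that these manipulations target (assuming complete preference lists; a woman's blacklist would correspond to omitting men from her list):

```python
# Men-proposing deferred acceptance: each free man proposes down his list;
# a woman holds her best proposer so far and rejects the rest. The outcome
# is the M-optimal stable matching. Assumes complete preference lists.
def gale_shapley(men_prefs, women_prefs):
    rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
    next_choice = {m: 0 for m in men_prefs}   # index of next woman to propose to
    engaged = {}                              # woman -> man currently held
    free = list(men_prefs)
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:  # w prefers the new proposer
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)                      # w rejects m
    return {m: w for w, m in engaged.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m2", "m1"], "w2": ["m1", "m2"]}
print(gale_shapley(men, women))  # {'m2': 'w1', 'm1': 'w2'}
```

In this tiny instance both men propose to w1 first; she holds m2 (whom she prefers), so m1 ends up with w2, which is the M-optimal stable matching here.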
(March-25 2014) Yehuda John Levy (University of Oxford).
Title: Rational Learning Need Not Lead to Exact Equilibria.
Abstract: A long-standing open question raised in Kalai and Lehrer (1993) is whether or not the play of a subjective equilibrium of the infinitely repeated game, in the rational learning model introduced there, must eventually resemble play of an exact equilibrium, and not just play of approximate equilibria as demonstrated there. This paper shows, by example, that play of subjective equilibria may remain distant from, in fact mutually singular with respect to, play of exact equilibria.
(March-11 2014) Shai Vardi, Tel Aviv University.
Title: Local Computation Mechanism Design - Stable Matching (with Avinatan Hassidim and Yishay Mansour).
Abstract: We introduce the notion of Local Computation Mechanism Design - designing game theoretic mechanisms which run in polylogarithmic time and space. Local computation mechanisms reply to each query in polylogarithmic time and space, and the replies to different queries are consistent with the same global feasible solution. In addition, the computation of the payments is also done in polylogarithmic time and space. Furthermore, the mechanisms need to maintain incentive compatibility with respect to the allocation and payments.
We present local computation mechanisms for a variety of classical game-theoretical problems, focusing on stable matching.
We show that our local results have general implications. Specifically, we show that when the men’s preference lists are bounded, we can achieve an arbitrarily good approximation to the stable matching within a fixed number of iterations of the Gale-Shapley algorithm.
(March-4 2014) Tai-Wei Hu, Northwestern University.
Title: Information aggregation in a large two-stage market game (joint work with Neil Wallace).
Abstract: A market-game mechanism is studied in a two-divisible-good, pure-exchange setting with potential information-aggregation. The mechanism is simple: agents’ actions are quantities and outcomes are determined by simple algorithms that do not depend on the details of the economy. There are two stages. First, agents make offers, which are provisional for all but a small, randomly selected group. Then, those offers are announced, and everyone else gets to make new offers with payoffs determined by a Shapley-Shubik market game. For a finite and large number of players, there exists an almost ex-post efficient equilibrium. Conditions for uniqueness are provided.
(February-25 2014) Gilad Bavly, Tel Aviv University.
Title: How to Gamble Against All Odds (joint work with Ron Peretz).
Abstract: Player 0, the cousin of the casino owner, is allowed to bet sums of money only within a set A (a subset of the real numbers). The regular casino players 1,2,3,... (countably many players) are allowed to make bets only within another set B. Player 0 announces her betting strategy first; then the regular players announce theirs. Now the casino owner wants to fix an infinite sequence of Reds and Blacks, such that player 0 makes infinite gains, while every regular player gains only a finite amount. Can this be done when, e.g., A is the set of all even integers, and B the set of all odds? We present some sufficient and necessary conditions on the pair of sets A and B.
(January-12 2014, 14:00-15:00, Kaplun 324) Bruno Ziliotto.
Title: Zero-sum repeated games: counterexamples to the existence of the asymptotic value and the conjecture maxmin=lim v(n).
Abstract: Mertens (1986) proposed two general conjectures about repeated games. First, in any two-person zero-sum repeated game, the asymptotic value exists, and second, when Player 1 is more informed than Player 2, in the long run Player 1 is able to guarantee the limit value of the n-stage game.
We disprove these two long-standing conjectures by providing an example of a zero-sum repeated game with public signals and perfect observation of the actions, where neither the value of the lambda-discounted game nor the value of the n-stage game converges, when lambda goes to 0 or n goes to infinity, respectively.
The aforementioned example involves seven states, two actions and two signals for each player. Remarkably, players observe the payoffs, and play in turn.
(January-5th 2014) Nabil Al-Najjar, Northwestern University.
The seminar will be held in Schreiber 209.
Title: Aggregative utility in large populations (joint with Luciano Pomatto).
(December-31st 2013) Yaron Azrieli, Ohio State University.
Title: Incentives in experiments: A theoretical analysis (with Christopher Chambers and PJ Healy).
Abstract: The purpose of an experiment is to observe choice in a controlled setting. When subjects are given multiple decisions, however, their choices in one decision may be distorted by the choices made in others. This paper formulates a simple abstract model of an experiment and defines a notion of Incentive Compatibility (IC) of the associated payment mechanism. Assuming that preferences over lotteries satisfy first-order stochastic dominance (and nothing else), we characterize the class of IC payment mechanisms. In particular, it follows from our characterization that in most experiments the only IC mechanism is the Random Problem Selection mechanism, in which one decision problem is randomly chosen for payment.
(December-17th 2013) Eran Hanany, Tel Aviv University
Title: Subgame Perfect Farsighted Stability.
Abstract: Farsighted game theoretic solution concepts have been shown to provide insightful results in many applications. However, we provide simple examples demonstrating that the existing solution concepts are not sufficiently farsighted, and in this paper we propose a new farsighted solution concept, the Subgame Perfect Consistent Set (SPCS). Based on von Neumann Morgenstern type consistency and subgame perfect equilibrium, this solution is shown to lead to more satisfactory predictions in many situations as compared to existing myopic or farsighted solution concepts. Strikingly, the SPCS is shown to always achieve Pareto efficiency in farsighted normal form games. This result is demonstrated in various oligopolistic settings, and is shown to imply, for example, that in contrast with existing farsighted solution concepts, players who follow the SPCS reasoning are always able to share the monopolistic profit in Bertrand and Cournot competition negotiations, and are always able to achieve coordination and Pareto efficiency in decentralized supply chain contracting, even when they are unable to form coalitions.
(December-3rd 2013) Moshe Haviv, The Hebrew University.
Title: Regulating Arrivals to a Queue.
Abstract: Selfish customers do not necessarily join a queue at a socially optimal rate. Hence, queueing systems may call for regulation. For customers arriving at an unobservable queue who are homogeneous with respect to waiting costs and service rewards, we show how queueing systems can be regulated by imposing an entry fee, a holding fee (i.e., a fee based on time in the system), or a service fee (i.e., a fee based on the required service time) in the case where customers know their service requirements. We start with a unified approach and state the socially optimal fees. We then show that customers are always worse off under a flat entry fee in comparison with holding and service fees. As for holding vs. service fees, the answer depends on the queueing regime and/or the service length itself. For example, under the first-come first-served regime, service fees are preferred by all. We also review the case where customers know only the distribution of service times, but not their actual requirements.
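In the classical unobservable M/M/1 special case (service rate mu, reward R per completed service, waiting cost C per unit of time in the system), the selfish and socially optimal arrival rates, and the entry fee that aligns them, have well-known closed forms; a numeric sketch, assuming this textbook setting rather than the talk's more general framework:

```python
# Unobservable M/M/1: expected time in system at arrival rate lam is
# 1/(mu - lam). Selfish customers join until the net benefit R - C/(mu - lam)
# hits zero; the planner maximizes welfare lam*R - lam*C/(mu - lam).
# Illustrative parameter values.
from math import sqrt

mu, R, C = 1.0, 10.0, 1.0
lam_eq  = mu - C / R                # individual equilibrium: R = C/(mu - lam)
lam_opt = mu - sqrt(C * mu / R)     # first-order condition of the planner
fee     = R - C / (mu - lam_opt)    # entry fee making lam_opt the equilibrium
print(lam_eq, lam_opt, fee)         # 0.9, ~0.684, ~6.838
```

The equilibrium rate exceeds the socially optimal one (customers ignore the congestion externality they impose), and charging the stated entry fee restores the optimum, which is the regulation logic the abstract generalizes to holding and service fees.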
(November-26th 2013) Itai Arieli, The Technion.
Title: Inferring Beliefs from Actions (joint with M. Muller-Frank).
Abstract: This paper considers single-agent decision making under uncertainty and addresses the question of the degree to which the private information of an agent is revealed through his optimal action. We show that the agent's optimal action reveals his posterior distribution for all but a negligible subset (formally, a meager subset) of the space of continuous utility functions if and only if the set of actions contains no isolated points. If the action set is uncountable (not necessarily perfect), then there exists a continuous utility function such that actions reveal beliefs. On the basis of the single-agent belief revelation result we establish information aggregation results in the framework of repeated interaction in social networks and in the sequential social learning model.
(November-19th 2013) Igal Milchtaich, Bar-Ilan University.
Title: Time-Differentiated Monopolies (joint with A. Glazer and R. Hassin)
Abstract: We consider sequential competition among sellers, with different consumers desiring the good at different times. Each consumer could buy from a later seller. Each seller recognizes that future sellers are potential competitors, and therefore does not necessarily set his monopoly price, which is the price he would set if consumers could only buy from the first seller they encounter. We show, however, that whether an equilibrium price is indeed lower than the monopoly price of each seller depends on the form of consumers’ impatience. With time discounting, this is so. But when impatience reflects decreasing valuations, the equilibrium price may (and, in certain cases, must) coincide with the sellers’ monopoly prices, which means that their market power is not diminished by competition with future sellers. Even in this case, however, prices below the monopoly price may exist if sellers are aware of the history of sales and may react to previous price changes.
(November-12th 2013) Victoria Kreps, St. Petersburg Institute for Econ. and Math.
Title: Repeated games with asymmetric information modeling financial markets with two risky assets (joint with Victor Domansky).
Abstract: We consider multistage bidding models where shares of two types of risky assets are traded between two agents that have different information on the liquidation prices of the traded assets. These prices are random integer variables that are determined by the initial chance move according to a probability distribution p over the two-dimensional integer lattice that is known to both players. Player 1 is informed of the prices of both types of shares, but Player 2 is not. The bids may take any integer values.
The model of n-stage bidding is reduced to a zero-sum repeated game with lack of information on one side. We show that, if liquidation prices of shares have finite variances, then the sequence of values of n-step games is bounded. This makes it reasonable to consider the bidding of unlimited duration that is reduced to the infinite game G1(p). We give the solutions for these games. The optimal strategy of Player 1 generates a random walk of transaction prices. But unlike the case of one-type assets (Domansky, 2007), (Domansky, Kreps, 2009), the symmetry of this random walk is broken at the final stages of the game (Domansky, Kreps, 2013).
(November-5th 2013) Ezra Einy, Ben Gurion University.
Title: Tullock Contests with Asymmetric Information.
Abstract: We show that under standard assumptions a Tullock contest with incomplete (and possibly asymmetric) information has a pure strategy Bayesian Nash equilibrium. The existence result extends to a very broad class of generalized Tullock contests, characterized by certain mild assumptions on the success function. Next, we study common-value Tullock contests. We show that in equilibrium the expected payoff of a player is greater than or equal to that of any other player with less information, i.e., an information advantage is rewarded. Moreover, if there are only two players and one of them has an information advantage, then in the unique equilibrium both players exert the same expected effort, although the less informed player wins the prize more frequently. These latter properties do not extend to contests with more than two players. Interestingly, players may exert more effort in a Tullock contest than in an all-pay auction.
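For intuition about the success function underlying the talk, here is the simplest textbook case: a two-player complete-information Tullock contest with the lottery success function p_i = x_i / (x_i + x_j) and prize V, whose well-known symmetric equilibrium effort is V/4. A minimal numerical sketch; this symmetric complete-information setting is an illustrative assumption, not the talk's incomplete-information model:

```python
# Two-player complete-information Tullock contest with lottery success
# function p_i = x_i / (x_i + x_j) and prize V.  The known symmetric
# equilibrium effort is x* = V/4; we check by brute-force grid search
# that x* is a best response to itself.
V = 1.0

def payoff(x_i, x_j):
    if x_i + x_j == 0:
        return V / 2  # tie-breaking convention: split the prize at zero effort
    return V * x_i / (x_i + x_j) - x_i

def best_response(x_j, steps=100000):
    # search efforts on a uniform grid over [0, V]
    best_x, best_u = 0.0, payoff(0.0, x_j)
    for k in range(1, steps + 1):
        x = k * V / steps
        u = payoff(x, x_j)
        if u > best_u:
            best_x, best_u = x, u
    return best_x

br = best_response(V / 4)
print(br)  # 0.25: V/4 is a best response to itself
```

The payoff function is strictly concave in own effort, so the grid search recovers the analytic best response sqrt(V * x_j) - x_j exactly at the grid point V/4.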
(October-29th 2013) Aviad Heifetz, Open University.
Title: Robust Multiplicity with a Grain of Naiveté (joint work with Willemien Kets).
Abstract: Weinstein and Yildiz (2007) showed that, surprisingly, in games with incomplete information that satisfy a richness condition, players generically have a unique rationalizable action. We show that this result does not carry over to environments where players may have even a tiny "grain of naiveté", i.e. with a (possibly transfinite) sequence of mutual suspicions that some player might eventually have a finite depth of reasoning. Our results demonstrate that both uniqueness and multiplicity are robust phenomena in this richer environment. In particular, if we do not insist on players having common belief in the event that players have an infinite depth of reasoning (as in standard Harsanyi type spaces), then there need not be a discontinuity in rationalizable behavior when perturbing a complete-information game to a game where the payoffs are only approximate common belief.
(October-22nd 2013) Isaac Meilijson, Tel Aviv University.
Title: Preference for safety under the Choquet model: in search of a characterization.
Abstract: Victor prefers safety more than Ursula if whenever Ursula prefers a constant to an uncertain act, so does Victor. This paradigm, whose Expected Utility (EU) version is Arrow & Pratt's more risk aversion concept, will be studied in Schmeidler’s Choquet Expected Utility (CEU) model. Unlike the EU case, preference for safety versus dichotomous acts does not imply preference for safety over general acts.
The treatment, by Michèle Cohen and the speaker, preserves the flavor of the "more pessimism than greediness" characterization of monotone risk aversion by Chateauneuf, Cohen & Meilijson in the Rank-Dependent Utility (RDU) model and its extension by Grant & Quiggin to CEU.
A connection with multiple-chip betting in Dubins & Savage roulette proves useful for the analysis.
(January-15th 2013) John Levy, The Hebrew University.
Title: A Cantor Set of Games with No Shift-Homogeneous Equilibrium Selection.
Abstract: We construct a continuum of games on a countable set of players that does not possess a measurable equilibrium selection that satisfies a natural homogeneity property. The explicit nature of the construction yields counterexamples to the existence of equilibria in models with overlapping generations and in games with a continuum of players.
(January-8th 2013) Xavier Venel, Tel Aviv University.
Title: A distance for probability spaces and long-term values in Markov Decision Processes.
Abstract: Given a finite set K, we denote by X the set of probabilities on K and by Z the set of Borel probabilities on X with finite support. Studying a Markov Decision Process with partial information on K naturally leads to a Markov Decision Process with full information on X. We introduce a new metric d* on Z such that the transitions become 1-Lipschitz from (X, ||.||_1) to (Z,d*). We prove several results about d_* and characterize the limit values in Partial Observation MDP with finitely many states. Moreover, we show an existence result of a generalized notion of uniform value when the number of stages is large enough.
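The reduction the abstract mentions, from a partially observed process on K to a fully observed MDP on the belief space X, is the standard Bayesian belief-update construction. A minimal sketch; the transition kernel `trans`, the signal kernel `obs`, and all the numbers in them are made-up illustrative values, not from the talk:

```python
# One step of the belief update in a POMDP on a finite state set K:
# after taking action a and observing signal s, the prior belief b over K
# is pushed through the Markov transition and then conditioned by Bayes' rule.
def belief_update(b, a, s, trans, obs):
    # b: {state: prob}; trans[a][k][k2]: transition prob k -> k2 under a;
    # obs[a][k2][s]: prob of observing signal s in the new state k2 under a.
    unnorm = {
        k2: obs[a][k2][s] * sum(b[k] * trans[a][k][k2] for k in b)
        for k2 in b
    }
    z = sum(unnorm.values())
    if z == 0:
        raise ValueError("signal s has zero probability under belief b")
    return {k2: p / z for k2, p in unnorm.items()}

# Tiny two-state example with hypothetical kernels.
trans = {"a": {0: {0: 0.9, 1: 0.1}, 1: {0: 0.2, 1: 0.8}}}
obs = {"a": {0: {"lo": 0.8, "hi": 0.2}, 1: {"lo": 0.3, "hi": 0.7}}}
posterior = belief_update({0: 0.5, 1: 0.5}, "a", "hi", trans, obs)
print(posterior)
```

The posterior is again a point of X, which is why the partially observed problem becomes a fully observed MDP whose states are beliefs.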
(January-1st 2013) Ron Peretz, LSE.
Title: Effective betting with restricted wagers.
Abstract: A sequence of players enters the Casino. Player 1 declares her betting strategy, a function from finite histories of Heads and Tails to real-valued wagers. Then the rest of the players 2,3,... (countably many of them) declare their strategies, but they are restricted to integer-valued wagers. The Casino wants Player 1 to win while all the others lose. That is, the Casino should choose a sequence of Heads and Tails so that the limit of Player 1's capital is infinite while everyone else's is finite. Is it possible? Bienvenu et al. (2012) show that it is, using the Baire category theorem. Their proof applies even when Players 2,3,... are allowed to wager any real numbers from the set V={a: |a|>1 or a=0}. They ask whether it is possible to separate V and Z in a similar way. We answer "yes" and address several similar problems.
(December-25th 2012) Roee Teper, University of Pittsburgh.
Title: Endowment as a Blessing (Joint with Sivan Frenkel and Yuval Heller).
Abstract: Experimental evidence and field data suggest that agents hold two seemingly unrelated biases: failure to account for the fact that the behavior of others reflects their private information (“winner’s curse”), and a tendency to value a good more once it is owned (“endowment effect”). In this paper we propose that these two phenomena are closely related: the biases fully compensate for each other in various economic interactions, and induce an “as-if rational” behavior. We pay specific attention to barter trade, of the kind that was common in prehistoric societies, and suggest that the endowment effect and the winner’s curse could have jointly survived natural selection. Our results suggest that studying behavioral biases jointly rather than one at a time can deliver new insights about the role of behavioral biases in economic environments.
(December-18th 2012) Ron Holzman, Technion.
Title: Impartial nominations for a prize (joint work with Herve Moulin).
Abstract: We consider rules that allow a set of agents to elect one of them to receive a prize (or undertake a task). The main requirement is impartiality: no agent should ever be able, by changing his message, to affect his own election. We construct such rules that are non-trivial and also obey some common social choice requirements. However, our main impossibility result states that if individual messages are nominations (every agent nominates one other agent for the prize), then no impartial rule exists that simultaneously respects positive unanimity (an agent nominated by everyone else always wins) and negative unanimity (an agent whom no one nominates never wins).
(December-11th 2012) Gilad Bavly, Tel Aviv University.
Title: Elasticity of Games.
Abstract: We develop an elasticity index of a strategic game. The index measures the robustness of the set of rational outcomes of a game. The elasticity index of a game is the maximal ratio between the change of the rational outcomes and the size of an infinitesimal perturbation. The perturbation is on the players’ knowledge of the game.
The elasticity of a strategic game is a nonnegative number. A small elasticity is indicative of the robustness of the rational outcomes (for example, if there is only one player the elasticity is 0), and a large elasticity is indicative of non-robustness. For example, the elasticity of the (normalized) n-stage finitely repeated prisoner’s dilemma is at least exponential in n, as is the elasticity of the n-stage centipede game and the n-ranged traveler’s dilemma.
The concept of elasticity enables us to look from a different perspective at Neyman’s (1999) repeated games when the number of repetitions is not commonly known, and Aumann’s (1992) demonstration of the effect of irrationality perturbations.
(December-4th 2012) Xu Zibo, The Hebrew University.
Title: Evolutionary stability in general extensive-form games of perfect information.
Abstract: We consider a basic dynamic evolutionary model with rare mutation and a best-reply (or better-reply) selection mechanism. A state is evolutionarily stable if its long-term relative frequency of occurrence is bounded away from zero as the mutation rate decreases to zero. We prove that, for all finite extensive-form games of perfect information, only Nash equilibria are evolutionarily stable. We show that, in games where a player may play at more than one node along some path, even when the populations increase to infinity, there may be some evolutionarily stable states which are not part of the backward induction equilibrium component. We give a sufficient condition for evolutionary stability and show how much extra value is needed in the terminal payoffs to make an equilibrium evolutionarily stable.
(November-27th 2012) Anna Khmelnitskaya, Saint-Petersburg State University.
Title: Two solution concepts for TU games with directed communication structure.
Abstract: In classical cooperative game theory it is assumed that any coalition of players may form. However, in many practical situations the collection of feasible coalitions is restricted by some social, economical, hierarchical, communicational, or technical structure. The study of TU games with limited cooperation introduced by means of undirected communication graphs was initiated by Myerson (1977).
In our study we consider directed graph games in which all players are partially ordered and possible communication via bilateral agreements between participants is represented by a directed graph (digraph). The approach to the solution for digraph games depends on the interpretation of a directed communication structure. If it is assumed that a digraph represents a flow situation in which some links may merge while others split into several separate ones, we restrict our consideration to the class of cycle-free digraph games (directed cycles are excluded, but undirected cycles are not). We introduce web-type values, in particular the tree value, for cycle-free digraph games axiomatically, each one with respect to a chosen coalition of players that is assumed to be an anti-chain in the digraph and is considered as a management team. We provide the explicit formula representation and simple recursive algorithms to calculate these values, and we study their efficiency and stability. We also define the average web value as the average of the web values over all management teams in the digraph. In general these values are not component efficient. As an application, we consider the water distribution problem of a river with multiple sources, a delta, and possibly islands.
Another possible interpretation of a communication digraph is the assumption that a digraph represents the subordination of players such that after each player any of his subordinates may follow as long as this does not hurt the total subordination among the players prescribed by the digraph. In this case we define a covering tree for a digraph as a tree that keeps the subordination of players prescribed by the digraph but in general is not a spanning tree. We introduce the average covering tree value as the average of the tree values over all covering trees in the digraph. The average covering tree value appears to be component efficient.
(November-22nd 2012) Andrey Bernstein, Technion.
Title: Approachability in Repeated Games: Algorithms and Refinements.
Abstract: The theory of approachability, introduced by Blackwell (1956), provides fundamental results on repeated games with vector-valued payoffs, which have proved useful in a variety of applications, from repeated games with incomplete information to on-line learning algorithms. Given a repeated game with vector payoffs, a target set S is approachable by a designated player (the agent) if he can ensure that the average payoff vector converges to that set no matter what the opponent does. Blackwell provided two sets of necessary and sufficient conditions for a convex set to be approachable. The first (primal) condition is a geometric separation condition, while the second (dual) condition requires that the set be non-excludable, namely that for every mixed action of the opponent there exists a mixed action of the agent (a response) such that the resulting expected payoff belongs to S. Existing approachability algorithms rely on the primal condition and require a projection direction from a given point to S to be computed at each stage. In the first part of the talk, we propose an approachability algorithm for convex target sets that uses Blackwell's dual condition. Thus, rather than a projection, the algorithm requires computing a response to a certain action of the opponent at each stage.
In the second part of the talk, we consider a refinement of the classical approachability problem. The notion of approachability accommodates a worst-case scenario with respect to the opponent’s strategy, as it requires the target set to be approachable for *any* strategy of the opponent. However, as the game unfolds, it may turn out that the opponent’s choices of actions are limited in some way. If these restrictions were known in advance, the agent could possibly ensure convergence of the average reward to some desired subset of the original set S, or even approach a set S that is not approachable in general. We formulate appropriate goals for opportunistic approachability algorithms, in the sense that they may exploit such limitations on the opponent’s action sequence in an on-line manner, without knowing them beforehand. In particular, we propose a class of algorithms that use a response to a calibrated forecast of the opponent’s actions, and which are opportunistic in the above-mentioned sense.
The utility of the proposed algorithms is demonstrated by applying them to some generalized regret minimization problems, in particular regret minimization with side constraints, where the computation of projections is generally complex but a response is readily obtainable. Moreover, in this problem the best-reward-in-hindsight is not attainable in general, only its convex relaxation. Our opportunistic algorithms applied to this problem do attain the best-reward-in-hindsight if the opponent’s play happens to be stationary (either in terms of its mixed actions or the empirical frequencies of its pure actions).
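For intuition about Blackwell approachability in the regret-minimization setting the talk mentions, here is regret matching (Hart and Mas-Colell), a well-known instance of approachability applied to the target set S = {regret vectors ≤ 0}. The game (matching-pennies payoffs for the row player) and the fixed biased opponent are illustrative assumptions, not details from the talk:

```python
import random

# Regret matching: play each action with probability proportional to its
# positive cumulative regret.  The average regret vector then approaches
# the negative orthant, a Blackwell approachability guarantee, at rate
# O(1/sqrt(T)).  Payoffs and opponent behavior below are made up.
random.seed(0)
A = [[1, -1], [-1, 1]]  # row player's payoff matrix (matching pennies)
cum_regret = [0.0, 0.0]

def choose(cr):
    pos = [max(r, 0.0) for r in cr]
    if sum(pos) == 0:
        return random.randrange(2)  # arbitrary action when no positive regret
    return 0 if random.random() < pos[0] / sum(pos) else 1

T = 20000
for _ in range(T):
    i = choose(cum_regret)
    j = 0 if random.random() < 0.7 else 1  # opponent biased toward action 0
    for a in range(2):
        # regret of not having played a instead of i at this stage
        cum_regret[a] += A[a][j] - A[i][j]

avg_regret = max(cum_regret) / T
print(avg_regret)
```

Against this stationary opponent the row player's empirical play concentrates on its best response, and the printed average regret is small, shrinking like 1/sqrt(T).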
(November-6th 2012) Victor Domansky and Victoria Kreps.
Title: Repeated games with asymmetric information modeling multistage bidding for risky assets.
Abstract: We investigate a discrete variant of the multistage bidding model for risky assets (shares) introduced by De Meyer and Moussa Saley (2002) to analyze the evolution of prices in financial markets with asymmetric information. In contrast to the De Meyer and Moussa Saley model, only integer prices and bids are admissible. We model the bidding as zero-sum repeated games with incomplete information, with countable sets of possible prices and bids. We show that, if the liquidation price of a share has finite variance, then the sequence of total profits of Player 1 (the insider) in the n-step games is bounded from above. This property distinguishes the discrete model from the continuous De Meyer and Moussa Saley model and allows us to consider the game with an infinite number of steps, without imposing an artificial bound on the game duration in advance. We construct the optimal strategies for this game explicitly.
To construct the optimal strategy of Player 1 (the insider) for an arbitrary liquidation price with finite variance, we use the symmetric representation of distributions with a fixed mean as convex combinations of two-point distributions with the same mean. The solutions for the games with two-point distributions were obtained in Domansky (2007). The optimal strategy of Player 1 generates a symmetric random walk of the posterior expectations of the liquidation price, with absorption. The expected duration of this walk is equal to the initial variance of the price. The guaranteed total gain of Player 1 (the value of the game) equals this expected duration multiplied by the fixed gain per step. These results are contained in the preprint Domansky and Kreps (2009).
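The identity in the last paragraph, expected duration of the walk equals the initial variance, can be checked directly in the simplest case: a symmetric ±1 walk on {0, ..., N} started at k is absorbed after k(N − k) steps in expectation, and k(N − k) is exactly the variance of the two-point distribution on {0, N} with mean k. A short self-contained check; the choices N = 10 and k = 3 are arbitrary:

```python
# Expected absorption time T(k) of a symmetric +/-1 random walk on
# {0, ..., N} with absorbing endpoints satisfies the recursion
#   T(k) = 1 + (T(k-1) + T(k+1)) / 2,   T(0) = T(N) = 0,
# whose solution is T(k) = k*(N-k).
N = 10

T_vals = [k * (N - k) for k in range(N + 1)]  # closed-form candidate

# Verify the recursion at every interior point.
recursion_ok = all(
    T_vals[k] == 1 + (T_vals[k - 1] + T_vals[k + 1]) / 2
    for k in range(1, N)
)

# Variance of the two-point law on {0, N} with mean k:
# E[X^2] = (k/N) * N**2 = k*N, so Var = k*N - k**2 = k*(N-k).
k = 3
variance = k * N - k**2
print(recursion_ok, T_vals[k], variance)  # True 21 21
```

So the two-point games behind the decomposition already exhibit the duration-equals-variance property that the abstract states for the general case.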
(June-25th 2012) Eilon Solan, Tel Aviv University.
Title: Repeated games with costly observations.
Abstract: In this seminar I will present recent joint work with Ehud Lehrer on repeated games with costly observations. These are two-player repeated games in which the players must pay a fixed cost in order to observe the action played by the other player. Our goal is to characterize the set of equilibrium payoffs in this model. Surprisingly, the limit set of equilibrium payoffs, as the cost of observation goes to 0, is a strict subset of the set of feasible and individually rational payoffs. I will present all proofs and constructions.