ABSTRACTS

Preliminary

Modibo CAMARA: “Hadwiger Separability, or: Turing meets von Neumann and Morgenstern”

This paper incorporates time constraints into decision theory, via computational complexity theory. I use the resulting framework to better understand common behavioral heuristics known as choice bracketing. My main result shows that a time-constrained agent who satisfies the expected utility axioms must have a Hadwiger separable utility function. This separability condition is a relaxation of additive separability that allows for some complementarities and substitutions but limits their frequency. One implication of this result is that a time-constrained agent may be better off violating the expected utility axioms. This can occur when the agent wants to maximize the expected value of a utility function that is not Hadwiger separable.
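
For context, the benchmark that Hadwiger separability relaxes is additive separability, which (in illustrative notation, not necessarily the paper's) requires the utility function over an n-dimensional outcome to decompose coordinate by coordinate:

u(x_1, \ldots, x_n) = \sum_{i=1}^{n} u_i(x_i).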

Isa CHAVES: “Bargaining in securities”

Many corporate negotiations involve contingent payments or securities, yet the bargaining literature overwhelmingly focuses on pure cash transactions. We characterize equilibria in a continuous-time model of bargaining in securities. A privately informed buyer and a seller negotiate the terms of a joint project. The buyer’s private information affects both his standalone value and the net returns from the project. The seller makes offers in a one-dimensional family of securities (e.g., equity splits). We show how outcomes change as the underlying security becomes more sensitive to the buyer’s information, and we apply the framework to mergers and acquisitions under financial constraints.

Xiaoyu CHENG: “Improving decisions under ambiguity with data”


Consider a decision-maker (DM) who faces uncertainty governed by an unknown data generating process (DGP) and observes sample data. This paper studies how the data should be used to improve decisions when the DGP can only be partially identified. When faced with a set of possible DGPs, the DM is assumed to apply the maxmin expected-utility criterion. The data is said to improve decisions if it leads to choices that guarantee, under the true DGP, a higher expected utility than the maxmin expected utility obtained without observing the data. This paper shows that the data improves decisions if and only if the DM's updated belief given the data accommodates the true DGP. It then proposes two novel updating rules that guarantee accommodation of the true DGP, either asymptotically or with a pre-specified probability in finite samples. In contrast, common existing updating rules either cannot strictly improve decisions (full Bayesian updating) or may even worsen decisions (maximum-likelihood-based rules). The paper also explores the implications of the proposed updating rules in applications.
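
For reference, the maxmin expected-utility criterion mentioned above is the standard Gilboa-Schmeidler one: given a set \mathcal{P} of DGPs considered possible, the DM chooses an action to solve (notation here is illustrative, not the paper's)

\max_{a \in A} \; \min_{P \in \mathcal{P}} \; \mathbb{E}_{P}[u(a, \omega)].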

Julien COMBE: “Dynamic assignment without money: Optimality of spot mechanisms” (with Vladyslav Nora and Olivier Tercieux)

We study a large-market model of dynamic matching with no monetary transfers and a continuum of agents. Time is discrete and the horizon finite. Agents are in the market from the first date and, at each date, have to be assigned items (or bundles of items). When the social planner can only elicit ordinal preferences of agents over sequences of items, we prove that, under a mild regularity assumption, incentive compatible and ordinally efficient allocation rules coincide with spot mechanisms. A spot mechanism specifies "virtual prices" for items at each date and, at the beginning of time, randomly selects for each agent a budget of virtual money according to a (potentially non-uniform) distribution over [0,1]. Then, at each date, the agent is allocated the item of his choice among the affordable ones. Spot mechanisms impose a linear structure on prices and, perhaps surprisingly, our result shows that this linear structure is what is needed when one requires incentive compatibility and ordinal efficiency. When the social planner can elicit cardinal preferences, we prove that, under a similar regularity assumption, incentive compatible and Pareto efficient mechanisms coincide with a class of mechanisms we call Menu of Random Budgets mechanisms. These mechanisms are similar to spot mechanisms except that, at the beginning of time, each agent must pick a distribution from a menu. This distribution is then used to draw the agent's initial budget of virtual money.
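
A minimal sketch of the allocation step of a spot mechanism, under simplifying assumptions that are mine rather than the paper's: virtual prices are taken as given (in the paper they are part of the mechanism in a continuum economy), budgets are not spent down over time, and preferences are summarized by date-by-date rankings.

```python
import random

def spot_allocation(prefs, prices, budget_dist=random.random):
    """prefs[i][t]  : agent i's ranking (best first) of the items offered at date t
       prices[t][x] : virtual price of item x at date t
       budget_dist  : draws a budget of virtual money from [0, 1]"""
    allocations = []
    for ranking_by_date in prefs:
        budget = budget_dist()                    # drawn once, at the beginning of time
        path = []
        for t, ranking in enumerate(ranking_by_date):
            affordable = [x for x in ranking if prices[t][x] <= budget]
            path.append(affordable[0] if affordable else None)   # favourite affordable item
        allocations.append(path)
    return allocations
```

For example, with prices {'a': 0.7, 'b': 0.2} at each date, an agent who ranks 'a' above 'b' receives 'a' at a given date only if her drawn budget is at least 0.7.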


Harry DI PEI: “Robust Mechanism Design and Costly Information Acquisition” (with Bruno Strulovici)

We consider a mechanism design problem in which multiple agents simultaneously and independently decide whether to acquire costly information about some payoff-relevant state, after which they each send a message to a principal. The principal cannot verify the state ex post and commits to a mechanism that maps agents' messages to monetary transfers. For every social choice function, we construct a mechanism that robustly implements it in the following sense: whenever the principal knows agents' preferences with probability close to one and it is common knowledge that agents' payoffs do not directly depend on their messages, there exists a mechanism and a corresponding equilibrium that implements the desired social choice function with probability close to one, regardless of agents' preferences, beliefs, and higher-order beliefs about each other's preferences.

George GEORGIADIS: “Optimal feedback in contests”

We derive an optimal dynamic contest for environments where the principal monitors effort through a coarse, binary performance measure and chooses prize-allocation and termination rules together with a real-time feedback policy. The optimal contest takes a stark cyclical form: contestants are kept fully apprised of their own successes, and at the end of each fixed-length cycle, if at least one agent has succeeded, the contest ends and the prize is shared equally among all successful agents regardless of when they succeeded; otherwise, the designer informs all contestants that nobody has yet succeeded and the contest resets.
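
A simulation sketch of the cyclical structure described above; the success probability, the cycle length, and the absence of an endogenous effort choice are simplifying assumptions made here for illustration, not features of the paper.

```python
import random

def cyclical_contest(n_agents=3, cycle_length=4, p_success=0.2,
                     prize=1.0, max_cycles=100, rng=random.random):
    succeeded = [False] * n_agents
    for _ in range(max_cycles):
        for _ in range(cycle_length):
            for i in range(n_agents):
                # each contestant observes only her own success in real time
                if not succeeded[i] and rng() < p_success:
                    succeeded[i] = True
        if any(succeeded):
            # end of cycle: stop and split the prize equally among all successful
            # contestants, regardless of when they succeeded
            winners = [i for i, s in enumerate(succeeded) if s]
            return {i: prize / len(winners) for i in winners}
        # otherwise: publicly announce that nobody has succeeded yet and start a new cycle
    return {}
```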

Margarita KIRNEVA: “Multidimensional Bayesian Polarization”


We study a mechanism through which polarization of opinions may arise among Bayesian agents learning about the state of nature from public signals. The mechanism is based on the dimension reduction present in the signals: the state of nature is of higher dimension than the signal space, and each signal is a projection of the true state onto the signal space. The signal-generating process is common knowledge among agents. Agents may differ initially in their prior beliefs about the parameters, or in their beliefs about the correlation between the dimensions of the state of nature. We show that under these conditions, agents’ beliefs about some dimensions of the state of nature may diverge even as they observe a common sequence of signals. We characterize conditions under which this divergence occurs, and provide interpretations from the perspective of the media and its effect on opinion polarization.
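
A toy numerical illustration of this mechanism, with all details (Gaussian priors, a noisy one-dimensional projection, and the specific correlation values) chosen here for concreteness rather than taken from the paper: two agents who disagree only about the prior correlation between the two dimensions of the state end up with opposite beliefs about the unobserved dimension, despite seeing the same signals.

```python
import numpy as np

rng = np.random.default_rng(0)
true_x = np.array([1.0, 0.0])                  # two-dimensional state of nature
H = np.array([[1.0, 0.0]])                     # signals are a projection onto dimension 1
noise_var = 1.0
signals = (H @ true_x) + rng.normal(0.0, np.sqrt(noise_var), size=50)

def posterior_mean(prior_corr, signals):
    """Gaussian posterior mean given i.i.d. noisy observations of H @ x."""
    mu = np.zeros(2)
    Sigma = np.array([[1.0, prior_corr], [prior_corr, 1.0]])
    S = H @ Sigma @ H.T + noise_var / len(signals)    # variance of the signal average
    gain = Sigma @ H.T @ np.linalg.inv(S)
    return mu + gain @ (signals.mean(keepdims=True) - H @ mu)

print(posterior_mean(+0.9, signals))   # posterior mean of dimension 2 is pushed up
print(posterior_mean(-0.9, signals))   # ... and pushed down here, on identical signals
```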

Marie LACLAU: “Robust communication on networks” (with Ludovic Renou and Xavier Venel)


We consider sender-receiver games, where the sender and the receiver are two distinct nodes in a communication network. Communication between the sender and the receiver is thus indirect. We ask when it is possible to robustly implement the equilibrium outcomes of the direct communication game as equilibrium outcomes of indirect communication games on the network. Robust implementation requires that: (i) the implementation is independent of the preferences of the intermediaries and (ii) the implementation is guaranteed at all histories consistent with unilateral deviations by the intermediaries. We show that robust implementation of direct communication is possible if and only if either the sender and receiver are directly connected or there exist two disjoint paths between the sender and the receiver. We also show that having two disjoint paths between the sender and the receiver guarantees the robust implementation of all communication equilibria of the direct game. We use our results to reflect on organizational arrangements.
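
A small sketch of the network condition in the main result, using networkx; reading "two disjoint paths" as internally node-disjoint paths (so that, by Menger's theorem, local node connectivity of at least two suffices) is my interpretation of the abstract.

```python
import networkx as nx

def robustly_implementable(G, sender, receiver):
    """True iff sender and receiver are directly linked, or the network
    contains two internally node-disjoint paths between them."""
    if G.has_edge(sender, receiver):
        return True
    return (nx.has_path(G, sender, receiver)
            and nx.node_connectivity(G, sender, receiver) >= 2)

G = nx.cycle_graph(4)                                    # 0-1-2-3-0: two disjoint paths from 0 to 2
print(robustly_implementable(G, 0, 2))                   # True
print(robustly_implementable(nx.path_graph(3), 0, 2))    # False: intermediary 1 can block
```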

Raphael LEVY: “Stationary social learning in a changing environment” (with Marcin Peski and Nicolas Vieille)

We consider social learning in a changing world. Each period, newborn agents observe a finite sample of past actions and can acquire a signal about the current state before acting. When the state of the world is nearly persistent, a consensus in which almost all of the population chooses the same action typically emerges. The consensus action is not perfectly correlated with the state, however, because society exhibits some inertia whenever the state changes. The possibility of a state change drastically limits the value of social learning. Indeed, when signals are too precise (in particular, with perfect signals), actions within a sample are too correlated, and even observing unanimous samples is not informative enough to allow herding on past behavior.


Margaret MEYER: “Selecting the Best when Selection is Hard” (with Mikhail Drugov and Marc Moeller)

In dynamic promotion contests where the organization's objective is to identify the more able agent and performance measurement is constrained to be ordinal, selective efficiency can be improved by biasing the later contest in favor of the agent who performed better in the initial one. Even in the worst-case scenario, where external random factors dominate the difference in agents' abilities in determining their relative performance (a large ratio of noise to heterogeneity), the optimal bias is (i) strictly positive and (ii) locally insensitive to changes in this ratio. The same two properties would hold for the expected optimal bias if the organization were able to condition its choice on cardinal information about the first-period margin of victory. As a consequence of these two properties, the simple rule of setting the bias as if in the worst-case scenario achieves most of the potential gains in selective efficiency from biasing dynamic rank-order contests.

Josh MOLLNER: “Principal Trading Procurement: Competition and Information Leakage” (with Markus Baldauf)


We model procurement auctions held by institutional traders seeking to execute large trades. The dealer who wins such an auction might fill the order out of inventory or access the market for additional volume. How many dealers should the trader contact? There is a general tradeoff: an additional dealer intensifies competition and may improve matchmaking, but also intensifies information leakage. We show that information leakage can act as an endogenous search friction, in that the trader does not always contact all available dealers. There is also a question of information design: what should the trader reveal about her desired trade? In the model, it is optimal to provide no information at the bidding stage. There are also implications for market design and regulation.

Paula ONUCHIC: "Signaling and Discrimination in Collaborative Projects" (with Debraj Ray)

We propose a model of collaborative work in pairs. Each potential partner draws an idea from a distribution that depends on their unobserved ability. The partners then choose to combine their ideas or to work separately. These decisions are based on the intrinsic value of their projects, but also on signaling payoffs, which depend on the public's assessment of individual contributions to joint work. In equilibrium, collaboration strategies both justify and are justified by public assessments. When partners are symmetric, equilibria with symmetric collaborative strategies are often fragile, in a sense made precise in the paper. In such cases, asymmetric equilibria exist: upon observing a collaborative outcome, the public ascribes higher credit to one of the partners based on payoff-irrelevant "identities." Partners with favored identities do receive a higher payoff than their disfavored counterparts conditional on collaborating, but may receive a lower overall expected payoff. Finally, we study a policy that sometimes (but not always) clarifies the ordinal ranking of partners' contributions, and find that such disclosures can be Pareto-improving and reduce the scope for discrimination across payoff-irrelevant identities.

Daniel QUIGLEY: “Conjugate Persuasion” (with Ian Jewitt)

Itzhak RASOOLY: “Going... Going... Wrong: A Test of the Level-k (and Cognitive Hierarchy) Models of Bidding Behaviour”


In this paper, we design and implement an experiment aimed at testing the level-k model of auctions. We begin by asking which (simple) environments can best disentangle the level-k model from its leading rival, Bayes Nash equilibrium. We find two environments that are particularly suited to this purpose: an all-pay auction with uniformly distributed values, and a first-price auction with the possibility of cancelled bids. We then implement both of these environments in a (virtual) laboratory in order to see which theory can best explain observed bidding behaviour. We find that, when plausibly calibrated, the level-k model substantially under-predicts the observed bids and is clearly out-performed by equilibrium. Moreover, attempting to fit the level-k model to the observed data results in implausibly high estimated levels, which in turn bear no relation to the levels inferred from a game known to trigger level-k reasoning. Finally, subjects almost never appeal to iterated reasoning when asked to explain how they bid. Overall, these findings suggest that, despite its notable success in predicting behaviour in other strategic settings, the level-k model (and its close cousin cognitive hierarchy) cannot explain behaviour in auctions.
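
As a reminder of the mechanics being tested, here is a numerical sketch of a level-1 best response in a two-bidder first-price auction with values in [0, 1]; the level-0 specification (bids drawn uniformly at random) and all numbers are assumptions made for illustration, not the paper's calibration.

```python
import numpy as np

grid = np.linspace(0.0, 1.0, 1001)              # candidate bids

def win_prob_vs_level0(b):
    """P(win) against a level-0 opponent who bids uniformly at random on [0, 1]."""
    return np.clip(b, 0.0, 1.0)

def best_response(value, opp_win_prob):
    payoffs = (value - grid) * opp_win_prob(grid)   # first-price auction payoff
    return grid[np.argmax(payoffs)]

for v in (0.2, 0.5, 0.8):
    print(v, best_response(v, win_prob_vs_level0))  # level-1 bids come out near v / 2
```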

Benoit SCHMUTZ: “A Dynamic Theory of the Urban Network” (with Modibo Sidibé)

This paper proposes a dynamic theory of the urban network based on imperfect labor mobility. When cities are anchored in specific locations, the interplay between firms’ entry decisions and workers’ migration strategies generates an equilibrium allocation in which the most productive firms and workers cluster in the largest cities. Small deviations from isotropy create non-convexities that translate into power laws in the city size distribution, even if individual heterogeneity is uniformly distributed and technology operates under constant returns to scale. The model delivers sufficient statistics to identify key urban phenomena, and an application illustrates the potential welfare gains from well-targeted place-based policies.

Aidan SMITH: “Reputational Incentives with Networked Customers”

Sareh VOSOOGHI: “Large, self-enforcing climate coalitions: an integrated analysis of farsighted countries” (with Maria Arvaniti and Rick van der Ploeg)

By taking into account the long-run incentives of countries and the climate dynamics of their energy consumption, we examine the formation of international climate coalitions. We show that the number of signatories of climate treaties is always a Tribonacci number. In an infinite-horizon general equilibrium model of the economy and climate, we suggest a simple algorithm to fully characterise the equilibrium coalition structure.
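
For readers unfamiliar with the sequence: Tribonacci numbers generalize Fibonacci numbers by summing the previous three terms. A minimal sketch follows; the seed and indexing convention below are my assumption, not taken from the paper.

```python
def tribonacci(n_terms, seed=(1, 1, 2)):
    """First n_terms of a Tribonacci sequence: each term is the sum of the previous three."""
    terms = list(seed)
    while len(terms) < n_terms:
        terms.append(sum(terms[-3:]))
    return terms[:n_terms]

print(tribonacci(8))   # [1, 1, 2, 4, 7, 13, 24, 44]
```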

Wei ZHAO: “Dynamic progress report”


This paper studies information design in a dynamic moral hazard environment. An agent and an expert face common uncertainty regarding the effectiveness of a collective decision. The agent bears the effort cost of information acquisition and makes the final decision. The expert is the sole observer of research outcomes and provides information to the agent over time. Both parties are equally affected by the decision. I show that one optimal information policy consists of truthful disclosure with delay. In early periods the delay is zero; it then strictly increases and finally vanishes. By the time the delay returns to zero, the agent has taken the decision with probability one.