A researcher wants to ask a decision-maker about a belief related to a choice the decision-maker made. When can the researcher provide incentives for the decision-maker to report her belief truthfully without distorting her choice? We identify necessary conditions and sufficient conditions for non-distortionary elicitation and fully characterize which questions can be incentivized in this way in three canonical classes of problems. For these questions, we construct simple variants of the classic Becker-DeGroot-Marschak mechanism that can be used to elicit beliefs.
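For reference, the classic Becker-DeGroot-Marschak mechanism for eliciting a belief about an event can be sketched in a few lines. The payoff normalization, report grid, and example belief below are illustrative assumptions of mine, not taken from the paper, which constructs its own variants of the mechanism.

```python
import numpy as np

# A minimal sketch of the textbook Becker-DeGroot-Marschak (BDM) mechanism for
# eliciting the probability p an agent assigns to an event. The agent reports q;
# a uniform draw r is made; if r <= q she is paid iff the event occurs, otherwise
# she is paid with probability r. Expected payoff is p*q + (1 - q**2)/2, which is
# maximized at q = p, so truthful reporting is optimal. Numbers are illustrative.

def bdm_expected_payoff(report, belief):
    """Expected payoff from reporting `report` when the true belief is `belief`."""
    return belief * report + (1.0 - report ** 2) / 2.0

belief = 0.3
grid = np.linspace(0.0, 1.0, 1001)
best_report = grid[np.argmax(bdm_expected_payoff(grid, belief))]
print("payoff-maximizing report:", round(best_report, 3))  # equals the true belief, 0.3
```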
Who Opts In? Selection and Disappointment through Participation Payments, with Sandro Ambuehl and Axel Ockenfels, Review of Economics and Statistics Vol. 107(1), January 2025, 78–94
Participation payments are used in many transactions about which people know little, but can learn more: incentives for medical trial participation, signing bonuses for job applicants, or price rebates on consumer durables. Who opts into the transaction when given such incentives? We show theoretically and experimentally that incentives can act as a selection mechanism that disproportionately selects individuals for whom learning is harder. Moreover, these individuals use less information to decide whether to participate, which makes disappointment more likely. The learning-based selection effect is stronger in settings in which information acquisition is more difficult.
Evidence suggests that consumers do not perfectly optimize, contrary to a critical assumption of classical consumer theory. We propose a model in which consumer types can vary in both their preferences and their choice behavior. Given data on demand and the distribution of prices, we identify the set of possible values of the consumer surplus based on minimal rationality conditions: every type of consumer must be no worse off than if they either always bought the good or never did. We develop a procedure to narrow the set of surplus values using richer datasets and provide bounds on counterfactual demands.
Attention Please!, with Olivier Gossner and Jakub Steiner, Econometrica Vol. 89(4), July 2021, 1717–1751
We study the impact of manipulating the attention of a decision‐maker who learns sequentially about a number of items before making a choice. Under natural assumptions on the decision‐maker's strategy, directing attention toward one item increases its likelihood of being chosen regardless of its value. This result applies when the decision‐maker can reject all items in favor of an outside option with known value; if no outside option is available, the direction of the effect of manipulation depends on the value of the item. A similar result applies to manipulation of choices in bandit problems.
Information Aggregation with Costly Reporting, with Martin J. Osborne and Jeffrey S. Rosenthal, The Economic Journal Vol. 130, January 2020, 208–232
A group of individuals with common interests has to choose a binary option whose desirability depends on an unknown binary state of the world. The individuals independently and privately observe a signal of the state. Each individual chooses whether to reveal her signal, at a cost. We show that if, for all revelation choices of the individuals, the option chosen by the group is optimal given the signals revealed and the set of individuals who do not reveal signals, then in a large group few signals are revealed, and these signals are extreme. The correct decision is taken with high probability in one state but with probability bounded away from one in the other. No anonymous decision-making mechanism without transfers does better. However, the first-best average payoff can be attained using transfers among agents, and approximately attained with a non-anonymous mechanism without transfers.
Optimal Adaptive Testing: Informativeness and Incentives, with Rahul Deb, Theoretical Economics, Vol. 13(3), September 2018, 1233–1274
We introduce a learning framework in which a principal seeks to determine the ability of a strategic agent. The principal assigns a test consisting of a finite sequence of questions or tasks. The test is adaptive: each question that is assigned can depend on the agent’s past performance. The probability of success on a question is jointly determined by the agent’s privately known ability and an unobserved action that he chooses to maximize the probability of passing the test. We identify a simple monotonicity condition under which the principal always employs the most (statistically) informative question in the optimal adaptive test. Conversely, whenever the condition is violated, we show that there are cases in which the principal strictly prefers to use less informative questions.
We solve a general class of dynamic rational-inattention problems in which an agent repeatedly acquires costly information about an evolving state and selects actions. The solution resembles the choice rule in a dynamic logit model, but it is biased towards an optimal default rule that depends only on the history of actions, not on the realized state. We apply the general solution to the study of (i) the status quo bias; (ii) inertia in actions leading to lagged adjustments to shocks; and (iii) the tradeoff between accuracy and delay in decision-making.
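As a rough illustration of the logit-like structure described here, the sketch below computes the static rational-inattention choice rule with a mutual-information cost (the Matejka-McKay logit), in which choices are tilted toward an endogenous default distribution. The payoff matrix, prior, and cost parameter are my own illustrative choices; the paper's dynamic solution is considerably richer.

```python
import numpy as np

# Static analogue of the logit-like rule described above: with a mutual-information
# cost lam, the optimal conditional choice rule is p(a|s) proportional to
# p0(a) * exp(u(a, s) / lam), where the default rule p0 is the unconditional action
# distribution, found here by Blahut-Arimoto-style fixed-point iteration.

def ri_logit(u, prior, lam, tol=1e-12, max_iter=10_000):
    """u[a, s]: payoff of action a in state s; prior: distribution over states."""
    n_actions, _ = u.shape
    p0 = np.full(n_actions, 1.0 / n_actions)       # start from a uniform default
    w = np.exp(u / lam)
    for _ in range(max_iter):
        cond = p0[:, None] * w                     # unnormalized p(a|s)
        cond /= cond.sum(axis=0, keepdims=True)
        p0_new = cond @ prior                      # implied unconditional distribution
        if np.max(np.abs(p0_new - p0)) < tol:
            p0 = p0_new
            break
        p0 = p0_new
    cond = p0[:, None] * w
    cond /= cond.sum(axis=0, keepdims=True)
    return p0, cond

u = np.array([[1.0, 0.0],                          # action 0 pays off in state 0
              [0.0, 1.0]])                         # action 1 pays off in state 1
prior = np.array([0.7, 0.3])
p0, cond = ri_logit(u, prior, lam=1.0)
print("default rule p0:", p0.round(3))
print("choice probabilities p(a|s):\n", cond.round(3))
```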
In one-shot games, an analyst who knows the best response correspondence can only make limited inferences about the players’ payoffs. In repeated games, this is not true: we show that, under a weak condition, if the game is repeated sufficiently many times and players are sufficiently patient, the best response correspondence completely determines the payoffs (up to positive affine transformations).
When an agent chooses between prospects, noise in information processing generates an effect akin to the winner’s curse. Statistically unbiased perception systematically overvalues the chosen action because it fails to account for the possibility that noise is responsible for making the preferred action appear to be optimal. The optimal perception patterns share key features with prospect theory, namely, overweighting of small probability events (and corresponding underweighting of high probability events), status quo bias, and reference-dependent S-shaped valuations. These biases arise to correct for the winner’s curse effect.
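The effect described in the first two sentences shows up in a short simulation; the distributions and noise level below are my own illustrative assumptions, not the paper's model.

```python
import numpy as np

# Illustration of the winner's-curse effect from unbiased but noisy perception:
# each perceived value is an unbiased signal of the true value, yet the signal
# of the prospect that looks best systematically overstates that prospect's true
# value, because favourable noise realizations are more likely to be chosen.

rng = np.random.default_rng(1)

n_trials, n_prospects, noise_sd = 100_000, 3, 1.0
values = rng.normal(0.0, 1.0, size=(n_trials, n_prospects))       # true values
signals = values + rng.normal(0.0, noise_sd, size=values.shape)   # unbiased perception

chosen = signals.argmax(axis=1)                     # pick the best-looking prospect
rows = np.arange(n_trials)
overvaluation = signals[rows, chosen] - values[rows, chosen]

print("mean perception error across all prospects:", round(float((signals - values).mean()), 4))
print("mean overvaluation of the chosen prospect:  ", round(float(overvaluation.mean()), 4))
```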
Price Distortions under Coarse Reasoning with Frequent Trade, with Jakub Steiner, Journal of Economic Theory, Vol. 159 (Part A), September 2015, 574–595
We study the effect of frequent trading opportunities and categorization on pricing of a risky asset. Frequent opportunities to trade can lead to large distortions in prices if some agents forecast future prices using a simplified model of the world that fails to distinguish between some states. In the limit as the period length vanishes, these distortions take a particular form: the price must be the same in any two states that a positive mass of agents categorize together. Price distortions therefore tend to be large when different agents categorize states in different ways, even if each individual’s categorization is not very coarse.
Influential Opinion Leaders, with Antoine Loeper and Jakub Steiner, The Economic Journal, Vol. 124, December 2014, 1147–1167
We present a two-stage coordination game in which early choices of experts with special interests are observed by followers who move in the second stage. We show that the equilibrium outcome is biased toward the experts’ interests even though followers know the distribution of expert interests and account for it when evaluating observed experts’ actions. Expert influence is fully decentralized in the sense that each individual expert has a negligible impact. The bias in favor of experts results from a social learning effect that is multiplied through a coordination motive. We show that the total effect can be large even if the direct social learning effect is small. We apply our results to the diffusion of products with network externalities and the onset of social movements.
We study coordination in dynamic global games with private learning. Players choose whether and when to invest irreversibly in a project whose success depends on its quality and the timing of investment. Players gradually learn about project quality. We identify conditions on temporal incentives under which, in sufficiently long games, players coordinate on investing whenever doing so is not dominated. Roughly speaking, this outcome occurs whenever players' payoffs are sufficiently tolerant of non-simultaneous coordination. We also identify conditions under which players coordinate on the risk-dominant action. We provide foundations for these results in terms of higher order beliefs.
This paper considers the problem of testing an expert who makes probabilistic forecasts about the outcomes of a stochastic process. I show that, as long as uninformed experts do not learn the correct forecasts too quickly, a likelihood test can distinguish informed from uninformed experts with high prior probability. The test rejects informed experts on some data-generating processes; however, the set of such processes is topologically small. These results contrast sharply with many negative results in the literature.
We study the effects of stochastically delayed communication on common knowledge acquisition (common learning). If messages do not report dispatch times, communication prevents common learning under general conditions even if common knowledge is acquired without communication. If messages report dispatch times, communication can destroy common learning under more restrictive conditions. The failure of common learning in the two cases is based on different infection arguments. Communication can destroy common learning even if it ends in finite time, or if agents communicate all of their information. We also identify conditions under which common learning is preserved in the presence of communication. This paper largely supersedes our earlier note, "Communication Can Destroy Common Learning."
Contagion through Learning, with Jakub Steiner, Theoretical Economics, Vol. 3 (4), December 2008, 431–458
Previously titled "Learning by Similarity in Coordination Problems."
We study learning in a large class of complete information normal form games. Players continually face new strategic situations and must form beliefs by extrapolation from similar past situations. The use of extrapolations in learning may generate contagion of actions across games even if players learn only from games with payoffs very close to the current ones. Contagion may lead to unique long-run outcomes where multiplicity would occur if players learned through repeatedly playing the same game. The process of contagion through learning is formally related to contagion in global games, although the outcomes generally differ. We characterize the long-run outcomes of learning in terms of iterated dominance in a related incomplete information game with subjective priors, which clarifies the connection to global games.
We consider a cross-calibration test of predictions by multiple potential experts in a stochastic environment; the test checks whether each expert is calibrated conditional on the predictions made by the other experts. We show that this test is good in the sense that a true expert, one informed of the true distribution of the process, is guaranteed to pass the test no matter what the other potential experts do, while false experts fail the test on all but a small (category one) set of true distributions. Furthermore, even when there is no true expert present, a test similar to cross-calibration cannot be simultaneously manipulated by multiple false experts, though at the cost of failing some true experts.
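To make the notion of cross-calibration concrete, here is a small simulation of my own (not the paper's formal test or data): an informed expert's forecasts are compared with empirical frequencies inside cells defined jointly by her own and the other expert's binned predictions.

```python
import numpy as np

# Rough illustration of cross-calibration: an expert is checked against empirical
# frequencies within cells defined jointly by her own and the other expert's
# (binned) forecasts. An informed expert stays calibrated in every such cell,
# whatever the other expert predicts. All modelling choices here are illustrative.

rng = np.random.default_rng(0)
T = 500_000
p_true = rng.uniform(0.0, 1.0, T)                        # true per-period probabilities
outcomes = (rng.uniform(0.0, 1.0, T) < p_true).astype(float)

forecast_informed = p_true                               # informed expert
forecast_other = rng.uniform(0.0, 1.0, T)                # uninformed expert, arbitrary forecasts

edges = np.linspace(0.0, 1.0, 6)[1:-1]                   # five forecast bins
cell_own = np.digitize(forecast_informed, edges)
cell_other = np.digitize(forecast_other, edges)

max_gap = 0.0
for i in range(5):
    for j in range(5):
        mask = (cell_own == i) & (cell_other == j)
        if mask.sum() > 1_000:
            gap = abs(forecast_informed[mask].mean() - outcomes[mask].mean())
            max_gap = max(max_gap, gap)
print("largest forecast-vs-frequency gap across cells:", round(max_gap, 3))  # ~0 up to sampling error
```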
This paper considers the equilibrium selection problem in coordination games when players interact on an arbitrary social network. We examine the impact of the network structure on the robustness of the usual risk dominance prediction as mutation rates vary. For any given network, a sufficiently large bias in mutation probabilities favoring the non-risk dominant action overturns the risk dominance prediction; bounds are obtained on the size of this bias depending on the network structure. As the size of the population grows large, the risk dominant equilibrium is highly robust in some networks. This is true in particular if the risk dominant action spreads contagiously in the network and there does not exist a sufficiently cohesive finite group of players. Examples demonstrate that robustness does not coincide with fast convergence.
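As a toy illustration of the dynamics studied here, the sketch below runs perturbed best-response dynamics for a 2x2 coordination game on a ring network; the network, payoff threshold, and uniform mutation rate are my own simplifying assumptions (the paper allows arbitrary networks and biased mutation probabilities).

```python
import numpy as np

# Perturbed best-response dynamics on a ring: each period one player revises,
# playing a best reply to her neighbours except with mutation probability eps.
# Action 1 is risk dominant when the threshold q < 1/2: it is a best reply
# whenever more than a fraction q of a player's neighbours play it. A biased
# version would mutate toward one action with higher probability than the other.

rng = np.random.default_rng(0)

def simulate(n=50, k=2, q=0.4, eps=0.02, steps=100_000):
    """n players on a ring, each linked to k nearest neighbours on either side."""
    state = rng.integers(0, 2, size=n)                 # random initial actions
    neigh = [[(i + d) % n for d in range(-k, k + 1) if d != 0] for i in range(n)]
    share_one = 0.0
    for _ in range(steps):
        i = rng.integers(n)                            # one player revises per period
        frac1 = np.mean(state[neigh[i]])               # fraction of neighbours playing 1
        best = 1 if frac1 > q else 0
        state[i] = (1 - best) if rng.random() < eps else best
        share_one += state.mean()
    return share_one / steps                           # long-run average share playing 1

print("long-run average share playing the risk-dominant action:", round(simulate(), 3))
```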