Job Market Paper
How Trust Supports the Spread of Disinformation
Abstract: In this paper, we challenge the notion that people's rationality protects them from being deceived by disinformation. Using a Bayesian persuasion framework, we show that an information designer can deceive a rational agent through strategically designed messages, even when the agent neither interacts with others nor sits inside an echo chamber. A utility-maximizing agent must choose between two actions over which she has no intrinsic preference; her goal is to pick the action that matches the true state of the world, given her beliefs about the two possible states. She receives a message from a principal whose type (i.e., preference over the two actions) is unknown. The principal holds private information about the state of the world, but that information comes only from a noisy signal. We characterize the subgame perfect Nash equilibria of this sender-receiver game and derive the sender's optimal strategies for manipulating the receiver's beliefs.
Our first key finding is that a sufficiently high proportion of truthful senders in society creates an environment in which deceptive information spreaders can enter and consistently mislead the receiver. Our second finding is that, when there is greater uncertainty about the state of the world, the receiver is more willing to accept outside information from senders of unknown type, even knowing that they might be deceptive. Our model explains why disinformation can persistently propagate among rational individuals and why merely being aware of its potential presence is not sufficient to protect us from its influence.
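The credibility mechanism behind the first finding can be illustrated with a minimal Bayes'-rule sketch. The parameterization below (uniform prior, signal accuracy q, truthful share t, and a deceptive type who always reports state 1) is our own toy illustration, not the paper's full model:

```python
# Illustrative sketch: a receiver with a uniform prior over two states
# gets a message from a sender who is truthful with probability t and
# deceptive (always reports state 1) otherwise. A truthful sender's
# noisy signal matches the true state with probability q.

def posterior_state1_given_message1(t, q, prior=0.5):
    """P(state = 1 | message = 1) by Bayes' rule."""
    # P(msg=1 | state=1): truthful sender reports 1 when her signal says 1,
    # a deceptive sender reports 1 regardless of the state.
    p_msg1_s1 = t * q + (1 - t) * 1.0
    p_msg1_s0 = t * (1 - q) + (1 - t) * 1.0
    num = p_msg1_s1 * prior
    return num / (num + p_msg1_s0 * (1 - prior))

# A larger share of truthful senders makes message 1 more credible,
# which is exactly the credibility a deceptive sender free-rides on.
for t in (0.2, 0.5, 0.8):
    print(t, round(posterior_state1_given_message1(t, q=0.9), 3))
# posterior rises with t: 0.544, 0.633, 0.767
```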
Working Papers
Why Don't We Learn from Mistakes? [With Andreas Pape]
Abstract: People make suboptimal decisions even after experiencing a series of negative outcomes, which challenges the traditional economic assumption of rational decision-making. This paper argues that such behavior may stem from humans' selective memory mechanisms. We extend the case-based decision model by incorporating specific memory biases, such as Pleasant Memory Bias, Confirmation Bias, and Extreme Experience Bias, and simulate repeated decision scenarios such as gambling. Our results show that the case-based decision model with selective memory bias predicts people's gambling behavior better than the standard model. This suggests that selective memory may drive individuals to persistently repeat costly mistakes and struggle to learn effectively from negative experiences.
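A minimal simulation conveys the flavor of the memory-bias mechanism. All numbers here (a gamble with a 40% win probability, losses forgotten 60% of the time) are hypothetical illustrations, not the paper's calibration:

```python
import random

# Sketch of a "pleasant memory" bias: an agent who evaluates a gamble
# by averaging remembered cases, but stores losses only with
# probability (1 - forget_loss). The remembered average then overstates
# the gamble's value even though its true expected payoff is negative.

def remembered_vs_true(rounds=10000, forget_loss=0.6, seed=1):
    rng = random.Random(seed)
    true_payoffs, memory = [], []
    for _ in range(rounds):
        payoff = 1.0 if rng.random() < 0.4 else -1.0   # true EV = -0.2
        true_payoffs.append(payoff)
        if payoff > 0 or rng.random() > forget_loss:
            memory.append(payoff)                      # losses often never stored
    return (sum(true_payoffs) / len(true_payoffs),
            sum(memory) / len(memory))

true_avg, remembered_avg = remembered_vs_true()
# The biased memory flips the sign: the gamble "feels" profitable,
# so a case-based decider keeps taking it.
```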
Monte-Carlo Tests for Identification and Validation of Stochastic Agent-Based Models [With Christopher Zosh, Nency Dhameja, and Andreas Pape]
Abstract: Agent-based models (ABMs) are increasingly used for formal estimation and inference, but their complexity and algorithmic nature pose persistent challenges for the formal assessment of estimator properties.
This paper highlights the indispensable role that Monte Carlo simulations (MCS) can play in addressing these challenges. We show that MCS can systematically evaluate whether the parameters of an ABM can be reliably estimated, how estimator accuracy and precision depend on factors such as the choice of search algorithm and the number of model runs conducted, and, in some cases, even speak to model validity. We also introduce a novel Monte Carlo test that disentangles imprecision due to the stochasticity of the model and estimation process itself from imprecision arising from sampling variation.
We apply these techniques to two example applications: first, a repeated prisoner's dilemma model with learning agents, and second, a model of information diffusion over a network. Our results demonstrate that, while the parameters of these models can be identified in principle, estimator performance can be highly sensitive to the choice of fitness function, the search method used in the estimation process, and features of the model itself, so establishing whether a particular specification works for a particular problem is vital. These findings underscore the practical importance of applying MCS-based diagnostics before drawing substantive conclusions from estimated ABM parameters.
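The kind of Monte Carlo identification check described above can be sketched on a toy model (a one-parameter stand-in of our own devising, not the prisoner's dilemma or diffusion applications): simulate data at a known parameter, re-estimate it many times, and inspect the bias and spread of the estimates.

```python
import random
import statistics

# Toy Monte Carlo identification check: if repeated re-estimation
# recovers the known true parameter with small bias and spread, the
# parameter is identifiable under this estimation procedure.

def simulate_model(theta, n, rng):
    # stand-in "ABM": each agent's outcome is theta plus idiosyncratic noise
    return [theta + rng.gauss(0, 1) for _ in range(n)]

def estimate(data, grid):
    # method-of-moments style fit: pick the grid point matching the mean
    target = statistics.fmean(data)
    return min(grid, key=lambda th: abs(th - target))

def monte_carlo(true_theta=1.5, n=200, reps=200, seed=42):
    rng = random.Random(seed)
    grid = [i / 100 for i in range(0, 301)]       # candidate thetas in [0, 3]
    estimates = [estimate(simulate_model(true_theta, n, rng), grid)
                 for _ in range(reps)]
    return statistics.fmean(estimates), statistics.stdev(estimates)

mean_est, spread = monte_carlo()
# mean_est near 1.5 (low bias) and a modest spread indicate the
# parameter is recoverable in this toy setting.
```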
AgentCarlo: A Python Library for Estimating Agent-Based Model Parameters and their Confidence Intervals [With Christopher Zosh, Nency Dhameja, and Andreas Pape]
Abstract: Although many agent-based models (ABMs) have traditionally served as tools for demonstrating proof-of-principle findings, it is increasingly common and desirable for such models to be used directly for empirical estimation across a range of disciplines. This shift underscores the need for accessible and econometrically sound estimation methods tailored to ABMs.
Taking the view that ABMs are, in many respects, analogous to structural equation models, we present a practical and broadly generalizable approach for fitting virtually any agent-based model to panel data in a manner akin to structural regression. We also introduce AgentCarlo, a Python package designed to facilitate this process out of the box.
This paper is structured as an accessible guide for analysts who may be unfamiliar with these techniques, explaining the intuition behind the empirical methods while demonstrating how they can be applied using our AgentCarlo package. We show how to estimate best-fitting parameters via the Simulated Method of Moments, covering the summarization and aggregation of model output, the construction of a fitness function, and the selection of an optimization algorithm. We then show how to obtain critical values using block bootstrapping, including the interpretation of confidence intervals and hypothesis tests in this context.
We also implement a suite of Monte Carlo simulations to evaluate key properties of the estimation procedure, including, most importantly, whether the parameters of interest are reasonably identifiable.
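For readers unfamiliar with this pipeline, the SMM-plus-block-bootstrap workflow can be sketched generically in plain Python. This is not the AgentCarlo API; the model, moment, and parameter names are illustrative stand-ins:

```python
import random
import statistics

def model(theta, n, rng):
    # stand-in "ABM": outcomes are theta plus idiosyncratic noise
    return [theta + rng.gauss(0, 1) for _ in range(n)]

def smm_estimate(data, grid, n_sim, rng):
    # Simulated Method of Moments over a grid: choose the parameter
    # whose simulated moment (here, the mean) best matches the data moment
    data_moment = statistics.fmean(data)
    def loss(th):
        return (statistics.fmean(model(th, n_sim, rng)) - data_moment) ** 2
    return min(grid, key=loss)

def block_bootstrap_ci(data, grid, block=20, reps=100, alpha=0.05, seed=7):
    # moving-block bootstrap: resampling contiguous blocks preserves
    # local dependence in panel-like data
    rng = random.Random(seed)
    n = len(data)
    starts = range(n - block + 1)
    estimates = []
    for _ in range(reps):
        resampled = []
        while len(resampled) < n:
            s = rng.choice(starts)
            resampled.extend(data[s:s + block])
        estimates.append(smm_estimate(resampled[:n], grid, n_sim=200, rng=rng))
    estimates.sort()
    return (estimates[int(alpha / 2 * reps)],
            estimates[int((1 - alpha / 2) * reps) - 1])

rng = random.Random(1)
data = model(1.0, 300, rng)                  # "observed" data at theta = 1
grid = [i / 25 for i in range(0, 51)]        # candidate thetas in [0, 2]
lo, hi = block_bootstrap_ci(data, grid)      # bootstrap confidence interval
```

In practice the simulation draws are usually frozen across candidate parameters so the SMM objective is smooth in theta; fresh draws are used here only to keep the sketch short.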
Work In Progress
Laddered Persuasion
Abstract: This paper examines how a moderate, credible claim early in the disclosure process can create a “belief ladder” that makes later, more extreme claims surprisingly persuasive, even when those later claims are weakly supported. We develop a two-round framework in which the receiver exhibits inattention and memory stickiness, and we show how a strategic information designer can exploit these features to shift beliefs toward extreme conclusions. By using a moderate claim in the first round as a “ladder”, the sender can induce the receiver to accept an extreme belief in the second round even when communication in that round is mere cheap talk. We further extend the model to settings where a platform can impose penalties when deception is detected, and where a fact-checker with limited resources intervenes across rounds. Together, the results highlight how a belief ladder, built through early credible claims and reinforced by inattention and memory stickiness, shapes the effectiveness of penalties and verification in countering disinformation.
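The ladder mechanism can be caricatured in a few lines with an attention window and partial-adjustment updating; the numbers are our own illustration, not the paper's model:

```python
# Caricature of the belief ladder: the receiver ignores claims too far
# from her current belief (inattention) and only partially adjusts
# toward claims she does accept (memory stickiness).

def update(belief, claim, window=0.6, stickiness=0.5):
    if abs(claim - belief) > window:
        return belief                      # too extreme: ignored outright
    return belief + stickiness * (claim - belief)

prior = 0.2                                # belief on [0, 1] in the extreme claim

direct = update(prior, 0.9)                # extreme claim straight away: ignored
step1 = update(prior, 0.45)                # round 1: moderate, credible claim
laddered = update(step1, 0.9)              # round 2: the same extreme claim now lands
```

The moderate round-1 claim moves the belief just enough that the round-2 extreme claim falls inside the attention window, so the two-step path ends far above where the direct extreme claim would have left it.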
Social Context Matters: How Large Language Model Agents Reproduce Real-World Segregation Patterns in the Schelling Model [With Mohammed Mahinur Alam, Christopher Zosh, Nency Dhameja, Srikanth Iyer, Carl Lipo, and Andreas Pape]
Accent Discrimination in Work/Teaching [With Ozlem Tonguc, Elisa Taveras Pena, and Maria Zhu]