Job Market Paper
How Trust Supports the Spread of Disinformation
Abstract: In this paper, we challenge the notion that people's rationality protects them from being deceived by disinformation. Using a cheap talk framework, we show that an information designer can successfully deceive a rational receiver through strategically designed messages, even when the receiver is not interacting with others or under the influence of an echo chamber. A decision-maker must choose between two actions, neither of which she intrinsically prefers; her goal is to pick the action that matches the true state of the world, given her beliefs about the two possible states. She receives a message from a sender whose type (i.e., preference over the two actions) is unknown. The sender holds private information about the state of the world, but that information is based only on a noisy signal. We characterize the Bayesian Nash equilibria of this sender-receiver game and derive the sender's optimal strategies for manipulating the receiver's beliefs.
Our model predicts that a sufficiently high proportion of truthful senders in society allows deceptive information spreaders to consistently mislead the receiver, and that greater uncertainty about the state of the world makes the receiver more willing to accept potentially deceptive information. We test these predictions experimentally using LLM agents. The results show that malicious senders ("Bad" senders) can systematically influence receiver beliefs, yet they also exhibit strong over-communication: in roughly half of the observed cases, Bad senders choose truth-telling or silence over deception, despite the model's prediction that lying is always optimal. This suggests that the moral cost of lying significantly constrains the spread of disinformation. Moreover, the greater prevalence of silence relative to truth-telling suggests that withholding communication may serve as a more effective safeguard against disinformation than previously recognized.
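For intuition, consider a minimal worked example of the receiver's updating problem. The notation below is illustrative and not taken from the paper: a binary state ω ∈ {0, 1} with prior p = Pr(ω = 1), a fraction τ of truthful senders who report a signal of accuracy q > 1/2, and deceptive senders who always send m = 1.

```latex
% Illustrative sketch; \omega, p, \tau, q are assumed notation, not the paper's.
% Truthful senders report their signal; deceptive senders always send m = 1.
\Pr(\omega = 1 \mid m = 1)
  = \frac{\left[\tau q + (1-\tau)\right] p}
         {\left[\tau q + (1-\tau)\right] p + \left[\tau(1-q) + (1-\tau)\right](1-p)}
```

As τ grows, the message m = 1 becomes more informative, so the receiver places more weight on it; a deceptive sender who always sends m = 1 free-rides on exactly this credibility, matching the prediction that a high share of truthful senders sustains deception.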
Working Papers
Why Don't We Learn from Mistakes? [With Andreas Pape]
Abstract: People make suboptimal decisions even after experiencing a series of negative outcomes, which challenges the traditional economic assumption of rational decision-making. This paper argues that such behavior may stem from humans' selective memory. We extend the case-based decision model by incorporating specific memory biases, such as Pleasant Memory Bias, Confirmation Bias, and Extreme Experience Bias, and simulate repeated decision scenarios such as gambling. Our results show that the case-based decision model with selective memory bias predicts people's gambling behavior better than the standard case-based model. This suggests that selective memory may drive individuals to persistently repeat costly mistakes and struggle to learn effectively from negative experiences.
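To illustrate the mechanism, here is a minimal Python sketch, not the paper's implementation, of a case-based decision maker facing a negative-expected-value gamble under a Pleasant Memory Bias; all parameter names and values are illustrative.

```python
import random

def play_gamble(rng):
    # Negative-expected-value bet: win 10 with prob 0.4, lose 10 otherwise.
    return 10 if rng.random() < 0.4 else -10

def recalled(payoff, rng, bias=0.6):
    # Pleasant Memory Bias (assumed form): wins are always remembered,
    # losses enter memory only with probability 1 - bias.
    return payoff > 0 or rng.random() > bias

def simulate(rounds=1000, seed=0):
    rng = random.Random(seed)
    memory = []          # recalled past payoffs of the gamble
    gambles = 0
    for _ in range(rounds):
        # Case-based value of gambling: average recalled payoff
        # (0 if no cases yet); the outside option pays 0 for sure.
        value = sum(memory) / len(memory) if memory else 0.0
        # Small exploration term (an assumption) so memory keeps updating.
        if value >= 0 or rng.random() < 0.05:
            gambles += 1
            payoff = play_gamble(rng)
            if recalled(payoff, rng):
                memory.append(payoff)
    return gambles / rounds

print(f"share of rounds gambled: {simulate():.2f}")
```

Because losses are under-remembered, the recalled average payoff stays positive even though the true expected payoff is negative, so the agent keeps taking the bet; setting bias=0 restores learning from mistakes.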
Monte-Carlo Tests for Identification and Validation of Stochastic Agent-Based Models [With Christopher Zosh, Nency Dhameja, and Andreas Pape]
Abstract: Agent-based models (ABMs) are increasingly used for formal estimation and inference, yet their complexity and algorithmic nature pose persistent challenges for assessing estimator properties through the conventional inferential frameworks that underpin most econometric practice.
This paper shows how Monte Carlo simulations (MCS) can address these challenges. We show that MCS can systematically assess whether ABM parameters are identifiable and how the accuracy and precision of estimates depend on factors such as the search algorithm, the number of model runs, and the fitness function specification. MCS can also speak to model validity. We further introduce a novel Monte Carlo test that disentangles imprecision arising from model and estimation stochasticity from imprecision due to sampling variation.
We demonstrate these methods using two applications: a repeated prisoner's dilemma with learning agents and a model of information diffusion on a network. In the first, we find that, given a sufficient number of model runs, parameters are recoverable with grid search and particle swarm optimization but not with a genetic algorithm, and estimates are largely unbiased. In the second, parameters are recoverable but highly sensitive to the moments used in the fitness function, and estimator behavior diverges when applied to real data, suggesting model validity issues.
Our results show that even when ABM parameters can be identified, estimator performance can be sensitive to the fitness function, search method, and model features, underscoring the need for MCS-based diagnostics before drawing substantive conclusions.
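As a hedged sketch of the general identification check described above (not the paper's code), one can simulate a stand-in stochastic model at a known parameter, re-estimate it many times, and inspect the bias and spread of the estimates:

```python
import numpy as np

def model(theta, n=200, rng=None):
    # Stand-in stochastic model: observations centered at theta.
    rng = rng or np.random.default_rng()
    return theta + rng.normal(0.0, 1.0, size=n)

def estimate(data, grid=np.linspace(-2, 4, 121)):
    # Grid-search estimator minimizing a simple moment distance.
    losses = [(np.mean(data) - th) ** 2 for th in grid]
    return grid[int(np.argmin(losses))]

true_theta, draws = 1.0, 500
rng = np.random.default_rng(0)
estimates = [estimate(model(true_theta, rng=rng)) for _ in range(draws)]
print(f"bias: {np.mean(estimates) - true_theta:+.3f}, "
      f"sd: {np.std(estimates):.3f}")
```

A tight, centered distribution of estimates suggests the parameter is identifiable under the chosen estimator; bias or a flat spread flags exactly the identification and precision issues the Monte Carlo tests are designed to surface.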
Social Context Matters: How Large Language Model Agents Reproduce Real-World Segregation Patterns in the Schelling Model [With Srikanth Iyer, Nency Dhameja, Christopher Zosh, Mohammed Mahinur Alam, Carl Lipo, and Andreas Pape]
Abstract: We extend the Schelling segregation model by replacing traditional, rule-based agents with Large Language Model (LLM) agents that make residential decisions using natural language reasoning grounded in social context. To our knowledge, this is the first application that substitutes the mechanical agents of the Schelling model with LLM-driven agents. We compare LLM agent behavior across four social contexts and two non-social contexts: Income (High vs. Low), Ethnic (Asian vs. Hispanic), Racial (White vs. Black), and Political (Liberal vs. Conservative), plus two pairs of colors serving as control groups and a mechanical baseline (the standard Schelling model). We quantify the resulting segregation via a Dissimilarity Index, which captures how unevenly two groups are distributed across neighborhoods within an overall geographic area. We find that LLM agents partially reproduce empirical segregation patterns collected at the census block level and measured across counties.
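As a purely illustrative sketch of the substitution, the rule-based move decision can be swapped for a natural-language query; query_llm below is a hypothetical stand-in for whatever chat-completion client the experiments use:

```python
def wants_to_move_rule(frac_same, threshold=0.3):
    # Standard Schelling agent: move if too few neighbors are similar.
    return frac_same < threshold

def wants_to_move_llm(agent_label, neighbor_labels, query_llm):
    # LLM agent: the same decision, grounded in a social context,
    # e.g. agent_label = "Hispanic", neighbor_labels = ["Asian", ...].
    prompt = (
        f"You are a {agent_label} resident. Your immediate neighbors are "
        f"{', '.join(neighbor_labels)}. Would you move to a different "
        "neighborhood? Answer YES or NO."
    )
    return query_llm(prompt).strip().upper().startswith("YES")

# Toy stand-in client, only to make the sketch runnable end to end.
print(wants_to_move_llm("Hispanic", ["Asian", "Asian", "Hispanic"],
                        lambda prompt: "NO"))
```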
AgentCarlo: A Python Library for Estimating Agent-Based Model Parameters and their Confidence Intervals [With Christopher Zosh, Nency Dhameja, and Andreas Pape]
Abstract: Although many agent-based models (ABMs) have traditionally served as tools for demonstrating proof-of-principle findings, it is increasingly common and desirable for such models to be used directly for empirical estimation across a range of disciplines. This shift underscores the need for accessible and econometrically sound estimation methods tailored to ABMs.
Taking the view that ABMs are, in many respects, analogous to structural equation models, we present a practical and broadly generalizable approach for fitting virtually any agent-based model to panel data in a manner akin to structural regression. We also introduce AgentCarlo, a Python package designed to facilitate this process out of the box.
This paper is structured as an accessible guide for analysts who may be unfamiliar with these techniques, explaining the intuition behind the empirical methods while demonstrating how they can be easily applied using our AgentCarlo package. We show how to estimate best-fitting parameters via the Simulated Method of Moments, covering the summarization and aggregation of model output, the construction of a fitness function, and the selection of an optimization algorithm. We then show how to obtain critical values using block bootstrapping, including the interpretation of confidence intervals and hypothesis testing in this context.
We also implement a suite of Monte Carlo simulations to evaluate key properties of the estimation procedure, including, most importantly, whether the parameters of interest are reasonably identifiable.
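The SMM portion of this workflow can be sketched generically in a few lines; the code below does not use AgentCarlo's actual interface (only numpy and scipy), and the stand-in model, moments, and parameter names are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

def moments(panel):
    # Summarize panel output (units x periods) into a moment vector.
    return np.array([panel.mean(), panel.std(),
                     panel[:, 1:].mean() - panel[:, :-1].mean()])

def simulate_abm(theta, rng, units=100, periods=20):
    # Stand-in for any stochastic ABM; abs() keeps the noise scale
    # valid while the optimizer explores negative values.
    drift, noise = theta
    shocks = rng.normal(0.0, abs(noise), (units, periods))
    return drift * np.arange(periods) + shocks

def fitness(theta, target, rng, reps=10):
    # Average simulated moments over reps to tame simulation noise,
    # then score squared distance to the data moments.
    sims = np.mean([moments(simulate_abm(theta, rng)) for _ in range(reps)],
                   axis=0)
    return float(np.sum((sims - target) ** 2))

rng = np.random.default_rng(1)
data = simulate_abm(np.array([0.5, 1.0]), rng)  # pretend this is real panel data
fit = minimize(fitness, x0=[0.1, 0.5], args=(moments(data), rng),
               method="Nelder-Mead")
print("estimated (drift, noise):", fit.x)
```

Block bootstrapping for critical values would wrap this procedure, resampling blocks of the panel and re-running the fit to build a distribution of estimates.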
Work In Progress
Laddered Persuasion
Abstract: This paper examines how a moderate, credible claim made early in the disclosure process can create a “belief ladder” that makes later, more extreme claims surprisingly persuasive, even when those claims are weakly supported. We develop a two-round framework in which the receiver exhibits inattention and memory stickiness, and show how a strategic information designer can exploit these features to shift beliefs toward extreme conclusions. By using a moderate claim in the first round as a “ladder”, the sender can induce the receiver to accept an extreme belief in the second round even when communication there is mere cheap talk. We further extend the model to settings where a platform can impose penalties when deception is detected, and where a fact-checker with limited resources intervenes across rounds. Together, the results highlight how a belief ladder, built through early credible claims and reinforced by inattention and memory stickiness, affects the effectiveness of penalties and verification in countering disinformation.
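One hedged way to formalize the anchoring mechanism (the notation below is ours, not necessarily the paper's): let b_1 be the belief after the credible round-1 claim and λ ∈ [0, 1] the degree of memory stickiness.

```latex
% Assumed notation: b_t = round-t belief in the extreme claim,
% \lambda = memory stickiness, m_2 = the round-2 (cheap talk) message.
b_2 \;=\; \lambda\, b_1 \;+\; (1-\lambda)\,\Pr(\omega = 1 \mid b_1,\, m_2)
```

Because the round-2 belief anchors on b_1, a moderate, credible first claim that raises b_1 mechanically lifts b_2 even when m_2 carries little information, which is the ladder effect the penalty and fact-checking extensions must contend with.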
Accent Discrimination in Work/Teaching [With Ozlem Tonguc, Elisa Taveras Pena, and Maria Zhu]