Working Papers
We propose calibrated menu effects, a novel class of models for examining menu effects in decision making. Our model class features clear welfare implications: a menu-dependent preference ranking over the alternatives chosen across menus can be uniquely identified. Our framework captures the attraction effect, the compromise effect, and their generalizations, and is compatible with limited-attention models. We characterize the boundary of calibrated menu effects in relation to bounded rationality models and highlight the distinction between menu effects and other deviations from rationality.
We establish a strategic equivalence between cursed equilibrium and the introduction of fictitious players in Bayesian games, allowing cursedness to be manipulated in a controlled way in lab settings. We consider a cheap-talk setting with one sender and multiple receivers, one real and several fictitious. The sender knows the real receiver’s type, but his message is shared with all receivers. Not knowing whether she is real or fictitious, the real receiver neglects the correlation between the message and her type—she holds cursed beliefs. When we vary the number of fictitious receivers, our lab results align with the comparative statics predicted by cursed equilibrium.
We provide a framework for analyzing a range of well-documented non-Bayesian behaviors, including base rate neglect, the conjunction fallacy, and the disjunction fallacy. The model that we propose formally links the concept of similarity in theoretical psychology with belief updating. Following Tversky and Kahneman (1974), we assume that when attempting to answer the question “how likely is A given B”, people mistakenly answer the question “how similar is A to B”. Under a similarity-based updating rule, the posterior of A∪C given B may be less than the posterior of A given B, simply because A∪C differs more from B than A does when B∩C=Ø. Our axioms yield a Cobb-Douglas weighted geometric mean of μ(A|B) and μ(B|A) as the behavioral conditional probability of A given B, where μ is the correct subjective probability and μ(·|·) is the Bayesian conditional of μ. That is, our decision makers confuse these two conditional probabilities but hold correct unconditional beliefs. This combination of correct priors and incorrect updating is common because, in many experiments, subjects are explicitly given the relevant prior probabilities.
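The disjunction-fallacy reversal described above can be reproduced numerically. The sketch below assumes a uniform subjective probability μ over ten states and a Cobb-Douglas weight of 0.5; the events A, B, and C are illustrative choices, not taken from the paper.

```python
def mu_cond(a, b):
    """Bayesian conditional mu(a | b) under a uniform subjective probability."""
    return len(a & b) / len(b)

def behavioral(a, b, alpha=0.5):
    """Cobb-Douglas weighted geometric mean of mu(a|b) and mu(b|a):
    the behavioral conditional probability of a given b."""
    return mu_cond(a, b) ** alpha * mu_cond(b, a) ** (1 - alpha)

A = {0, 1}
B = {0, 1, 2}
C = set(range(3, 10))   # disjoint from B, so B ∩ C = Ø

# Bayesian posteriors are monotone: mu(A∪C | B) >= mu(A | B).
assert mu_cond(A | C, B) >= mu_cond(A, B)

# The behavioral posterior reverses this, because A∪C resembles B
# less than A does: mu(B | A∪C) = 2/9 while mu(B | A) = 1.
print(behavioral(A, B))      # ≈ 0.816
print(behavioral(A | C, B))  # ≈ 0.385
```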
Publications
Logit Neural-Network Utility: with Sung-lin Hsieh, Shaowei Ke, and Zhaoran Wang, Journal of Economic Behavior & Organization, 2025
We introduce stochastic choice models that feature neural networks, one of which is called the logit neural-network utility (NU) model. We show how to use simple neurons, referred to as behavioral neurons, to capture behavioral effects, such as the certainty effect and reference dependence. We find that simple logit NU models with natural interpretation provide better out-of-sample predictions than expected utility theory and cumulative prospect theory, especially for choice problems that involve lotteries with both positive and negative prizes. We also find that the use of behavioral neurons mitigates overfitting and significantly improves our models' performance, consistent with numerous successes in introducing useful inductive biases in the machine-learning literature.
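As a rough illustration of the ingredients involved—not the paper's estimated model—the sketch below builds a tiny piecewise-linear utility from two ReLU "neurons" (a kink at a reference point, with a steeper slope for losses, echoing reference dependence) and feeds it into logit choice probabilities. All parameter values are hypothetical.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def nn_utility(z, reference=0.0, w_gain=1.0, w_loss=2.25):
    """A tiny piecewise-linear utility built from two ReLU neurons:
    one for gains above the reference point, one (steeper) for losses.
    The loss weight 2.25 echoes prospect-theory-style loss aversion
    and is purely illustrative."""
    return w_gain * relu(z - reference) - w_loss * relu(reference - z)

def logit_choice_probs(utilities, temperature=1.0):
    """Softmax (logit) choice probabilities over a menu of alternatives."""
    v = np.asarray(utilities, dtype=float) / temperature
    v -= v.max()                       # shift for numerical stability
    p = np.exp(v)
    return p / p.sum()

menu = [10.0, -10.0, 0.0]              # sure monetary prizes (hypothetical)
u = [nn_utility(z) for z in menu]      # [10.0, -22.5, 0.0]
print(logit_choice_probs(u, temperature=5.0))
```

The loss neuron makes the −10 prize much less attractive than the +10 prize is attractive, so the logit probabilities are far from symmetric across the two.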
We build on AGM belief revision (Alchourron, Gardenfors, and Makinson (1985)) and propose a class of updating rules called pragmatic rules. Pragmatic updating applies to multiple priors and requires that the agent’s posteriors be the subset of her priors under which the realized event occurs with probability 1, if such priors exist. We construct a propositional language based on qualitative probability, and demonstrate the strong relation between belief updating rules and belief revision rules in this language. We show that an updating rule is consistent with AGM belief revision if and only if it is pragmatic. While maximum likelihood updating is pragmatic in general, full-Bayesian updating is not. We characterize maximum likelihood updating within the AGM framework, and show that full-Bayesian updating can be obtained by dropping one of AGM’s postulates.
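A minimal sketch of the pragmatic-updating requirement, assuming finitely many priors represented as dictionaries. The priors and the realized event below are hypothetical, and returning `None` when no prior assigns the event probability 1 is a placeholder for the case the requirement leaves unrestricted.

```python
def prob(p, event):
    """Probability of an event (a set of states) under a prior p (a dict)."""
    return sum(p[s] for s in event if s in p)

def pragmatic_update(priors, event):
    """Pragmatic rule: the posteriors are exactly those priors under which
    the realized event occurs with probability 1, if such priors exist."""
    certain = [p for p in priors if abs(prob(p, event) - 1.0) < 1e-12]
    return certain if certain else None   # unrestricted case: placeholder

priors = [
    {"a": 0.5, "b": 0.5},             # assigns probability 1 to {a, b}
    {"a": 0.2, "b": 0.3, "c": 0.5},   # puts weight outside the event
    {"b": 1.0},                       # also assigns probability 1 to {a, b}
]
event = {"a", "b"}
posteriors = pragmatic_update(priors, event)
print(len(posteriors))  # 2: only the first and third priors survive
```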
We study a decision maker’s learning behavior when she receives recommendations from a black box, i.e., the decision maker does not understand how the recommendations are generated. We introduce four reasonable axioms and show that they cannot be satisfied simultaneously. We analyze various relaxations of the axioms. In one relaxation, we introduce and characterize an updating rule, the contraction rule, which has two parameters that map each recommendation to a recommended belief and the trustworthiness of the recommendation, respectively. The decision maker's posterior is formed by mixing her prior with the recommended belief according to the trustworthiness measure.
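The mixing step of the contraction rule can be sketched directly. The prior, the recommended belief, and the trustworthiness value below are hypothetical, standing in for the outputs of the rule's two parameters.

```python
import numpy as np

def contraction_update(prior, recommended, trust):
    """Contraction rule: mix the prior with the recommended belief,
    weighting the latter by the trustworthiness of the recommendation.
    trust = 0 ignores the recommendation; trust = 1 adopts it fully."""
    prior = np.asarray(prior, dtype=float)
    recommended = np.asarray(recommended, dtype=float)
    return (1.0 - trust) * prior + trust * recommended

prior       = [0.5, 0.3, 0.2]
recommended = [0.1, 0.1, 0.8]   # belief the recommendation is mapped to
posterior = contraction_update(prior, recommended, trust=0.25)
print(posterior)  # [0.4, 0.25, 0.35]
```

Because the posterior is a convex combination of two probability vectors, it is automatically a probability vector itself.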
We introduce and analyze two preference-based notions of local linearity in the spirit of Machina (1982). We show how the weaker among the two extends Machina's local utility analysis, and that the stronger among the two characterizes continuous finite piecewise linear (CFPL) utility functions. We introduce a representation of the decision maker's preference called the neural-network utility representation that is equivalent to the CFPL representation, in which the decision maker evaluates an alternative through a neural network.
A working paper version with characterizations of ambiguity preferences featuring signed subjective probability and measure can be found here.
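The equivalence between CFPL functions and neural-network evaluation can be illustrated in one dimension. The particular function below is my example, not the paper's: a CFPL function written directly, then rewritten exactly as a one-hidden-layer ReLU network.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def cfpl(x):
    """A continuous finite piecewise linear (CFPL) function, written
    directly: |x| with an extra kink at x = 1."""
    return np.abs(x) - 0.5 * np.maximum(x - 1.0, 0.0)

def relu_network(x):
    """The same function as a one-hidden-layer ReLU network: hidden
    units relu(x), relu(-x), relu(x - 1) with output weights 1, 1, -0.5
    (using |x| = relu(x) + relu(-x))."""
    return relu(x) + relu(-x) - 0.5 * relu(x - 1.0)

xs = np.linspace(-3.0, 3.0, 601)
assert np.allclose(cfpl(xs), relu_network(xs))
print("CFPL function and ReLU network agree on the grid")
```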
Cheap Talk with Prior-biased Inferences: with Wooyoung Lim and Yong-Ju Lee, Games and Economic Behavior, 2023
We investigate how prior-biased inferences change players’ strategic incentives and result in novel welfare implications in the canonical framework of strategic information transmission. The ex ante social welfare achieved in our model exceeds the upper bound characterized in the standard environment without prior bias. The welfare gain stems from the fact that the receiver’s prior bias weakens the link between the sender’s message and the receiver’s response without contaminating the actual content of the messages. We further show that direct communication is optimal among all possible communication protocols in the presence of a sufficient degree of prior bias.
I propose an axiomatic framework for belief revision when new information is qualitative, of the form “Event A is at least as likely as event B.” My decision maker need not have beliefs about the joint distribution of the signal she will receive and the payoff-relevant states. I propose three axioms, Exchangeability, Stationarity, and Reduction, to characterize the class of pseudo-Bayesian updating rules. The key axiom, Exchangeability, requires that the order in which the information arrives does not matter if the different pieces of information neither reinforce nor contradict each other. I show that adding one more axiom, Conservatism, which requires that the decision maker adjust her beliefs just enough to embrace new information, yields Kullback-Leibler minimization: The decision maker selects the posterior closest to her prior in terms of Kullback-Leibler divergence from the probability measures consistent with newly received information. I show that pseudo-Bayesian agents are susceptible to recency bias, which may be mitigated by repetitive learning.
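The Kullback-Leibler step can be sketched numerically. The code below assumes the minimized divergence is D(q‖prior) and that the new information is a single comparison "state set A is at least as likely as state set B"; the exponential-tilt form is the standard dual solution for such a linear constraint, and the prior and events are hypothetical.

```python
import numpy as np

def tilt(prior, f, lam):
    """Exponentially tilt the prior along f and renormalize."""
    w = prior * np.exp(lam * f)
    return w / w.sum()

def kl_update(prior, A, B, n_iter=200):
    """Posterior minimizing KL(q || prior) subject to q(A) >= q(B),
    assuming the constraint is achievable (A \\ B carries prior mass).
    Duality gives q ∝ prior * exp(lam * f) with f = 1_A - 1_B and
    lam >= 0 chosen by bisection so the constraint just binds."""
    prior = np.asarray(prior, dtype=float)
    f = np.array([(i in A) - (i in B) for i in range(len(prior))], dtype=float)
    gap = lambda lam: float(f @ tilt(prior, f, lam))   # q(A) - q(B)
    if gap(0.0) >= 0.0:
        return prior                 # prior already consistent: no change
    lo, hi = 0.0, 1.0
    while gap(hi) < 0.0:             # grow the bracket until q(A) >= q(B)
        hi *= 2.0
    for _ in range(n_iter):          # bisect to the binding lam
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gap(mid) < 0.0 else (lo, mid)
    return tilt(prior, f, hi)

prior = [0.1, 0.4, 0.3, 0.2]
posterior = kl_update(prior, A={0}, B={1})   # "state 0 at least as likely as 1"
print(posterior)                             # ≈ [0.222, 0.222, 0.333, 0.222]
```

When the prior already satisfies the comparison, the rule leaves it unchanged, matching the conservatism of adjusting beliefs just enough to embrace the new information.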
Work in Progress
Identifying Risk Attitude with Moment-Restricted Lotteries
Draft coming soon...
Ordered Beliefs and Signal Realizations
Draft coming soon...