When does society eventually learn the truth, or take the correct action, via observational learning? In a general model of sequential learning over social networks, we identify a simple condition for learning dubbed excludability. Excludability is a joint property of agents' preferences and their information. We develop two classes of preferences and information that jointly satisfy excludability: (i) for a one-dimensional state, preferences with single-crossing differences and a new informational condition, directionally unbounded beliefs; and (ii) for a multi-dimensional state, intermediate preferences and subexponential location-shift information. These applications exemplify that, with multiple states, ``unbounded beliefs'' is not only unnecessary for learning but incompatible with familiar informational structures like normal information. Unbounded beliefs demands that a single agent be able to identify the correct action. Excludability, by contrast, only requires that a single agent be able to displace any wrong action, even if she cannot take the correct action.
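To make the contrast concrete, here is a minimal formalization in notation of our own choosing (a sketch; the paper's definitions are more general). Let $\Omega$ be the state space, $A$ the action set, $u(a,\omega)$ the common utility, and let private signals $s$ induce posteriors $\mu_s \in \Delta(\Omega)$. Unbounded beliefs requires that for every state $\omega$,
\[
\sup_{s} \mu_s(\omega) = 1,
\]
so some signal makes an agent nearly certain of $\omega$ and hence able to identify the optimal action $a^*(\omega)$. Excludability, roughly, only requires that each wrong action can be displaced:
\[
\forall \omega,\ \forall a \neq a^*(\omega):\ \exists s \ \text{such that}\ a \notin \arg\max_{a' \in A} \mathbb{E}_{\mu_s}\!\left[u(a',\omega')\right],
\]
with the displacement robust to the public beliefs that can arise along the learning path; the displacing signal need not point to the correct action itself.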
In many economic activities, people use data to infer underlying states. However, due to model complexity or data limitations, the model used for inference is often misspecified. A natural question is whether, as the misspecification becomes vanishingly small, learning is guaranteed to be asymptotically correct. This paper shows that the extent to which a data-generating process (DGP) is robust to misspecification hinges on the distinguishability between states under that DGP. This implies that learning is not uniformly robust across the set of identified DGPs. Furthermore, given any small magnitude of misspecification, learning can fail arbitrarily badly: for any true state and any target state, there exists a pair of true and perceived DGPs under which beliefs converge to the target state rather than the truth. We apply this result to an information design setting, where the sender can incorporate an arbitrarily small amount of misspecification into a DGP to manipulate the receiver arbitrarily. Although the magnitude of misspecification does not affect the set of feasible persuasion outcomes, by providing a uniform bound on the rate of misspecified learning, we show that it roughly determines the rate at which the receiver is manipulated.
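The mechanics behind distinguishability can be illustrated with a standard Berk-style limit result, stated here in our own notation as a sketch (the paper's framework may differ). With true DGP $p^*$ and perceived DGPs $\{q_\theta\}_{\theta \in \Theta}$ indexed by states, beliefs asymptotically concentrate on the Kullback--Leibler minimizers
\[
\hat{\Theta} \;=\; \arg\min_{\theta \in \Theta} D_{\mathrm{KL}}\!\left(p^* \,\middle\|\, q_\theta\right).
\]
Learning at a given state is thus robust to a misspecification of size $\varepsilon$ only if the KL gap separating that state from every other state exceeds the distortion in divergences the misspecification can induce; when two states are poorly distinguished under the DGP (their KL gap is small), an arbitrarily small perturbation can flip the minimizer and steer beliefs to a target state.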
An analyst observes the frequency with which a decision maker (DM) takes actions, but does not observe the frequency of actions conditional on the payoff-relevant state. We ask when the analyst can rationalize the DM's choices as if the DM first learns something about the state before taking an action. We provide a support function characterization of the triples of utility functions, prior beliefs, and (marginal) distributions over actions such that the DM's action distribution is consistent with information, given the DM's prior and utility function. Assumptions on the cardinality of the state space and on the utility function allow us to refine this characterization into a sharp system of finitely many inequalities that the utility function, prior, and action distribution must satisfy. We apply our characterization to study comparative statics and ring-network games, and to identify conditions under which a data set is consistent with a public information structure in first-order Bayesian persuasion games. We also characterize the set of distributions over posterior beliefs that are consistent with the DM's choices. Assuming the first-order approach applies, we extend our results to settings with a continuum of actions and/or states.
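To fix ideas, here is a finite illustration of the kind of condition involved, in our own notation (a sketch; the paper's support function characterization is more general). With finite $\Omega$ and $A$, prior $\mu_0 \in \Delta(\Omega)$, utility $u$, and observed action distribution $p \in \Delta(A)$, the action distribution is consistent with information if and only if there exist posteriors $(\mu_a)_{a \in \operatorname{supp}(p)}$ satisfying
\[
\sum_{a} p(a)\,\mu_a = \mu_0
\qquad\text{and}\qquad
\mathbb{E}_{\mu_a}\!\left[u(a,\omega)\right] \;\ge\; \mathbb{E}_{\mu_a}\!\left[u(a',\omega)\right] \quad \forall a' \in A:
\]
the posteriors must average back to the prior (Bayes plausibility), and each chosen action must be optimal under the posterior that rationalizes it. Sufficiency follows by having signals recommend actions directly; necessity uses the fact that the set of beliefs at which a given action is optimal is convex, so the average posterior conditional on an action inherits that action's optimality.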
Extended abstract accepted to the Proceedings of ACM EC 2024 (Revised April 2024)
We study competitive data markets in which consumers own their personal data and can trade it with intermediaries, such as e-commerce platforms. Intermediaries use this data to provide services to the consumers, such as targeted offers from third-party merchants. Our main results identify a novel inefficiency, resulting in equilibrium data allocations that fail to maximize welfare. This inefficiency hinges on the role that intermediaries play as information gatekeepers, a hallmark of the digital economy. We provide three solutions to this market failure: establishing data unions, which manage consumers’ data on their behalf; taxing the trade of data; and letting the price of data depend on its intended use.