Research

When does society eventually learn the truth, or take the correct action, via observational learning? In a general model of sequential learning over social networks, we identify a simple condition for learning dubbed excludability. Excludability is a joint property of agents' preferences and their information. We develop two classes of preferences and information that jointly satisfy excludability: (i) for a one-dimensional state, preferences with single-crossing differences and a new informational condition, directionally unbounded beliefs; and (ii) for a multi-dimensional state, intermediate preferences and subexponential location-shift information. These applications demonstrate that with multiple states, "unbounded beliefs" is not only unnecessary for learning but also incompatible with familiar informational structures such as normal information. Unbounded beliefs demands that a single agent be able to identify the correct action. Excludability, on the other hand, only requires that a single agent be able to displace any wrong action, even if she cannot take the correct action herself.
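For readers outside this literature, the sketch below is a minimal, illustrative simulation of the canonical binary-state, binary-signal sequential learning model (the information-cascade benchmark) that this line of work generalizes; it is not the paper's general model, and the function and parameter names (`simulate`, `bayes`, `q`) are purely illustrative. It shows the failure that informational conditions such as unbounded beliefs, or the weaker excludability, are meant to rule out: with bounded private beliefs, once the public belief is extreme enough, actions stop revealing signals and society can settle on the wrong action.

```python
import random

# Minimal sketch of the canonical binary-state sequential learning model
# (an illustrative benchmark, NOT this paper's general model).
# Assumptions: the state is 0 or 1; each agent privately observes a
# conditionally i.i.d. binary signal of precision q < 1 (so private
# beliefs are bounded), sees all predecessors' actions, and matches her
# action to her posterior mode.

def bayes(p, signal, q):
    """Update the belief p that the state is 1 after a binary signal of precision q."""
    like1 = q if signal == 1 else 1 - q   # P(signal | state = 1)
    like0 = 1 - q if signal == 1 else q   # P(signal | state = 0)
    return p * like1 / (p * like1 + (1 - p) * like0)

def simulate(q=0.7, n_agents=30, true_state=1, seed=1):
    rng = random.Random(seed)
    public = 0.5                          # public belief that the state is 1
    actions = []
    for _ in range(n_agents):
        signal = true_state if rng.random() < q else 1 - true_state
        action = 1 if bayes(public, signal, q) > 0.5 else 0
        actions.append(action)
        # What successors can infer from the observed action:
        act_if_high = 1 if bayes(public, 1, q) > 0.5 else 0
        act_if_low = 1 if bayes(public, 0, q) > 0.5 else 0
        if act_if_high == act_if_low:
            # Information cascade: both signals lead to the same action, so the
            # action reveals nothing and the public belief freezes -- possibly
            # on the wrong action, because private beliefs are bounded.
            continue
        revealed = 1 if action == act_if_high else 0
        public = bayes(public, revealed, q)
    return actions, public

if __name__ == "__main__":
    actions, final_belief = simulate()
    print("actions:", actions)
    print("final public belief that state = 1:", round(final_belief, 3))
```

In this benchmark, a run with q = 0.7 typically cascades after a few agreeing actions, and the public belief stops moving even though later agents keep receiving informative signals; that is the sense in which bounded private beliefs can block asymptotic learning in the classical one-dimensional setting.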

We study competitive data markets in which consumers own their personal data and can trade it with intermediaries, such as e-commerce platforms. Intermediaries use this data to provide services to consumers, such as targeted offers from third-party merchants. Our main results identify a novel inefficiency: equilibrium data allocations fail to maximize welfare. This inefficiency hinges on the role that intermediaries play as information gatekeepers, a hallmark of the digital economy. We provide three solutions to this market failure: establishing data unions, which manage consumers’ data on their behalf; taxing the trade of data; and letting the price of data depend on its intended use.

Work in Progress

Correlation Ambiguity and Information Overload

Misleading via Misspecification: Nonuniformity of Learning Robustness