Scoring Strategic Agents [Job Market Paper]
I introduce a model of predictive scoring. A receiver wants to predict a sender's quality. An intermediary observes multiple features of the sender and aggregates them into a score. Based on the score, the receiver makes a decision. The sender wants the most favorable decision, and she can distort each feature at a privately known cost. I characterize the most accurate scoring rule. This rule underweights some features to deter sender distortion and overweights other features so that the score is correct on average. The receiver prefers this score to full disclosure because the aggregated information mitigates his commitment problem.
We introduce a model of probabilistic verification in a mechanism design setting. The principal verifies the agent's claims with statistical tests. The agent's probability of passing each test depends on his type. In our framework, the revelation principle holds. We characterize when each type has an associated test that best screens out all other types; in that case, the testing technology can be represented in a tractable reduced form. In a quasilinear environment, we solve for the revenue-maximizing mechanism by introducing a new expression for the virtual value that encodes the effect of testing.
I study the provision of information as an incentive instrument in a continuous-time sender-receiver model. The sender observes a persistent, evolving state and sends signals over time to the receiver, who sequentially chooses actions that affect the welfare of both players. I solve for the sender's optimal dynamic information policy in closed form. Under this policy, the sender provides information gradually, contingent on the receiver's past actions. I show that the sender can implement this policy by truthfully reporting the state with a delay that shrinks over time.
A principal delegates decisions to a biased agent. Payoffs depend on a state that the principal cannot observe. The agent does not initially observe this state, but he can learn about it by privately experimenting, at a cost. We characterize the principal's optimal delegation set. This set has a cap to restrict the agent's bias, but it may have a hole around safe decisions in order to encourage information acquisition. Unlike in standard delegation models, the principal's payoff is maximized when the agent's bias is nonzero.
We study a sender-receiver game. The sender observes the state and costlessly transmits a message to the receiver, who selects one component of the state to check and then chooses a binary action. The receiver’s preferred action depends on the state. The sender has a state-independent preference for one action over the other. Nevertheless, communication can strictly benefit both players. We characterize the symmetric equilibria. In each one, the sender tells the receiver which components of the state are highest. The same equilibria exist in an extension where the receiver can check multiple components. We also find that, with commitment power, the sender can extract additional rents from the receiver by randomizing between signals that induce different actions. Absent commitment, the receiver’s ability to partially verify the state has an ambiguous effect on the sender’s utility; with commitment, verification can only restrict the set of posteriors the sender can induce.