# Ian Ball

### I am an Assistant Professor in the MIT Department of Economics.

### My research interests are in economic theory, particularly information design and mechanism design.

### My CV.

### Email: ianball@mit.edu.

## Working papers

Comment on Jackson and Sonnenschein (2007), “Overcoming Incentive Constraints by Linking Decisions” (with Matt Jackson and Deniz Kattwinkel)

**Forthcoming, *Econometrica***

We correct a bound in the definition of approximate truthfulness used in the body of Jackson and Sonnenschein (2007). The proof of their main theorem uses a different, permutation-based definition, implicitly claiming that the permutation version implies the bound-based version. We show that this implication holds only if the bound is loosened. The loosened bound is still strong enough to guarantee that the fraction of lies vanishes as the number of problems grows, so the theorem and its proof are correct as stated once the bound is adjusted.

Content Filtering with Inattentive Information Consumers (with Justin Grana, James Bono, Nicole Immorlica, Brendan Lucier, and Alex Slivkins)

We develop and analyze a model of content filtering in which content consumers incur deliberation costs when assessing the veracity of content. Examples include censoring misinformation, information security (e.g., spam and phishing filtering), and recommender systems. With an exogenous attack probability, we show that increasing the quality of the filter is typically weakly Pareto improving, though it may sometimes have no impact on equilibrium outcomes and payoffs. Furthermore, when the filter does not internalize the consumer's deliberation costs, the filter's lack of commitment power may render a low-fidelity filter useless and lead to inefficient outcomes. Consequently, improvements to a moderately effective filter have no impact on equilibrium payoffs until the filter is sufficiently accurate. With an endogenous attacker, improvements to filter quality may lead to strictly lower equilibrium payoffs, since the content consumer increases its trust in the filter and thus incentivizes the attacker to increase its attack propensity.

Scoring Strategic Agents

I introduce a model of predictive scoring. A receiver wants to predict a sender's quality. An intermediary observes multiple features of the sender and aggregates them into a score. Based on the score, the receiver takes a decision. The sender wants the most favorable decision, and she can distort each feature at a privately known cost. I characterize the most accurate scoring rule. This rule underweights some features to deter sender distortion and overweights other features so that the score is correct on average. The receiver prefers this score to full disclosure because the aggregated information mitigates his commitment problem.

Experimental Persuasion (with José Antonio Espín Sánchez)

We introduce experimental persuasion between Sender and Receiver. Sender chooses an experiment to perform from a feasible set of experiments. Receiver observes the realization of this experiment and chooses an action. We characterize optimal persuasion in this baseline regime and in an alternative regime in which Sender can commit to garble the outcome of the experiment. Our model includes Bayesian persuasion as the special case in which every experiment is feasible; however, our analysis does not require concavification. Since we focus on experiments rather than beliefs, we can accommodate general preferences including costly experiments and non-Bayesian inference.

Probabilistic Verification in Mechanism Design (with Deniz Kattwinkel)

**EC '19** [extended abstract] [presentation]

We introduce a model of probabilistic verification in a mechanism design setting. The principal verifies the agent's claims with statistical tests. The agent's probability of passing each test depends on his type. In our framework, the revelation principle holds. We characterize whether each type has an associated test that best screens out all the other types. In that case, the testing technology can be represented in a tractable reduced form. In a quasilinear environment, we solve for the revenue-maximizing mechanism by introducing a new expression for the virtual value that encodes the effect of testing.

Dynamic Information Provision: Rewarding the Past and Guiding the Future

**R&R, *Econometrica***

I study the provision of information as an incentive instrument in a continuous-time sender-receiver model. The sender observes a persistent, evolving state and sends signals over time to the receiver, who sequentially chooses actions that affect the welfare of both players. I solve for the sender's optimal dynamic information policy in closed form. Under this policy, the sender provides information gradually, contingent on the receiver's past actions. I show that the sender can implement this policy by truthfully reporting the state with a delay that shrinks over time.

Benefiting from Bias (with Xin Gao)

**R&R, *Journal of Economic Theory***

A principal delegates decisions to a biased agent. Payoffs depend on a state that the principal cannot observe. The agent does not initially observe this state, but he can learn about it by privately experimenting, at a cost. We characterize the principal's optimal delegation set. This set has a cap to restrict the agent's bias, but it may have a hole around safe decisions in order to encourage information acquisition. Unlike in standard delegation models, the principal's payoff is maximized when the agent's bias is nonzero.

Checking Cheap Talk (with Xin Gao)

We study a sender-receiver game. The sender observes the state and costlessly transmits a message to the receiver, who selects one component of the state to check and then chooses a binary action. The receiver’s preferred action depends on the state. The sender has a state-independent preference for one action over the other. Nevertheless, communication can strictly benefit both players. We characterize the symmetric equilibria. In each one, the sender tells the receiver which components of the state are highest. The same equilibria exist in an extension where the receiver can check multiple components. We also find that, with commitment power, the sender can extract more rents from the receiver by randomizing between signals that induce different actions. Without commitment power, however, the receiver’s ability to partially verify the state has an ambiguous effect on the sender’s utility; with commitment power, verification can only restrict the set of posteriors the sender can induce.