Research

(with Franz Ostrizek), Games and Economic Behavior, 2023, Volume 139, Pages 26-55

Abstract. We propose a tractable framework to introduce externalities in a screening model. Agents differ in both payoff-type and influence (how strongly their actions affect others). Applications range from pricing network goods to regulating industries that create externalities. Inefficiencies arise only if payoff-types are unobservable. When both dimensions are unobserved, the optimal allocation satisfies lexicographic monotonicity: increasing along the payoff-type to satisfy IC, but tilted towards influential agents to produce the externality. A two-step ironing procedure addresses the nonmonotonicity in virtual values specific to our setting. If observable, influence is used as a signal of the payoff-type and may create rents.
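An illustrative sketch (not the paper's two-step procedure, which also handles the influence dimension): standard one-dimensional ironing of a nonmonotone virtual-value sequence by pooling adjacent violators, assuming equally likely types on a grid.

```python
def iron(phi):
    """Iron a sequence of virtual values into a nondecreasing one by
    pooling adjacent violators (equally likely types on a grid)."""
    blocks = []  # each block stores [sum, count]; pooled types share their average
    for v in phi:
        blocks.append([v, 1])
        # merge backwards while a block's average drops below its predecessor's
        while len(blocks) > 1 and blocks[-1][0] / blocks[-1][1] < blocks[-2][0] / blocks[-2][1]:
            s, c = blocks.pop()
            blocks[-1][0] += s
            blocks[-1][1] += c
    return [s / c for s, c in blocks for _ in range(c)]

# A dip in the middle is pooled to its conditional average:
print(iron([1.0, 3.0, 9.0, 5.0, 7.0, 10.0]))
# [1.0, 3.0, 7.0, 7.0, 7.0, 10.0]
```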

with Franz Ostrizek

Abstract. Adapting cursed equilibrium to a beauty contest game, we study the impact of information policies in settings where agents underinfer from equilibrium statistics. To discipline information acquisition with mislearning, we propose a subjective envelope condition which allows for a tractable analysis while maintaining behaviorally plausible assumptions: agents correctly anticipate their actions but incorrectly deem them optimal. We show that this condition characterizes the rest points of a simple learning process. Cursed agents use and acquire more private information, creating a positive externality. Welfare increases for low degrees of cursedness, as these gains exceed the losses from incorrect use. Transparency crowds out private information but always increases welfare. Policies targeting fundamental information may backfire as they distract cursed agents from a source of information they already underuse. Finally, we investigate the behavior and welfare of an atomistic rational agent in a cursed economy.

with Federica Carannante and Marco Pagnozzi

Abstract. A seller running repeated auctions with bidders who have constant valuations over time can exploit the information obtained in past auctions to set reserve prices in future ones. We consider an environment where bidders are naive, losers are replaced by new bidders, and past winners leave with an exogenous probability. Our model reflects the main characteristics of the market for online display advertising, where publishers use real-time first- or second-price auctions to sell impressions to advertisers. The optimal reserve price in infinitely repeated auctions is either equal to the value of the last winner, or lower than it when the last winner’s value is sufficiently high. In this second case, the optimal reserve price is decreasing in the last winner’s value in a first-price auction, while it is independent of it in a second-price auction, and typically lower than in a first-price auction. The second-price auction may yield a higher seller’s revenue than the first-price auction, because in the second-price auction a past winner who is outbid acts as a reserve price. We also describe typical paths of reserve prices and characterize the stationary distribution of winners’ values. The probability of trade may be non-monotonic in the persistence of past winners.
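A minimal Monte Carlo sketch of the environment (the assumptions below are illustrative, not the paper's calibration): naive bidders with i.i.d. U[0,1] values, two fresh entrants per period, an exogenous survival probability for the past winner, and the simple policy of setting the reserve equal to the last winner's value in a second-price auction.

```python
import random

def simulate(T=100_000, n_new=2, stay_prob=0.7, seed=0):
    """Repeated second-price auctions: losers are replaced by fresh U[0,1]
    bidders, the past winner survives with probability stay_prob, and the
    reserve is set equal to the last winner's value (one candidate policy)."""
    rng = random.Random(seed)
    incumbent, reserve = None, 0.0
    revenue = sales = 0
    for _ in range(T):
        if incumbent is not None and rng.random() > stay_prob:
            incumbent = None                     # past winner exits exogenously
        bids = [rng.random() for _ in range(n_new)]
        if incumbent is not None:
            bids.append(incumbent)               # naive bidders bid their values
        bids.sort(reverse=True)
        if bids[0] >= reserve:                   # trade occurs
            revenue += max(reserve, bids[1])
            sales += 1
            incumbent = bids[0]
            reserve = incumbent                  # reserve tracks the last winner
        # otherwise the reserve carries over and no one wins this period
    return revenue / T, sales / T

print(simulate())   # (average revenue per auction, probability of trade)
```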


with Federica Carannante and Marco Pagnozzi

Abstract. We analyze the expected seller's revenue in the efficient equilibria of sealed-bid auctions conditional on the valuation of one of the bidders (interim revenue). We show that the interim revenue is higher in the first-price auction than in the second-price auction if and only if the special bidder's valuation is lower than a threshold. This result also holds when the auctions have a (common) reserve price. The first-price auction is also interim dominant (among all static auctions) when the bidder's valuation is close to the lower bound. By contrast, when the bidder's valuation is close to the upper bound, the interim dominant auction is an atypical mechanism where only the lowest bidder pays his bid (Last-Pay-Auction), which achieves unbounded interim revenues.
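A worked special case (two bidders with i.i.d. U[0,1] values and no reserve; an illustration, not the paper's general setting): conditional on one bidder's value v, expected first-price revenue is (1+v^2)/4 and expected second-price revenue is v - v^2/2, so the first-price auction yields more exactly when v < 1/3. The Monte Carlo below checks this.

```python
import random

def interim_revenues(v, n_sims=200_000, seed=0):
    """Expected revenue conditional on bidder 1's value v, with two bidders,
    i.i.d. U[0,1] values and no reserve. FPA equilibrium bid is b(x) = x/2."""
    rng = random.Random(seed)
    fpa = spa = 0.0
    for _ in range(n_sims):
        v2 = rng.random()
        fpa += max(v, v2) / 2       # first price: winner pays own bid = value / 2
        spa += min(v, v2)           # second price: winner pays the losing value
    return fpa / n_sims, spa / n_sims

for v in (0.2, 1 / 3, 0.6):
    f, s = interim_revenues(v)
    print(f"v = {v:.2f}:  FPA ~ {f:.3f} (exact {(1 + v * v) / 4:.3f}),"
          f"  SPA ~ {s:.3f} (exact {v - v * v / 2:.3f})")
# FPA revenue exceeds SPA revenue exactly when v < 1/3 in this special case.
```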

with Marco Pagnozzi


Abstract. A seller sells two identical objects through sequential second-price auctions with reserve prices. Bidders’ values follow independent and persistent stochastic processes. The seller observes the transaction price in the early auction and uses this information to set the reserve in the late one. We show that outcomes that convey more optimistic information on bidders’ values may lead the seller to set a lower reserve price. Because bids reveal information, bidders in the early auction (i) are less likely to participate and (ii) shade their bids when they do participate. Decreasing persistence reduces the first margin, but widens the second one. Compared to a static auction, in the early auction the seller prefers to induce (i) more participation to increase current revenue but (ii) less participation for information acquisition. Despite increasing potential surplus and weakening her commitment problem, lower persistence may hurt the seller as it prevents her from targeting the reserve to the valuation of a (high-value) early-auction winner.

with Matteo Paradisi

Abstract. We study tax audit policies when the Tax Authority predicts true income using an inference model. When taxpayers are heterogeneous in income and aware of model-based audit rules, the Tax Authority can achieve arbitrarily high tax collection rates if the model’s precision is sufficiently high. However, the targeting of audits yields minimal revenues, as optimal reliance on the model focuses on enhancing the incentives to declare income in the first place. Prediction power is used to shape incentives rather than to direct audits. Introducing heterogeneity in the propensity to evade does not alter this conclusion. At the optimum, the predictions from the statistical model are used to screen taxpayers with larger true incomes, tolerating evasion from taxpayers with lower incomes and a high propensity to evade. Numerical simulations calibrated on aggregate moments from administrative audit data show modest revenue gains from plausible improvements in model precision. Moreover, due to the complementarity between precision and audit budget, small budget increases can be as effective as large enhancements in precision, without requiring investments to acquire new taxpayer data.
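A toy numerical sketch of the mechanism at work (the audit rule, tax, and fine below are illustrative choices, not the paper's model or calibration): a taxpayer with true income y faces a noisy model prediction of y, audit risk jumps when the report falls too far below the prediction, and the chosen report rises toward true income as model precision improves.

```python
import math

def best_report(y, sigma, tax=0.3, fine=1.0, p_low=0.02, p_high=0.6, gap=0.05):
    """Toy taxpayer facing a model-based audit rule: the prediction is
    y_hat ~ N(y, sigma^2) and the audit probability jumps from p_low to
    p_high whenever the report falls more than `gap` below the prediction.
    The taxpayer reports to maximize expected after-tax income under a
    linear tax and a proportional fine on detected evasion."""
    def audit_prob(r):
        z = (r + gap - y) / sigma                     # P(y_hat > r + gap)
        return p_low + (p_high - p_low) * 0.5 * math.erfc(z / math.sqrt(2))
    def payoff(r):
        evaded = y - r
        return y - tax * r - audit_prob(r) * (1 + fine) * tax * evaded
    grid = [i / 1000 * y for i in range(1001)]        # reports between 0 and y
    return max(grid, key=payoff)

for sigma in (0.50, 0.10, 0.02):                      # increasing model precision
    print(f"sigma = {sigma:.2f} -> report = {best_report(1.0, sigma):.2f}")
# Reports rise toward true income as the prediction becomes more precise.
```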

with Enrico Di Gregorio and Matteo Paradisi

Abstract. We show that tax authorities can stimulate tax compliance by strategically releasing audit-relevant information. We focus on audit policies that disclose to taxpayers that audit risk drops discretely above a threshold. In a theoretical framework, we derive conditions for the existence of improvements over flat undisclosed audit rules, and we build a test for such improvements that relies on a change in the probability jump at the threshold. Our empirical analysis relies on the Sector Studies, an Italian policy with a disclosed threshold-based design. We leverage more than 26 million Sector Study files submitted between 2007 and 2016. First, we show that taxpayers bunch extensively at the threshold, and that this behavior is related to evasion proxies, the availability of evasion technologies, and tax incentives. Then, we exploit a staggered Sector Studies reform that widens the initial audit risk discontinuity. In line with our theory, taxpayers who benefit from audit exemptions above the threshold reduce their relative compliance, while those below the threshold improve it. However, mean reported profits increase by 16.2% in treated sectors over six years, suggesting – in light of our test – that a disclosed rule performs better than a flat undisclosed one.

with Giovanni Andreottola

Abstract. We study the use of simplistic arguments in political communication, developing a novel model of mobilization through rhetoric with naive and sophisticated voters. We show that politicians sometimes choose simplistic arguments in order to appear more competent, exploiting what we call Poe’s Law: the uncertainty about whether the argument used by the politician reflects her own competence or is ‘degraded’ to meet the naive electorate’s demand for simplistic arguments. We compare the Bayes-Nash game with a game in which sophisticated voters are unable to conceptualize Poe’s Law, dismissing their peers’ cognitive abilities and identifying with a leader who speaks to a fully naive crowd. The two games have opposing predictions on how expected simplism departs from its demand-driven benchmark, as well as on the interpretation of extreme arguments. Our results demonstrate that dismissal is a valid rationalization of an overly simplistic political debate.

Abstract. We study the distribution of goods that can be freely duplicated and damaged. The monopolist solves a screening problem that is not cost-separable and requires a concave-linear preference specification to generate nontrivial allocations, associated with two interdependent inefficiencies: underacquisition and damaging. In a game where firms acquire market power through an irreversible investment, both monopoly and active competition emerge as equilibria. Although it worsens underacquisition and induces double-spending, competition may increase welfare because it mitigates the damaging inefficiency by distributing a version for free.

Abstract. We associate each multi-armed bandit problem with an uncertainty function (in the sense of DeGroot, 1962) so that the implied information function is traded off one-for-one with expected utility at each belief state to determine the optimal policy. In the main application we model policymaking as a bandit problem where the arms are treatment incentive schemes (BDM mechanisms) whose payoff value and correlation are disciplined by an economic theory. The associated uncertainty function identifies the set of decision-relevant parameters and quantifies the estimation content of selection mechanisms. A regime is a collection of “similar” mechanisms that map models onto the proportion of treated (the propensity score): fully coercive (RCT) and fully voluntary (posted price) schemes are extreme examples. To each regime and reduced-form model is associated a distortion function, which tilts the RCT (identity) map from the propensity score into the average treatment effect. We propose a sampling procedure that (epsilon-)validly implements all BDM mechanisms while minimizing the variance of the empirical propensity score and preserving information continuity. Fully voluntary mechanisms are control-optimal under linear preferences, but their valid implementation induces the largest variance of the sample size used for estimation, which is undesirable.
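A toy illustration of the core idea (not the paper's construction and without the BDM/RCT application): a two-armed Bernoulli bandit whose policy, at each belief state, trades expected reward off one-for-one against the expected reduction in a DeGroot-style uncertainty function, taken here to be the posterior variance of the pulled arm's mean.

```python
import random

def var_beta(a, b):
    """Posterior variance of a Bernoulli mean under a Beta(a, b) belief."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

def index(a, b):
    """Expected reward plus, one for one, the expected reduction in the
    uncertainty function from one more pull of this arm."""
    mean = a / (a + b)
    expected_var_after = mean * var_beta(a + 1, b) + (1 - mean) * var_beta(a, b + 1)
    return mean + (var_beta(a, b) - expected_var_after)

def run(p=(0.45, 0.55), T=5_000, seed=1):
    rng = random.Random(seed)
    beliefs = [[1, 1], [1, 1]]          # Beta(1, 1) priors on both arms
    total = 0
    for _ in range(T):
        arm = max(range(len(p)), key=lambda i: index(*beliefs[i]))
        reward = 1 if rng.random() < p[arm] else 0
        beliefs[arm][0] += reward       # Bayesian update of the pulled arm
        beliefs[arm][1] += 1 - reward
        total += reward
    return total / T                    # average realized reward

print(run())
```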