Transparency (Star Wars)

Some of my recent papers deal with research transparency. Researchers face increasing pressure to publish their results in top journals and may thus select, from a larger set of possible specifications, the subset that yields positive results. These practices may lead policymakers and the academic community to put more faith in starry stories published in good outlets than in studies presenting negative results.

Star Wars: The Empirics Strike Back

During my PhD at the Paris School of Economics, I started working on research transparency with fellow PhD students M. Lé, M. Sangnier and Y. Zylberberg. The outcome of our joint work is the paper Star Wars: The Empirics Strike Back, recently published in the AEJ: Applied Economics. In this paper, we collect all the p-values published between 2005 and 2011 in three of the most prestigious journals in economics and document a strong empirical regularity: the distribution of p-values has a two-humped camel shape, with a first hump for high p-values, missing p-values between 25% and 10%, and a second hump for p-values slightly below 5%.

To improve our understanding of research transparency, we go beyond documenting publication bias and attempt to identify the sub-literatures that suffer most from these biases. We relate the misallocation of p-values to authors' and papers' characteristics and find that it correlates with incentives to get published: the misallocation is lower for older and tenured professors than for younger researchers. It also correlates with how important the empirical result is for publication prospects: in theoretical papers, where the empirical analysis is less crucial, the misallocation is much lower. Moreover, the two-humped camel shape is less visible in articles using data from randomized control trials or laboratory experiments.
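The mechanism behind the camel shape can be illustrated with a stylised simulation. The model below is invented for illustration and is not the paper's method: it assumes a true null (so honest p-values are uniform), and lets a "specification searcher" who draws a p-value between 0.05 and 0.25 sometimes find an alternative specification that lands just below 0.05. The names (`hacked_p`, the `window` and `success` parameters) are hypothetical.

```python
import random

def honest_p(rng):
    """Under a true null, an honestly reported p-value is uniform on [0, 1]."""
    return rng.random()

def hacked_p(rng, window=0.25, success=0.5):
    """Stylised specification search: when the first draw lands between
    0.05 and `window`, the researcher finds an alternative specification
    with probability `success` and reports a p-value just below 0.05."""
    p = rng.random()
    if 0.05 < p < window and rng.random() < success:
        return rng.uniform(0.01, 0.05)
    return p

rng = random.Random(0)
honest = [honest_p(rng) for _ in range(20000)]
hacked = [hacked_p(rng) for _ in range(20000)]

def share(ps, lo, hi):
    """Fraction of p-values falling in [lo, hi)."""
    return sum(lo <= p < hi for p in ps) / len(ps)

# The 'camel shape': extra mass just below 0.05, missing mass above it.
hump_honest, hump_hacked = share(honest, 0.0, 0.05), share(hacked, 0.0, 0.05)
valley_honest, valley_hacked = share(honest, 0.10, 0.25), share(hacked, 0.10, 0.25)
print(f"below 0.05: honest {hump_honest:.3f} vs hacked {hump_hacked:.3f}")
print(f"0.10-0.25:  honest {valley_honest:.3f} vs hacked {valley_hacked:.3f}")
```

Even this crude stopping rule reproduces the qualitative pattern: a hump of p-values just under the 5% threshold and a valley of "missing" p-values above it.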

We also study the effectiveness of recent innovations in research transparency, such as data repositories. The analysis of the different sub-samples does not show conclusive evidence that making data or programs available on journal websites mitigates specification searching. One obvious advantage of having data and code is the possibility of reproducing the results of published articles. While this might not have an impact on p-hacking, it increases the likelihood of detecting mistakes.

Publication Bias and Editorial Statement on Negative Findings

In February 2015, the editors of several health economics journals sent out an editorial statement encouraging referees to accept studies that "have potential scientific and publication merit regardless of whether such studies' empirical findings do or do not reject null hypotheses that may be specified." In a joint project with C. Blanco-Perez, we test whether this editorial statement on negative results induced a change in the number of published papers not rejecting the null. More precisely, we collect p-values from two health economics journals and compare the distribution of tests before and after the editorial statement. We find that test statistics in papers submitted and published after the statement are less likely to be statistically significant. In other words, the editorial statement decreased the extent of publication bias in health economics.
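The before/after comparison of significance shares is, in its simplest form, a two-proportion test. A minimal sketch, with made-up counts (the paper's actual data and tests differ):

```python
import math

def two_proportion_z(k1, n1, k2, n2):
    """z-statistic for H0: the two shares are equal (pooled variance)."""
    p1, p2 = k1 / n1, k2 / n2
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical counts of tests significant at the 5% level, before and
# after an editorial statement (numbers invented for illustration).
sig_before, n_before = 700, 1000
sig_after, n_after = 620, 1000

z = two_proportion_z(sig_before, n_before, sig_after, n_after)
# Two-sided p-value from the standard normal, via math.erf (no scipy needed).
p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
print(f"share before {sig_before/n_before:.2f}, after {sig_after/n_after:.2f}, "
      f"z = {z:.2f}, p = {p:.4f}")
```

With these illustrative counts, an 8-point drop in the share of significant tests is itself highly statistically significant, which is the kind of shift the editorial-statement comparison looks for.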

Overall, our results provide suggestive evidence that the decrease in the share of tests significant at conventional levels is due both to a change in editors' preferences for negative findings and to a change in authors' and/or referees' behavior. These findings have interesting implications for editors and the academic community: they suggest that incentives may be aligned to promote more transparent research and that editors may reduce the extent of publication bias quite easily.

The paper is available here.

Methods Matter: P-Hacking and Causal Inference in Economics

The economics 'credibility revolution' has promoted the identification of causal relationships using difference-in-differences (DID), instrumental variables (IV), randomized control trials (RCT) and regression discontinuity design (RDD) methods. The extent to which a reader should trust claims about the statistical significance of results proves very sensitive to the method used. Applying multiple approaches to 13,440 hypothesis tests reported in 25 top economics journals in 2015, we show that selective publication and p-hacking are a substantial problem in research employing DID and (in particular) IV, while RCT and RDD are much less problematic. Almost 25% of claims of marginally significant results in IV papers are misleading.
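One simple way to compare methods is a bunching diagnostic: the ratio of test statistics just above the |z| = 1.96 significance threshold to those just below it. Absent selection, the ratio should be close to 1. The sketch below uses invented z-statistics (not the paper's data or estimator) in which the hypothetical "IV" sample has some mass shifted from just below to just above the threshold:

```python
import random

def bunching_ratio(zs, threshold=1.96, width=0.30):
    """Count of test statistics in [threshold, threshold + width) relative
    to those in [threshold - width, threshold): a crude bunching measure."""
    above = sum(threshold <= z < threshold + width for z in zs)
    below = sum(threshold - width <= z < threshold for z in zs)
    return above / below if below else float("inf")

rng = random.Random(1)
# Hypothetical z-statistics: 'rct' drawn smoothly across the threshold;
# 'iv' with 60% of the just-below mass relocated to just above it.
rct = [rng.uniform(1.0, 3.0) for _ in range(4000)]
iv = [rng.uniform(1.96, 2.26) if 1.66 <= z < 1.96 and rng.random() < 0.6 else z
      for z in rct]

print(f"RCT ratio: {bunching_ratio(rct):.2f}")
print(f"IV ratio:  {bunching_ratio(iv):.2f}")
```

A ratio near 1 for the smooth sample and well above 1 for the shifted one mirrors, in caricature, the pattern of little bunching in RCT/RDD papers and pronounced bunching in IV papers.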

The paper is available here.

Data and Codes

The paper Star Wars required original data collection. The data and a readme document are available online on the website of the AEJ: Applied Economics:

The data and codes to replicate the paper Publication Bias and Editorial Statement on Negative Findings are available here:


Brodeur et al., 2016. "Star Wars: The Empirics Strike Back," American Economic Journal: Applied Economics, vol. 8(1): 1-32.

Blanco-Perez and Brodeur, 2019. "Publication Bias and Editorial Statement on Negative Findings," Forthcoming, Economic Journal.

Blanco-Perez and Brodeur, 2019. "Transparency in Empirical Economic Research," IZA World of Labor.

Brodeur et al., 2018. "Methods Matter: P-Hacking and Causal Inference in Economics." IZA Discussion paper, R&R American Economic Review.

Grants & Prizes

Leamer-Rosenthal Prize for Open Social Science, Emerging Researcher, Berkeley Initiative for Transparency in the Social Sciences (US$10,000):

SSMART Grant, BITSS (US$25,927), with support from the Laura & John Arnold Foundation and the William & Flora Hewlett Foundation:

Paris School of Economics Research Fund, Star Wars: The Empirics Strike Back (€4,500)



Media Coverage

The Economist:


Deutschlandradio Kultur:

Radio Canada:


Methods Matter:

Washington Center for Equitable Growth:

The Replication Network:


An Economist's Journey:

Editorial Statement:

The Replication Network:

Summary of the findings on the Center for Effective Global Action's Website

Star Wars:

RunningRES Blog:

Marginal Revolution:

IZA Newsroom:


Gem News: Are formulas in top journals to be trusted?

Edward Conard:

Economic Logic Blog:

Econometrics Beat: Dave Giles' Blog:

David McKenzie's Blog:

Data Colada:

Chris Blattman's Blog:

Cherokee Gothic:



Andrew Gelman's Blog:

Alpha Architect:

Aid Thoughts:

A (Budding) Sociologist's Commonplace Book:

Other media: Lorenzo Burlon's Blog, The Lumpy Economist, Nada €$ Gratis, Tarjomaan, Videnskab dk, etc.

Media Briefing

Royal Economic Society: