Importance of publishing negative results.

Post date: Feb 3, 2011 12:30:05 PM


Some correspondence in Nature today highlighted an interesting consideration, namely that failure to publish negative results can lead to a type I error (false rejection of the null hypothesis). Although that may sound weird, it makes (some) sense. If a researcher conducts a particular experiment but fails to find a significant effect, the result may not be published, either because the researcher thinks it is not worth it, or because the editor of the journal they submit to rejects the paper. At a 0.05 significance level, that means that if 20 identical studies of a true null effect were performed, on average 19 would find the same "negative" result (and not publish it), while one of them by mere chance would find a significant effect (and publish it). The published literature then contains only that lone false positive (see Gupta and Stopfer, Nature 470, p39).
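As a quick back-of-the-envelope check, here is a small simulation of that scenario. It is only a sketch: the two-sample t-test, the sample size of 30 per group, and the normally distributed data are my illustrative choices, not anything from the correspondence.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

ALPHA = 0.05
N_STUDIES = 20    # identical, independent studies of the same question
N_PER_GROUP = 30  # illustrative sample size per group (my assumption)

false_positives = 0
for _ in range(N_STUDIES):
    # The null hypothesis is true: both groups come from the same distribution.
    a = rng.normal(0.0, 1.0, N_PER_GROUP)
    b = rng.normal(0.0, 1.0, N_PER_GROUP)
    _, p_value = stats.ttest_ind(a, b)
    if p_value < ALPHA:
        # This is the one study that would make it into the literature.
        false_positives += 1

print(f"{false_positives} of {N_STUDIES} null studies came out 'significant'")
print(f"expected on average: {N_STUDIES * ALPHA:.1f}")
```

Run it a few times with different seeds and the count hovers around one: exactly the study that, under selective publication, becomes the entire published record.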

I agree with the authors that not publishing negative results (if the experiment was decent) is bad practice.

A WELL DESIGNED EXPERIMENT WITH SUFFICIENT STATISTICAL POWER SHOULD BE PUBLISHED NO MATTER THE OUTCOME. Besides the argument about wrong inference, think of the waste of resources: 20 researchers doing the exact same thing over and over again.

However, I think reality may not be as bad as the authors depict, at least not once the chance positive result has been published. Maybe I am too optimistic, but if a question is so important that 20 researchers address it, quite likely at least one of the 19 "negative" researchers will bother to attempt to replicate the single positive outcome. And with 95% probability they will fail, as will the next few attempts after that. Those kinds of "negative" results, contradicting a published positive finding, will of course be accepted in no time.
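The arithmetic behind that optimism is simple; a minimal sketch, again assuming the effect truly is null and each replication is an independent test at the same 0.05 level:

```python
ALPHA = 0.05

# Under a true null, each independent replication attempt comes out
# non-significant with probability 1 - ALPHA = 0.95.
for k in (1, 2, 3, 5):
    p_all_fail = (1 - ALPHA) ** k
    print(f"P(all {k} replication(s) non-significant) = {p_all_fail:.3f}")
```

When there is no effect, even five replications all coming out non-significant happens about 77% of the time, so a string of failed replications is the expected outcome, and publishing them corrects the record quickly.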