BPS Replication and Reproducibility in Psychological Science

Post date: Jun 1, 2016 6:06:08 PM

The Royal Society, 26th May 2016, Blog by Su Morris

Originally posted here

This event was a collaborative venture between the British Psychological Society, Experimental Psychology Society, and the Association of Heads of Psychology Departments. It was designed to promote discussion of replication and reproducibility in psychology in response to Nosek et al.'s (2015) paper. The event was very well attended, illustrating the current high level of interest in this topic.

Although the findings in the Nosek et al. (2015) paper may have been surprising to some readers, a prediction-market-style investigation of researchers' views on the replicability of the studies found that those views correlated closely with the actual replication outcomes. This suggests a certain level of acknowledgement within the field of psychology that some findings are less robust than others. So why is this the case, and what are the solutions?

P-values – one issue is that publication bias leads to a general over-estimation of effect sizes and an over-abundance of nominally 'significant' p-values. Finding a significant result in exploratory data analysis was likened to walking a garden of forking paths – if a dataset offers enough comparisons, or forks in the path, or rolls of the dice, eventually some association or interaction is likely to come out significant by chance. This problem of multiple comparisons is also pertinent to ANOVA analyses. An example was made of the field of genetics, where a significance threshold of 5×10⁻⁸ is standard in genome-wide association studies (GWAS), precisely to acknowledge the enormous number of tests being performed. Improved training is needed, from undergraduate level upwards, to highlight the questionable nature of 'significant' p-values.
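To make the forking-paths problem concrete, here is a minimal Python simulation (my own illustration, not material from the talks; the study count, number of forks, and group size are all assumptions). It generates null 'studies' in which no true effect exists anywhere, lets each study explore 20 comparisons, and counts how often at least one comparison comes out 'significant' at p < .05:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

n_studies = 5_000  # simulated null "studies" (no true effect anywhere)
n_forks = 20       # comparisons explored per study ("forks in the path")
n = 30             # participants per group (illustrative)
alpha = 0.05

hits = 0
for _ in range(n_studies):
    # Each "fork" is an independent two-sample t-test on pure noise.
    p_values = [
        stats.ttest_ind(rng.normal(size=n), rng.normal(size=n)).pvalue
        for _ in range(n_forks)
    ]
    if min(p_values) < alpha:
        hits += 1

print(f"Simulated P(>=1 'significant' fork): {hits / n_studies:.2f}")
print(f"Theoretical 1-(1-alpha)^k:           {1 - (1 - alpha) ** n_forks:.2f}")
# Both come out around 0.64: twenty chances at alpha = .05 make a
# false positive more likely than not.
```

In a real forking-paths scenario the comparisons share the same data and are correlated, so the inflation is smaller than this independent-test bound, but the qualitative point stands: unplanned flexibility in analysis makes spurious 'significance' the expected outcome.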

Power – low power and narrow sampling increase the chance of error and reduce the likelihood of findings being replicated. A possible solution might be collaborative working: rather than increasing the sample size at a single site, the same methodology could be run at other centres around the country, resulting in a larger combined sample – and one with more variability, which more closely represents the wider population.
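As a rough sketch of why pooling across centres helps (my own normal-approximation calculation, not figures from the event; the effect size d = 0.3 and the sample sizes are assumptions): one centre testing 30 participants per group has only about 20% power to detect such an effect, while five such centres pooled reach roughly 75%.

```python
from scipy.stats import norm

def approx_power(d, n_per_group, alpha=0.05):
    """Normal-approximation power of a two-sided two-sample t-test.

    d is the standardized effect size (Cohen's d); under the
    alternative the test statistic is roughly Normal(d*sqrt(n/2), 1).
    """
    z_crit = norm.ppf(1 - alpha / 2)
    ncp = d * (n_per_group / 2) ** 0.5
    return norm.cdf(ncp - z_crit) + norm.cdf(-ncp - z_crit)

d = 0.3  # assumed true effect, typical of many psychology findings
for label, n in [("single centre", 30), ("five centres pooled", 150)]:
    print(f"{label:20s} n={n:3d}/group  power ~ {approx_power(d, n):.2f}")
```

Multi-site collaboration buys this extra power while also sampling a more heterogeneous population, so a result that holds up is more likely to generalise.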

Other issues – selective citation of previous studies that found supporting evidence; methods that are not critically and objectively evaluated; variable willingness to make analyses transparent by sharing data; and a focus on results rather than methodology. One important step in addressing these problems is pre-registration: by shifting the publication decision from results to method, both HARKing (hypothesising after the results are known) and publication bias should be drastically reduced. Importantly, this shrinks the gulf between what is good for science (i.e. high-quality research) and what is good for scientists (i.e. results that can be published). Additionally, feedback on methods can be received before data collection begins, improving the quality of the research.

In sum, the presentations and panel discussion were thought-provoking and interesting, raising many important points. The consensus was that there is broad support for making research more open and accountable. However, this enthusiasm was tempered with cautionary notes about current incentives, publication practice, and fraudulent research practices. Although psychology research isn't 'used to fly a plane', there is a responsibility to ensure it is accurate and transparent, especially when it forms the basis for interventions and policy. It is not all doom and gloom, however – the field of psychological science should be applauded for responding quickly and taking this opportunity to examine its practices; there is a long way to go, but the debate and discussion at this event were certainly a useful step.

Speakers:

Professor Marcus Munafo (University of Bristol) – https://www.dropbox.com/s/q8hf15m15er2a2o/Crisis%20or%20Opportunity.pptx?dl=0

Professor Roger Watt (University of Stirling)

Professor Dorothy Bishop (University of Oxford) – http://www.slideshare.net/deevybishop/what-is-the-reproducibility-crisis-in-science-and-what-can-we-do-about-it

Professor Chris Chambers (Cardiff University) – https://dl.dropboxusercontent.com/u/15691907/BPS-chambers.pdf

Kathryn Sharples (Associate Director, Editorial Development, Wiley)

Nick Brown (PhD Student, University Medical Centre, Groningen) – http://nick.brown.free.fr/stuff/BPS/Nick%20Brown%20-%20BPS%20-%2020160524.pdf

Dr Prateek Buch (Policy Associate, Sense about Science)