The media often follow style guides. In them, you may find guidance on whether you are allowed to brazenly split infinitives, or on the correct spelling of J.Lo. But they exist for an important reason: to provide clear communication to the reader. These guides, however, don’t really have much to say about numbers…
In our book, How To Read Numbers, we talk about the ways in which the media often get numbers wrong and why this is important. At the end of the book, we offer a Statistical Style Guide for journalists (which you can read below), in the vain hope of solving these problems and making the world a better place.
If you want journalists to follow the guide, then please join our campaign! And if you happen to be a journalist and would like to endorse the guide, please get in touch with Tom or David.
The Statistical Style Guide
1. Put numbers into context
Ask yourself: is that a big number? If Britain dumps 6 million tons of sewage in the North Sea each year, that sounds pretty bad.[i] But is it a lot? What’s the denominator? What numbers do you need to understand whether that is more or less than you’d expect? In this case, for instance, it’s probably relevant that the North Sea contains 54 thousand billion tons of water.
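If it helps, here’s a rough back-of-the-envelope sketch in Python (using the figures quoted above; it’s purely illustrative) showing what dividing by the denominator does to that scary-sounding number:

```python
# Back-of-the-envelope check: how big is 6 million tons of sewage
# relative to the 54 thousand billion tons of water in the North Sea?
sewage_tons = 6_000_000               # annual sewage dumped (figure quoted above)
north_sea_tons = 54_000_000_000_000   # 54 thousand billion tons of water

fraction = sewage_tons / north_sea_tons
print(f"Sewage is about {fraction:.1e} of the North Sea's mass each year")
print(f"That is roughly one part in {round(1 / fraction):,}")
```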
2. Give absolute risk, not just relative
If you tell me that eating burnt toast will raise my risk of a hernia by 50 per cent, that sounds worrying. But unless you tell me how common hernias are, it’s meaningless. Let readers know the absolute risk. The best way to do this is to use the expected number of people it will affect. For instance: ‘Two people in every 10,000 will suffer a hernia in their lifetime. If they eat burnt toast regularly, that rises to three people in every 10,000.’ And be wary of reporting on how ‘fast-growing’ something is: a political party, for instance, can easily be the ‘fastest-growing’ party in Britain if it has doubled in size from one member to two members.
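Here’s a toy worked example in Python, using the made-up burnt-toast figures above, just to show how different the same finding looks once it’s expressed in absolute terms:

```python
# Toy illustration with the hypothetical burnt-toast figures from the text:
# the same 50 per cent relative rise, expressed as an absolute change.
baseline_risk = 2 / 10_000       # 2 people in every 10,000
relative_increase = 0.5          # "raises my risk by 50 per cent"

new_risk = baseline_risk * (1 + relative_increase)
extra_cases = (new_risk - baseline_risk) * 10_000

print(f"Relative increase: {relative_increase:.0%}")
print(f"Absolute risk: {baseline_risk * 10_000:.0f} in 10,000 -> {new_risk * 10_000:.0f} in 10,000")
print(f"Extra cases per 10,000 people: {extra_cases:.0f}")
```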
3. Check whether the study you’re reporting on is a fair representation of the literature
Not all scientific papers are born equal. When Cern found the Higgs boson, or Ligo detected gravitational waves, those findings were worth reporting on in their own right. But if you’re reporting on a new study that finds that red wine is good for you, it should be presented in the context that there are lots of other studies, and that any individual study can only be part of the overall picture. Ringing up an expert in the field who didn’t work on the study you’re reporting on and asking them to talk you through the consensus on the subject is a good idea.
4. Give the sample size of the study – and be wary of small samples
The Oxford University Covid-19 vaccine trial in which Tom was a participant had 10,000 subjects and should be robust against statistical noise or random errors.[ii] A psychological study looking at fifteen undergraduates and asking whether washing their hands makes them feel less guilty is much less so. It’s not that small studies are always bad, but they are more likely to find spurious results, so be wary of reporting on them; we somewhat arbitrarily suggest that if a study has fewer than 100 participants, that’s a reason to be cautious. Some smaller studies can be very robust, and this is not a hard-and-fast rule; but, all else being equal, bigger is better. Relatedly, surveys and polls often won’t have unbiased samples; be alert to that too.
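If it’s not obvious why small samples are noisy, here’s a minimal simulation sketch (a simple yes/no question with a true rate of 50 per cent; the sample sizes are the ones mentioned above):

```python
import random

# Minimal sketch: estimate a simple 50% yes/no rate from small and large
# samples, and watch how much the estimates bounce around purely by chance.
random.seed(1)
TRUE_RATE = 0.5

def estimate(sample_size: int) -> float:
    """Proportion of 'yes' answers in one simulated sample."""
    return sum(random.random() < TRUE_RATE for _ in range(sample_size)) / sample_size

for n in (15, 10_000):
    estimates = [estimate(n) for _ in range(5)]
    print(f"n = {n:>6}: " + ", ".join(f"{e:.2f}" for e in estimates))
```

The five estimates from samples of fifteen scatter widely; the five from samples of 10,000 barely move.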
5. Be aware of problems that science is struggling with, like p-hacking and publication bias
Journalists can’t be expected to be experts in every field, and it’s hard to blame them for missing problems in science that scientists themselves often miss. But there are some warning flags. For instance, if a study isn’t ‘preregistered’, or better yet a Registered Report, then scientists might have gone back in once they’d collected their data in order to find something that could get them a published paper. Alternatively, it might be that there are hundreds of other studies sitting unpublished in a scientist’s desk drawer somewhere. Also, if a result is surprising – as in, it’s not what you’d expect given the rest of the findings in the field – then that might be because it’s not true. Sometimes science is surprising, but most of the time, not very.
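Here’s a rough simulation sketch of why that matters: even when there is no real effect at all, testing lots of outcomes will often turn up something that looks ‘significant’ (the 0.05 threshold below is just the conventional one):

```python
import random

# Rough sketch of why unregistered, exploratory analysis is risky: with no
# real effect at all, testing many outcomes often yields a 'significant' one.
# (Under the null hypothesis, p-values are roughly uniform between 0 and 1.)
random.seed(7)

n_outcomes = 20
p_values = [random.random() for _ in range(n_outcomes)]
false_positives = [p for p in p_values if p < 0.05]

print(f"{len(false_positives)} of {n_outcomes} null outcomes came out 'significant'")
print(f"Chance of at least one false positive: {1 - 0.95 ** n_outcomes:.0%}")
```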
6. Don’t report forecasts as single numbers. Give the confidence interval and explain it.
If you report that the Office for Budget Responsibility’s model says that the economy will grow 2.4 per cent next year, that sounds accurate and scientific. But if you don’t mention that their 95 per cent uncertainty interval is between -1.1 per cent and +5.9 per cent, then you’ve given a spurious sense of precision. The future is uncertain, even though we sometimes wish it wasn’t. Explain how the forecast is made and why it’s uncertain.
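As a small illustrative sketch (the figures are the hypothetical ones above, not a real OBR forecast), the fix can be as simple as putting the interval into the sentence:

```python
# Small sketch: report the interval alongside the central estimate
# (the figures here are the illustrative ones above, not a real forecast).
central = 2.4           # central growth forecast, per cent
low, high = -1.1, 5.9   # 95 per cent uncertainty interval, per cent

print(f"The model's central forecast is {central}% growth next year, "
      f"but its 95% interval runs from {low}% to {high}%.")
```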
7. Be careful about saying or implying that something causes something else
Lots of studies find correlations between one thing and another – between drinking fizzy drinks and violence, for instance, or between vaping and smoking weed. But the fact that two things are correlated doesn’t mean that one causes the other; there could be something else going on. If the study isn’t a randomised experiment, then it’s much more difficult to show causality. Be wary of saying ‘video games cause violence’ or ‘YouTube causes extremism’ if the study can’t show it.
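For a feel of how ‘something else going on’ can manufacture a correlation, here’s a toy simulation in which a hidden trait drives both behaviours (every variable here is invented purely for illustration):

```python
import random

# Toy sketch of confounding: a hidden third factor drives two behaviours,
# so they correlate even though neither causes the other.
random.seed(3)

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length lists."""
    mean_x, mean_y = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

risk_taking = [random.gauss(0, 1) for _ in range(10_000)]
vaping = [r + random.gauss(0, 1) for r in risk_taking]        # driven by risk-taking
weed_smoking = [r + random.gauss(0, 1) for r in risk_taking]  # also driven by risk-taking

print(f"Correlation between vaping and weed-smoking: {pearson(vaping, weed_smoking):.2f}")
```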
8. Be wary of cherry-picking and random variation
If you notice that something has gone up by 50 per cent between 2010 and 2018, have a quick look – if you’d started your graph from 2008 or 2006 instead, would the increase still have looked as dramatic? Sometimes numbers jump around a bit, and by picking a starting point where they happened to be low, you can make random variation look like a shocking story. That’s especially true of relatively rare events, like murder or suicide.
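Here’s a quick simulation sketch of that effect: a rare event with a completely constant underlying rate still produces year-to-year changes that can look dramatic if you pick the right baseline year:

```python
import random

# Sketch: a rare event with a constant underlying rate (roughly 20 per year),
# so every apparent trend below is pure chance.
random.seed(5)
counts = {year: sum(random.random() < 20 / 365 for _ in range(365))
          for year in range(2006, 2019)}
print("Counts by year:", counts)

# The 'change since the baseline year' depends enormously on which
# baseline you pick, even though nothing real has changed.
changes = {start: (counts[2018] - counts[start]) / counts[start]
           for start in range(2006, 2018)}
print("Change to 2018, by choice of baseline year:",
      {year: f"{change:+.0%}" for year, change in sorted(changes.items())})
```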
9. Beware of rankings
Has Britain dropped from the world’s fifth-largest economy to the seventh? Is a university ranked the forty-eighth best in the world? What does that mean? Depending on the underlying numbers, it could be a big deal or it could be irrelevant. For example, suppose that Denmark leads the world with 1,000 public defibrillators per million people, and the UK is seventeenth with 968. That isn’t a huge difference, especially if you compare it with countries that have no public defibrillators. Does being seventeenth in this case mean that the UK health authorities have a callous disregard for public emergency first-aid installations? Probably not. When giving rankings, always explain the numbers underpinning them and how they’re arrived at.
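A tiny worked sketch with those invented defibrillator figures makes the point that seventeenth place can sit a whisker behind first:

```python
# Tiny sketch with the invented defibrillator figures from the text:
# a gap of sixteen ranking places can hide a very small real difference.
denmark_per_million = 1_000   # ranked first
uk_per_million = 968          # ranked seventeenth

gap = denmark_per_million - uk_per_million
print(f"The UK trails the leader by {gap} defibrillators per million people,")
print(f"a shortfall of {gap / denmark_per_million:.1%} despite sitting 16 places lower")
```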
10. Always give your sources
This one is key. Link to, or include in your footnotes, the place you got your numbers from. The original place: the scientific study (the journal page, or the doi.org page), the Office for National Statistics bulletin, the YouGov poll. If you don’t, you make it much harder for people to check the numbers for themselves.
11. If you get it wrong, admit it
Crucially – if you make a mistake and someone points it out, don’t worry. It happens all the time. Just say thank you, correct it, and move on.
*
[i]. Roger Milne, ‘Britain in row with neighbours over North Sea dumping’, New Scientist, 27 January 1990, https://www.newscientist.com/article/mg12517011-200-britain-in-row-with-neighbours-over-north-sea-dumping/
[ii]. COVID-19 Phase II/III Vaccine Study (COV002)