Evidence-based medicine versus prevention?

[Figure: EBM versus prevention?]

The past few weeks have seen a big win in terms of the attention paid to public health and the recognition of the ‘wider determinants of health’ for prevention. October marked the 40th anniversary of the Alma-Ata declaration, which in 1978 affirmed a global commitment to prevention through ‘Primary Health Care (PHC)’, focused on “empower[ing] people and communities; multisectoral policy and action; and primary care and essential public health functions as the core of integrated health services.” The anniversary also saw a re-commitment to these original objectives, with global leaders gathering to sign the Astana Declaration with the same aims. And the UK’s Department of Health & Social Care followed suit in early November by launching the ‘Prevention is better than cure’ policy document, with the specific aim of ensuring “that people can enjoy at least five extra healthy, independent years of life by 2035, while narrowing the gap between the experience of the richest and poorest”.

It has long been accepted that health is produced primarily outside of healthcare services (which are generally estimated to account for only about 10% of health outcomes), yet our focus (even, I think, in the most recent budget) still seems to be on protecting healthcare services while cutting other vital services through austerity measures in the wake of the global financial crisis. Let’s see whether the commitments above mark a renewed effort to enact real change.

Why has it taken 40 years for us to come back to the same place?

One problem that might be holding that change back is the apparent tension between the evidence-based medicine (and evidence-based policy) paradigm and the prevention agenda. Evidence-based medicine advocates using “current best evidence” to plan care, with ‘best’ defined by a hierarchy of evidence that places randomised controlled trials and their systematic reviews at the top.

This problem, as I see it, is illustrated in the figure above, where I’ve adapted the World Health Organization’s (WHO) model of the causes of chronic disease (the boxes of risk factors) and added a timeline showing where these intersect with a person’s increasingly likely use of health services for these diseases over the life course.

Traditionally, our health systems, and thus also our research methods, have focused on the stage where disease(s), and to an extent physiological risk factors, present to the healthcare system. What PHC aims to do is to move further down the causal chain, to tackle the ‘causes of the causes’, a.k.a., the ‘social determinants of health’.

Research implications

Estimating the causal effect of an intervention is made difficult by the presence of other variables, experienced at the same time, that also affect the outcome of interest (i.e. ‘confounding factors’). What a randomised controlled trial does well (and what quasi-experimental methods try to approximate) is balance out those confounding factors, letting us estimate the true intervention effect on average (more here).
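
To make the confounding problem concrete, here is a minimal simulation sketch (in Python, with entirely invented numbers, not drawn from any real study): a hypothetical confounder drives both who receives an intervention and the health outcome, so a naive observational comparison is biased, while a randomised comparison recovers the true effect on average.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
true_effect = 2.0

# Hypothetical confounder (e.g. deprivation) that affects both exposure and outcome.
confounder = rng.normal(size=n)

def outcome(treated):
    # Outcome depends on the confounder AND on receiving the intervention.
    return 5 * confounder + true_effect * treated + rng.normal(size=n)

# Observational world: the confounder also drives who gets the intervention.
treated_obs = rng.binomial(1, 1 / (1 + np.exp(confounder)))
y_obs = outcome(treated_obs)

# Randomised world: a coin flip decides, independent of the confounder.
treated_rct = rng.binomial(1, 0.5, size=n)
y_rct = outcome(treated_rct)

naive = y_obs[treated_obs == 1].mean() - y_obs[treated_obs == 0].mean()
rct = y_rct[treated_rct == 1].mean() - y_rct[treated_rct == 0].mean()

print(f"True effect:           {true_effect:.2f}")
print(f"Naive observational:   {naive:.2f}")   # biased by confounding
print(f"Randomised comparison: {rct:.2f}")     # ~unbiased on average
```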

This is not too difficult when the effect of an intervention is instantaneous or occurs within a relatively short time period, but it becomes harder and harder the further in time the effect is removed from the intervention. Randomised controlled trials, for instance, tend to have a mean follow-up of only around 15 months, mostly under 18 months and only rarely reaching around 10 years. The longer the lag, the more difficult it is to follow people up, and the more opportunity each participant and control has to accumulate additional confounding factors, so we struggle to argue that a measured effect is causal rather than due to something we are not measuring.

The history of now well-established risk factors shows the problems we face: take, for example, the case of tying smoking to health outcomes. Yes, in the case of smoking there were vested interests to come up against too, but it was not easy to prove a causal link ‘beyond reasonable doubt’; the suggestion could always be made that the relationship was association rather than causation, given the type of evidence available (mostly retrospective and cohort studies). In the end, after decades, the sheer weight of evidence combined from multiple fields (“studies from epidemiology, animal experiments, cellular pathology and chemical analytics”), together with legal battles, shifted the case against smoking.

PHC now asks us to take an extra step and show causal effects of risk factors (if we stick with evidence-based medicine) - and, even more difficult, effects of the interventions to prevent them - at an even earlier point in the causal chain than lifestyle behaviours such as smoking. But, with our current tools, we are going to struggle to measure effectiveness in line with the requirements of evidence-based medicine, given that effects on the outcomes we are interested in might take a lifetime to manifest themselves. Theory might tell us it’s a good idea, but how can we really prove that PHC/prevention is where we should be moving our limited resources? Do we require different methods to prove (cost-)effectiveness in line with the evidence-based medicine paradigm (or perhaps to evolve the paradigm and re-weigh the evidence we accept)?

Potential solutions

Just believe?

I’m not particularly comfortable with this one (maybe I’m just too indoctrinated in evidence-based medicine), but we could simply drop the paradigm and accept the theory (my public health training and political leanings certainly mean I want to). Once we identify a potential risk factor, we could agree that reducing it is likely to benefit health outcomes in the long run, and, since we predict that prevention is likely to be cheaper than cure, just do it. One problem, though, is that everything is a potential risk factor: just read the Daily Mail. Another is that the figure above also highlights that non-modifiable risk factors will remain. We are all getting older and will eventually deteriorate and die, so it is not as if all ill health can be prevented (although some estimates of the potential impact of prevention seem to assume so).

It is an assumption, not a fully tested fact, that prevention will always be cheaper or achieve better results than cure (much like the assertion that primary care will be cheaper than secondary care), although, to be fair, there is some evidence that public health interventions do tend to be cost-effective. If we are talking in the long term about shifting resources (one of the three main justifications in the Department of Health & Social Care’s report is that prevention “reduces the pressures on the NHS, social care, and other public services”, which suggests this might be the case), we should probably have some idea of whether a prevention intervention really is better value than whatever cure it displaces, given that neither the total health budget nor the total government budget is unlimited.
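
To illustrate what ‘better value’ would mean in practice, here is a back-of-the-envelope sketch (in Python, with purely hypothetical costs and outcomes): under a fixed budget, compare the cost per healthy life year gained from a notional prevention programme against the cure it would displace.

```python
# Hypothetical programmes competing for the same fixed budget (illustrative numbers only).
programmes = {
    "prevention": {"cost_per_person": 150.0,  "healthy_life_years_gained": 0.03},
    "cure":       {"cost_per_person": 4000.0, "healthy_life_years_gained": 0.50},
}

budget = 1_000_000.0  # fixed budget to allocate

for name, p in programmes.items():
    cost_per_hly = p["cost_per_person"] / p["healthy_life_years_gained"]
    people_covered = budget / p["cost_per_person"]
    total_hly = people_covered * p["healthy_life_years_gained"]
    print(f"{name:>10}: £{cost_per_hly:,.0f} per healthy life year; "
          f"{total_hly:,.0f} healthy life years from the fixed budget")
```

With different (equally plausible) numbers the comparison could just as easily favour cure, which is exactly why the assumption needs testing rather than taking on faith.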

Adapt how we use our existing methods?

Perhaps we can just be more creative about how we use our existing methods. Can we map causal chains better, identify where we are intervening, and give ourselves intermediate outcomes (e.g. health behaviours or physiological risk factors) whose causal effects we can measure on a feasible timescale? If we could combine these with measured ‘usual care’ transition probabilities across the states of the causal chain, we could perhaps estimate (and compare against another intervention) a cost and an effect on the end state we are actually interested in changing, healthy life years for instance, along with the expected timing of that change. For example, if we know that reducing smoking prevalence by X% leads on average to a reduction in lung cancer of Y% within 10 years, we could predict the lung cancer outcome just by measuring an intervention’s effect on smoking prevalence as our intermediate (or proxy) outcome (a toy sketch of this calculation follows below).

But this would probably require the development of more systematic and fit-for-purpose logic models, or preferably systems models that also let us model the probable spillover effects of intervening further down the chain. It would require us to take a broader societal, rather than health-services-only, perspective on cost-effectiveness (one able to capture the effects of health on productivity and on sectors beyond the health system, for instance). We would also need large linked datasets to understand natural disease history more systematically and to populate these models. Or do we instead accept that we are not going to unpick the black box, zoom out from individual interventions, and measure at a higher level of the system: simply compare the outcomes of a system doing A, B and C with one doing A, B and Z, and another doing X, Y and Z?
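
As a toy sketch of the proxy-outcome idea above (in Python, with entirely made-up prevalences and risks standing in for measured ‘usual care’ transition probabilities): measure an intervention’s effect on smoking prevalence within a feasible trial window, then project the downstream lung-cancer outcome we actually care about.

```python
# Toy projection (all numbers invented): translate a measured effect on an
# intermediate outcome (smoking prevalence) into a projected downstream
# outcome (lung cancer cases) via assumed 'usual care' risks.

ANNUAL_RISK_SMOKER = 0.0020      # hypothetical annual lung-cancer risk, smokers
ANNUAL_RISK_NON_SMOKER = 0.0002  # hypothetical annual risk, non-smokers
HORIZON_YEARS = 10

def projected_cases(population: int, smoking_prevalence: float) -> float:
    """Expected lung-cancer cases over the horizon for a cohort with the given
    smoking prevalence, assuming constant annual risks (a crude stand-in for a
    full state-transition model)."""
    smokers = population * smoking_prevalence
    non_smokers = population - smokers
    p_smoker = 1 - (1 - ANNUAL_RISK_SMOKER) ** HORIZON_YEARS
    p_non_smoker = 1 - (1 - ANNUAL_RISK_NON_SMOKER) ** HORIZON_YEARS
    return smokers * p_smoker + non_smokers * p_non_smoker

population = 100_000
baseline_prevalence = 0.15                             # 'usual care' smoking prevalence
intervention_prevalence = baseline_prevalence * 0.90   # trial shows a 10% relative reduction

baseline = projected_cases(population, baseline_prevalence)
with_intervention = projected_cases(population, intervention_prevalence)

print(f"Projected cases, usual care:   {baseline:.0f}")
print(f"Projected cases, intervention: {with_intervention:.0f}")
print(f"Cases averted over {HORIZON_YEARS} years: {baseline - with_intervention:.0f}")
```

A fuller version would replace the crude constant annual risks with a proper state-transition (Markov) model populated from linked data, and attach costs and healthy life years to each state.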

New methods needed?

Probably (this article by Hauck et al., for example, outlines ideas for addressing the limitations of current cost-effectiveness analysis, advocating a typology to better account for interdependencies through systems thinking). But I don’t have the answers. Maybe you do? Or do you think the tensions with the evidence-based medicine paradigm are not the main impediment to making progress on PHC/prevention? Or maybe we’re already making great progress towards PHC/prevention and I’m just being cynical? Happy to hear your thoughts. Feel free to comment below or email me at: jonathan.m.stokes@manchester.ac.uk.

Related to this article, you might be interested to have a look at the NIHR’s Public Health Group. The NIHR Clinical Research Network (CRN) has also extended support into health and social care research taking place in non-NHS settings, including public health and social care (more here).
