
The SOTP fiasco

posted 31 Jul 2017, 07:01 by Robert Forde   [ updated 5 Aug 2017, 04:49 ]
This article was originally published in "Inside Time", the newspaper for prisoners. It is reproduced here with permission.

Sex offender programmes: time to change

A few weeks ago a rumour surfaced that the UK Ministry of Justice had, very abruptly, stopped the Core and Extended Sex Offender Treatment Programmes (SOTP). Expecting to find this was not true, I contacted one of the few MOJ colleagues who will still speak to me (I have been a constant critic of these programmes for 13 years). The colleague told me that they were indeed being stopped, apparently as a result of research carried out by the MOJ itself. However, this was not publicised at the time, and it later emerged that Liz Truss, then the Justice Secretary, had ordered it kept secret. With the benefit of hindsight, it now looks as if she was simply trying to delay the scandal until after the general election, as it must have been obvious that the secret would come out eventually. Quietly, on 30 June, the MOJ’s “secret” scientific report appeared on their website. It was a bombshell: it seemed to show not only that sex offender treatment was ineffective, but that it actually increased risk. What had happened? To answer that we need to go back in time to the early 1990s.

At that time crime rates were considerably higher than they are now. John Major’s government was concerned about crime levels and wondered whether the Home Office (which ran the criminal justice system then) could suggest ways of reducing them. At some point, a group of persuasive psychologists were able to convince Home Office ministers that new-fangled cognitive-behavioural programmes, then in use in North America, might be the answer. More importantly, they persuaded ministers to provide the money. A national scheme for reducing offending by the use of programmes would require the hiring and training of a great many staff. Ministers agreed that the money would be provided, and that it would be ring-fenced to stop it being siphoned off for other purposes within the criminal justice system.

On a common-sense basis, cognitive-behavioural programmes had a lot of appeal. Common sense suggests that we behave the way we do because of the attitudes and beliefs that we hold. Therefore, changing these attitudes and beliefs should result in changes in behaviour. But psychologists, of all people, should be wary of such common-sense interpretations, because these are not based on an understanding of the complexities of human behaviour and the brain functioning which underlies it. One of the things which has changed since the early 1990s is that we now know a great deal more about brain functioning. To be fair, it was realised at the time that these programmes ought to be evaluated to see how effective they were, and even that they should not be evaluated by the same people who carried out the “treatment”. Unfortunately, these good resolutions slipped. Evaluations of the programmes were few, generally not very well designed, and often carried out by people who had a vested interest in showing how good the programmes were. By this I don’t mean a crude financial interest, but people built careers and professional reputations on creating and running these programmes, and it would be silly not to recognise that this colours their view.

As early as 2003 an important paper was published by Marnie Rice and Grant Harris, two Canadian psychologists of international repute. They showed that most evaluations of sex offender treatment were of poor quality, and contained a bias towards showing a treatment effect even if none existed. This research was generally ignored. When I began to quote it in parole reports from about 2004, I was greeted with disbelief. Indeed, I was the subject of a misconduct complaint for allegedly “misrepresenting the research on the effectiveness of sex offender programmes”. This was easily defeated by showing that my opinion was based securely on scientific evidence. That did not necessarily mean I was right, but it meant my position was defensible and therefore not misconduct. I have had to fight off two other such complaints since, and an attempt by MOJ lawyers to have me removed from a committee. Several colleagues have been bullied and discriminated against for taking a similar stand. In the meantime, things moved on. In 2005 the California Sex Offender Treatment Evaluation Project (SOTEP) published its report. This was a large and well-designed trial of a programme similar to the SOTP, and it was hailed in advance as the study which would prove the effectiveness of this kind of programme beyond doubt. When it showed no benefit of treatment, colleagues committed to treatment programmes found all kinds of reasons to suggest that maybe the SOTEP study wasn’t very good after all.

Also in 2005 two academic researchers, Martin Schmucker and Friedrich Lösel, published a meta-analysis of sex offender treatment programmes. Meta-analysis is a powerful technique which combines the results of a number of research projects, effectively making them into one big project. This matters because the more people your study includes, the more reliable it is likely to be. Schmucker and Lösel concluded that the results were “promising” and provided good support for sex offender programmes. Meta-analysis is an excellent way of showing what the research as a whole says in a particular field, but it is only as good as the studies that go into it. Unfortunately, many of those that went into Schmucker and Lösel’s meta-analysis were not very good. They would not have passed the standards set by Rice and Harris in 2003. As computer programmers say, “garbage in, garbage out”. Interestingly, Schmucker and Lösel repeated their meta-analysis in 2015, this time using only good-quality studies, and found no treatment effect for prison-based programmes.
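For readers curious about the mechanics, the combining step in a meta-analysis can be sketched in a few lines. The study effect sizes below are entirely made up for illustration (they are not Schmucker and Lösel’s data); the sketch shows the standard inverse-variance (fixed-effect) pooling idea, where each study is weighted by the precision of its estimate:

```python
import math

# Hypothetical per-study results: (effect estimate, standard error).
# Illustrative numbers only -- NOT taken from any real meta-analysis.
studies = [(-0.40, 0.30), (-0.10, 0.20), (0.05, 0.25), (-0.25, 0.15)]

def fixed_effect_pool(results):
    """Inverse-variance (fixed-effect) pooling of study estimates.

    Each study is weighted by 1/SE^2, so precise (usually larger)
    studies pull the pooled estimate towards themselves.
    """
    weights = [1.0 / se ** 2 for _, se in results]
    pooled = sum(w * est for (est, _), w in zip(results, weights)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

est, se = fixed_effect_pool(studies)
print(f"pooled estimate = {est:.3f}, standard error = {se:.3f}")
```

The pooled standard error is smaller than any single study’s, which is why a meta-analysis of weak studies can look convincingly precise: the pooling machinery works regardless of whether the inputs are biased, hence “garbage in, garbage out”.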

And so to the study which has caused all the trouble, the “Impact Evaluation of the Prison-based Core Sex Offender Treatment Programme”. This study involved 2,562 men who had undergone the prison SOTP, and compared them with 13,219 who had not. To make sure that the two groups were as near identical as possible, they were matched on 87 different characteristics that might be related to risk. This is impressive: studies using matching are not uncommon, but they usually match groups on only a few characteristics. Not only that, the researchers checked mathematically to make sure that the matching was very accurate. I have read thousands of papers, and have rarely seen such close attention to this crucial part of the process. The men were followed up after release for an average of 8.2 years (some for over 13 years). Only 8% of the untreated men were convicted of a further sexual offence during the follow-up period, compared with 10% of the treated men. In other words, the treated men were more likely to commit further sexual offences, not less. The MOJ was understandably horrified by this result, and called in Prof Friedrich Lösel (mentioned above) to provide an independent opinion on the quality of the study. It seems that Prof Lösel advised that the results should be accepted, and was promptly told to say nothing in public.
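To see why a two-percentage-point gap in groups this large is hard to dismiss as chance, one can reconstruct approximate reconviction counts from the rounded figures quoted above and run a standard two-proportion comparison. The counts below are my own back-of-envelope reconstruction from the article’s rounded percentages, not figures from the MOJ report itself:

```python
import math

# Approximate counts reconstructed from the article's rounded figures:
# 2,562 treated men (~10% reconvicted) vs 13,219 untreated (~8%).
n_treated, x_treated = 2562, round(0.10 * 2562)
n_control, x_control = 13219, round(0.08 * 13219)

p_t = x_treated / n_treated      # treated reconviction rate
p_c = x_control / n_control      # untreated reconviction rate
risk_ratio = p_t / p_c           # relative risk of treated vs untreated

# Two-proportion z-test under the pooled null proportion.
p_pool = (x_treated + x_control) / (n_treated + n_control)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_treated + 1 / n_control))
z = (p_t - p_c) / se

print(f"treated rate = {p_t:.3f}, untreated rate = {p_c:.3f}")
print(f"risk ratio = {risk_ratio:.2f}, z = {z:.2f}")
```

On these reconstructed numbers the z statistic comes out well above the conventional 1.96 threshold, i.e. with samples of this size an 8% vs 10% difference is very unlikely to be a chance fluctuation, which is why the result could not simply be brushed aside.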

What are the implications? There may well be some legal implications, but I am not qualified to comment on those, though I recognise that some will feel aggrieved if they have had parole refused, or been made to serve extra time, in order to undertake the SOTP to “reduce their risk”. They may feel further aggrieved to know that this fiasco was avoidable, if only the Parole Board and MOJ colleagues had listened to what I and other independent psychologists were telling them over a decade ago. Our opinions were based on scientific evidence which was already available then, and which we quoted to them, but they did not want to know, preferring to shoot the messenger rather than heed the evidence. The situation seems no better in other countries, and I have recently had numerous email exchanges with North American colleagues following the recent MOJ report. Overwhelmingly, even before reading it, they tend to find ways of minimising it and denying its importance. I am not optimistic that the MOJ research will have much impact on practice overseas, where cognitive-behavioural programmes are still popular.

The implications for SOTP-type programmes in the UK are clear, and the MOJ was wise to dump these immediately. There could clearly be no justification for continuing them. But what (if anything) should replace them? We are told that the Kaizen and Horizon programmes will be “offered” to men judged to pose a medium or high risk. However, the main difference between these programmes and their predecessors is that they will target different things. The so-called “cognitive-behavioural” methods will be the same, and other untested programmes such as HRP will continue. There is no evidence base for these programmes; at most they should be trialled on a small scale, with their effectiveness demonstrated, before being adopted nationally. Cognitive-behavioural methods are generally recognised as having helped in the treatment of depression and anxiety. However, there is no evidence that they are effective in changing patterns of behaviour, as opposed to emotional states. There is no evidence that changing the attitudes and beliefs which people express to programme facilitators has any effect on subsequent behaviour. The reasons for this are complex, and there is no room to go into them here, though I do so in my forthcoming book*.

My considered opinion is that all of the programmes and risk assessment methods used by the MOJ should be reviewed. It is virtually certain now that most of them do not do what is claimed. There are other ways of helping prisoners to give up a criminal lifestyle: education, trade training, restorative justice, support in the community, and good mental health care, to name but a few. It is time to change.

*"Bad Psychology: How forensic psychology left science behind." To be published by Jessica Kingsley Publishers on 1 September 2017.