News alerts


Worboys: why was he released?

posted 5 Jan 2018, 10:33 by Robert Forde

It has just been announced that John Worboys, who was convicted of rape and numerous indecent assaults on women committed while working as a black cab driver in London, is to be released after less than 10 years in prison. His original sentence was indeterminate (effectively, a life sentence). In such cases the sentencing judge can stipulate a minimum period before which a prisoner cannot apply for parole. In the Worboys case that was eight years, so he has served more than the minimum, as most prisoners do. Indeed, it is not unusual for prisoners to serve many years more.

The announcement has caused a furore because Worboys committed many offences, and was clearly a serial violent sexual offender. The Parole Board is not allowed to release details of individual cases, so we do not know why they took the decision they did. However, a number of untrue statements have been made in the media, some of them by lawyers who should have known better.

First, there is the issue of whether the parole panel should have or could have taken into account offences which did not result in a conviction, or which were not prosecuted. I have seen TV interviews in which lawyers have stated that the panel could not do this. This is wrong. Parole panels can take into account any evidence which they think is relevant. This can even include hearsay evidence, which would not be admissible in a criminal trial. The point of the parole panel is not to determine guilt (that is assumed) but to determine risk.

Second, for what it’s worth, the chair of this particular parole panel was a woman. I say this only to point out that the decision was not taken simply by a bunch of men who didn’t take offences against women seriously. In my quite extensive experience panels always take such offences seriously, but the perception persists that men do not, despite their having daughters, sisters, wives and mothers.

Third, a parole panel can only decide to release a prisoner if it believes that the risk to the public is minimal. This doesn’t necessarily mean that the panel trusts the prisoner a hundred percent; it may mean that it believes any risk which does exist can be contained. It is usual for parole panels to impose conditions. These can be stringent, and may include stipulations about occupations which may be followed, drug or alcohol consumption, geographical areas which must be avoided, and a host of other things. They virtually always include a condition that victims must not be contacted.

Having said all that, it is also true that Worboys was a prolific serial sex offender and in the normal way would be considered to be high risk. Our estimates of risk (as I have pointed out elsewhere on this site, and also in my book “Bad Psychology”) are not very accurate. On a good day with a following wind, about 70% accuracy might be possible. It has also been shown that static (unchangeable) risk factors are better predictors than anything else. The decision is also strange because the Sex Offender Treatment Programme, which used to be offered to offenders like Worboys, has been shown to increase rather than decrease risk. For precisely this reason it was abolished last year. Although new programmes have been put in place, there has not been time for Worboys to have completed any, and in any case they are entirely untested.

In short, we can assume that Worboys was perceived by the panel to pose no more than a manageable risk. What we do not know is why. In my experience, there can be many reasons. For example, a sex offender may consent to be monitored by lie detector, or to have medication implanted which removes his sex drive. A high sex drive is one risk factor for sex offending, amongst others. Another major risk factor is age, and Worboys is now in his 60s. It is known that sex drive diminishes throughout the lifespan, and elderly sex offenders in general pose a low risk, partly for that reason. Again, we do not know why the panel decided as they did, but they will certainly have had evidence which convinced them.

Would I have released Worboys? Given that his static factors, other than age, would suggest a high risk, and in the absence of any overwhelming mitigating factor, I would not. Whether there was in fact any overwhelming mitigating factor we do not know.

Finally, what of the future? One significant possibility is that other victims’ statements could form the basis of further charges. If he were convicted of further offences of a similar kind, he could be returned to prison for some time. Indeed, he would almost certainly be returned to custody as soon as any charges were filed, to guard against absconding. He will be under supervision for a minimum of 10 years, and on life licence until he dies. During that period he can be recalled to prison on the unsupported word of a probation officer. He does not have to commit a further offence for that to happen: it is common for offenders on life licence to be recalled because the probation officer thinks that risk may have increased for some reason. In such cases a further parole hearing is supposed to be convened within six months, which can decide that the recall was unnecessary, or can confirm the probation officer’s decision. In the latter event, several more years in prison would be likely. Thankfully, it is very rare indeed for ex-prisoners on life licence to commit serious further offences.

Forensic evidence: even without deliberate manipulation there is inherent bias

posted 5 Jan 2018, 10:00 by Robert Forde

A scandal has broken over the issue of drug testing in criminal cases. A criminal investigation is under way, and many cases are being reviewed, to determine whether evidence was deliberately manipulated to make defendants look guilty. If it was, then clearly the course of justice was being perverted on a large scale. Expert evidence is often essential to assist courts in coming to the best conclusion, and if people manipulate that evidence the result can be a miscarriage of justice. It does happen, and it is not new: laboratory evidence which might have exonerated Sally Clark, accused of murdering her two sons, was suppressed. As far as I know, no one was ever prosecuted for this, although Mrs Clark served several years in prison and died prematurely after her release, probably partly as a result of her terrible experience.

Manipulation of evidence is clearly totally reprehensible, but it is not the only source of error in forensic evidence. A number of scientific studies have shown that bias can creep into the judgements made about forensic evidence and influence the result. In particular, several studies by Prof Itiel Dror of University College London have demonstrated that completely extraneous information can bias an expert report (Dror, 2016; Dror & Rosenthal, 2008). This applies even to expert evidence which is commonly thought of as being scientific and more or less infallible, including fingerprint evidence and evidence from DNA testing. For example, fingerprint experts who are told that the investigating detectives think a suspect is probably innocent but they just need to rule him out are more likely to report that the evidence exonerates him. Conversely, if they are told that the investigators are quite certain of someone’s guilt but need the DNA or fingerprint evidence to confirm it absolutely, they are more likely to report that the evidence points to guilt.

How can this be? Surely supposedly scientific evidence ought to be the same regardless of what someone outside the laboratory thinks about the guilt or innocence of the suspect? Indeed it should, and the evidence itself is. The problem arises at the point where the expert has to decide what the evidence means and convey that decision to others. For example, it seems that fingerprint experts can be biased by an irrelevant suggestion to pay more attention to certain features of the fingerprints which they examine, and thus find more features which confirm that suggestion. What Dror has shown is that biases which are known to affect human judgement in general also affect the judgements of experts. This builds upon the work of Daniel Kahneman (Kahneman, 2011), increasingly well-known for his studies of human judgement and how it can go wrong. Kahneman believes that many of the biases and errors which he has discovered in human decision-making processes are essentially hardwired into the human brain, a product of our neural anatomy and physiology. As such, they are not amenable to removal or even improvement through training.

In the case of fingerprint evidence, which has been presented in court for more than a century, Dror was astonished to find that there was no accepted standard for establishing the reliability of expert judgement. In other words, experts were having their evidence accepted in court when it was not at all clear that there was any “industry standard” which they could reasonably be assumed to meet.

In a very recent study, Dror and Murrie (2017) turned their attention to the judgements made by forensic psychologists, an area in which I have also worked (Forde, 2017). Since judgements in psychology can be more subjective than those in the “hard” sciences it would not be surprising if they were even more subject to these same errors, and this is unfortunately true. This usually matters less in criminal trials, because psychologists are not invited to comment upon whether defendants are guilty or not. However, they may be invited to comment on whether there are psychological factors (low intelligence, suggestibility, mental illness, etc.) which mitigate someone’s legal responsibility. Again, when prisoners are applying for parole, psychologists are often asked to perform risk assessments which may influence whether or not parole is granted. Part of that work may be assessing the extent to which prisoners have allegedly benefited from offending behaviour programmes which they have completed during their sentence. The work of prison psychologists came under the spotlight earlier this year, when it was finally admitted by the Ministry of Justice that supposedly therapeutic psychological work with some prisoners had actually made them worse (Hamilton, 2017; Rose, 2017).

Given the frailties of human judgement, and the apparent obstacles to removing them from individuals, there would appear to be only one solution to this problem: remove the individuals themselves from the process. Many of these decisions could be automated with a considerable improvement in accuracy. A computer scanning two fingerprints for evidence of similarity will not be influenced by whether the investigating detective thinks the suspect is guilty or not. Its evidence will be the same either way. Forensic psychological judgements might be more difficult to automate, but objective data relating to prisoners can reliably be related to their subsequent risk if released. Most forensic psychological judgements are of little predictive value.
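As an illustration of the kind of automation I mean, an actuarial risk tool is nothing more than a fixed scoring rule applied to objective data. The sketch below is hypothetical: the items, weights and reconviction rates are invented for illustration and are not those of any real instrument. It shows the principle, though: once the static facts are coded, the prediction involves no subjective judgement that extraneous information could bias.

```python
# Sketch of an actuarial (purely statistical) risk tool. Each static factor
# carries a fixed weight, and the total score maps to an observed reconviction
# rate from a normative sample. All items, weights and rates here are
# INVENTED for illustration -- they are not those of any real instrument.

STATIC_ITEMS = {
    "prior_sexual_convictions": 2,   # points scored if the factor is present
    "any_stranger_victim": 1,
    "any_male_victim": 1,
    "never_cohabited": 1,
    "aged_under_25_at_release": 1,
}

# Hypothetical mapping from total score to 10-year reconviction rate.
RATE_BY_SCORE = {0: 0.04, 1: 0.07, 2: 0.12, 3: 0.19, 4: 0.28, 5: 0.39, 6: 0.50}

def actuarial_risk(case: dict) -> tuple:
    """Return (score, estimated reconviction rate) for a coded case.

    The assessor's only task is accurate coding of the static facts;
    the prediction itself is mechanical and identical for every assessor.
    """
    score = sum(pts for item, pts in STATIC_ITEMS.items() if case.get(item))
    return score, RATE_BY_SCORE[min(score, max(RATE_BY_SCORE))]

score, rate = actuarial_risk({"prior_sexual_convictions": True,
                              "never_cohabited": True})
print(score, rate)  # 3 0.19
```

The point of the design is that two assessors given the same file must produce the same number, whatever the investigating detectives may have told them.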

In the end, what matters is what works. It is very clear that, as things stand, much forensic evidence is not working very well. The scientific knowledge may be there, but the translation of that scientific knowledge into a workable technology is often haphazard. Convicting the wrong people helps no one, neither those who go through the terrible anguish of a wrongful conviction, nor those who needlessly become victims because the real culprit was left at large.

References

Dror, I. (2016). A hierarchy of expert performance. Journal of Applied Research in Memory and Cognition, 5, 121-127.

Dror, I., & Rosenthal, R. (2008). Meta-analytically quantifying the reliability and biasability of forensic experts. Journal of Forensic Sciences, 53(4), 900-903.

Dror, I. E., & Murrie, D. C. (2017). A hierarchy of expert performance applied to forensic psychological assessments. Psychology, Public Policy, and Law. Advance online publication. doi: 10.1037/law0000140

Forde, R. A. (2017). Bad Psychology: how forensic psychology left science behind. London: Jessica Kingsley Publishers.

Hamilton, F. (2017). Expert warnings over failure of rehab for rapists were ignored, The Times. Retrieved from www.thetimes.co.uk/edition/news/expert-warnings-over-failure-of-sexual-offenders-treatment-programme-were-ignored-b78n05ng7

Kahneman, D. (2011). Thinking, fast and slow. London: Allen Lane.

Rose, D. (2017). The scandal of the sex crime "cure" hubs: how minister buried report into £200 million prison programme to treat paedophiles and rapists that INCREASED reoffending rates, Mail Online. Retrieved from www.DailyMail.co.uk/news/article-4635876/scandal-£100million-sex-crime-cure-hubs.html

Bad Psychology

posted 23 Aug 2017, 08:26 by Robert Forde   [ updated 4 Sep 2017, 08:29 ]

An important reason for a slowing down of my contributions to this blog in recent times is that I have been spending a great deal of time writing a book. This has been based firmly on published peer-reviewed evidence, some of which appears elsewhere on this blog. Much of it does not, but has been collected over the years that I have spent studying evaluations of cognitive-behavioural programmes and risk assessment practices.

The book is entitled “Bad Psychology: how forensic psychology left science behind”, a title which will make clear its relationship to much of the material on this blog. As well as examining the scientific evidence about contemporary forensic psychological practice, it also discusses the unfortunate response of the criminal justice system to evidence-based criticism of its policies.

Other material on this website reports on the Ministry of Justice’s own evaluation of its sex offender “treatment” programmes, published earlier in 2017. This evaluation makes clear that the Ministry’s programmes made sex offenders more risky, not less. A postscript was added to the book, although it was already in production, to take account of this new evidence. In large measure, it vindicates the approach taken in the book, my practice, and this blog, over more than a decade. I was predicting this outcome as long ago as 2004.

Anyone interested in obtaining a copy of the book may do so from (in the UK):

http://www.jkp.com/uk/bad-psychology-2.html

In North America:

http://www.jkp.com/usa/bad-psychology-2.html

Something of the stance taken in the book may be gathered from the following quote:

“The more I studied what was going on, the more it seemed that what was preventing many of my colleagues from seeing the evidence clearly was not some conscious (and therefore corrupt) attachment to money or career goals. It was not an attack of some kind of irrationality, and it was certainly not lack of intelligence. It was the fact that psychologists, like those that they study, are human beings. As a profession, we often forget this, and implicitly assume that our judgments about other people are not subject to the same flaws as their judgements are. In fact, there is abundant and increasing evidence that psychologists’ judgments are subject to exactly the same weaknesses as everyone else’s.”

The book was published by Jessica Kingsley on 1 September 2017.

The SOTP fiasco: a lawyer's view

posted 5 Aug 2017, 04:57 by Robert Forde   [ updated 7 Aug 2017, 06:13 ]

This article was originally published in "Inside Time", the newspaper for prisoners. It is reproduced here with permission.

We need to talk about sex

Large numbers of prisoners in limbo waiting for ‘SOTP mess’ to be cleared up

Anyone who has served a prison sentence for a sexual offence over the last 25 years will have heard of the SOTP. Thousands of prisoners have completed at least one version of the programme. Many will have served much longer periods in custody because they have been told that they “need to do the SOTP”. Probation officers, Parole Board members and prison officers have consistently told prisoners that they must do the SOTP to reduce their risk of reoffending. It also appears that the SOTP was one of the programmes which the Ministry of Justice (during the Grayling years) tried to sell to foreign jurisdictions.

It is now clear that the claims which have been made for the SOTP have been wildly misleading. The headline from the research is shocking. The group who took part in the treatment programme re-offended more than the group convicted of similar offences who had not taken part. How long the Ministry of Justice has known this has not been disclosed. It has been suggested that efforts were made to hide the research findings. If that is true, there really should be an investigation to provide accountability for decisions which were made.

Sexual offending arouses very strong feelings and can cause dreadful harm. What we do as a society to reduce that harm should be the subject of informed debate. A civilised society needs civilised, intelligent solutions to difficult problems. It is possible to condemn behaviour and still try to understand it. It is also possible to treat people with respect and dignity even though they may have done appalling things. We need to know as much as possible about why people behave in this way and we need to understand what works to reduce the likelihood that they will do so again.

One of the key problems with the approach which has been taken to sexual, as well as other types of offending is that it often fails to take proper account of the individual. What someone ‘needs’ cannot really be answered without knowing who that person is, why they behave as they do and what might change things for them. Some people do not benefit from group work. If we are serious about rehabilitation and stopping reoffending, we might have to accept the cost of providing skilled one-to-one work when it is needed. We might have to focus more on what we can do to ‘manage’ rather than eradicate risk. Good interventions are collaborative and encourage people to be properly invested in and motivated to manage their own risk.

There are still big holes in the plans which HMPPS (formerly NOMS) has for people convicted of sexual offences. A large number of prisoners are in limbo, waiting for the SOTP mess to be cleared up. The new programmes, Kaizen and Horizon, are untested, and there are not yet sufficient places to meet the likely demand. There may be people who would benefit from these programmes. It will be tempting for some to resist the new programmes on the basis that there is little evidence that they will work; they will probably need to be offered an alternative. It is to be hoped that the opportunity is seized to think seriously about, and discuss, what helps people to desist from serious offending. Sentence planning must be designed to support that.

Those who have lost years waiting to do the SOTP are entitled to wonder whether the Ministry of Justice will offer them any form of redress. They would probably need to show a very clear causal link between their continued detention and the wait for the SOTP, and any potential claim is likely to depend very much on the facts of the individual case.

A leaflet hastily prepared by the Ministry of Justice seeks to reassure people who have done the Core or Extended SOTP that they did not waste their time. It asserts that the programmes teach skills or tools which are important to living a crime-free life. It does not say that there are other things which might very well contribute to the same goals.


Andrew Sperling is a Solicitor-Advocate and Managing Director of SL5 Legal. SL5 Legal is based at 39 Warren Street, London W1T 6AF.

The SOTP fiasco

posted 31 Jul 2017, 07:01 by Robert Forde   [ updated 5 Aug 2017, 04:49 ]

This article was originally published in "Inside Time", the newspaper for prisoners. It is reproduced here with permission.

Sex offender programmes: time to change

A few weeks ago a rumour surfaced that the UK Ministry of Justice had, very abruptly, stopped the Core and Extended SOTP programmes. Expecting to find this was not true, I contacted one of the few MOJ colleagues who will still speak to me (I have been a constant critic of these programmes for 13 years). The colleague told me that they were indeed being stopped, apparently as a result of research carried out by the MOJ itself. However, this was not publicised at the time, and it later emerged that Liz Truss, then the Justice Secretary, had ordered it kept secret. With the benefit of hindsight, it now looks as if she was simply trying to delay the scandal until after the general election, as it must have been obvious that the secret would come out eventually. Quietly, on 30 June, the MOJ’s “secret” scientific report appeared on its website. It was a bombshell: it seemed to show not only that sex offender treatment was ineffective, but that it actually increased risk. What had happened? To answer that we need to go back to the early 1990s.

At that time crime rates were considerably higher than they are now. John Major’s government was concerned about crime levels and wondered whether the Home Office (which then ran the criminal justice system) could suggest ways of reducing them. At some point a group of persuasive psychologists convinced Home Office ministers that the new-fangled cognitive-behavioural programmes then in use in North America might be the answer. More importantly, they persuaded ministers to provide the money. A national scheme for reducing offending by the use of programmes would require the hiring and training of a great many staff. Ministers agreed that the money would be provided, and that it would be ring-fenced to stop it being siphoned off for other purposes within the criminal justice system.

On a common sense basis cognitive-behavioural programmes had a lot of appeal. Common sense suggests that we behave the way we do because of the attitudes and beliefs that we hold. Therefore, changing these attitudes and beliefs should result in changes in behaviour. But psychologists of all people should be wary of such common sense interpretations, because these are not based on an understanding of the complexities of human behaviour and the brain functioning which underlies them. One of the things which has changed since the early 1990s is that we now know a great deal more about brain functioning. To be fair, it was realised at the time that these programmes ought to be evaluated to see how effective they were, and even that they should not be evaluated by the same people who carried out the “treatment”. Unfortunately, these good resolutions slipped. Evaluations of the programmes were few, generally not very well designed, and often carried out by people who had a vested interest in showing how good the programmes were. By this I don’t mean a crude financial interest, but people built careers and professional reputations on creating and running these programmes and it would be silly not to recognise that this colours their view.

As early as 2003 an important paper was published by Marnie Rice and Grant Harris, two Canadian psychologists of international repute. They showed that most evaluations of sex offender treatment were of poor quality, and contained a bias towards showing a treatment effect even if none existed. This research was generally ignored. When I began to quote it in parole reports from about 2004 I was greeted with disbelief. Indeed, I was the subject of a misconduct complaint for allegedly “misrepresenting the research on the effectiveness of sex offender programmes”. This was easily defeated by showing that my opinion was based securely on scientific evidence. This did not necessarily mean I was right, but it meant my position was defensible and therefore not misconduct. I have had to fight off two other such complaints since, and an attempt by MOJ lawyers to have me removed from a committee. Several colleagues have been bullied and discriminated against for taking a similar stand. In the meantime, things moved on. In 2005 the California Sex Offender Treatment Evaluation Project (SOTEP) published its report. This was a large and well-designed trial of a programme similar to the SOTP, and was hailed in advance as the study which was going to prove the effectiveness of this kind of programme beyond doubt. It showed no benefit of treatment, and colleagues committed to treatment programmes found all kinds of reasons to suggest that maybe the SOTEP study wasn’t very good after all.

Also in 2005 two academic researchers, Martin Schmucker and Friedrich Lösel, published a meta-analysis of sex offender treatment programmes. Meta-analysis is a powerful technique which combines the results of a number of research projects, effectively making them into one big project. This matters because the more people your study includes the more reliable it is likely to be. Schmucker and Lösel concluded that the results were “promising” and provided good support for sex offender programmes. Meta-analysis is an excellent way of showing what the research as a whole says in a particular field, but it is only as good as the studies that go into it. Unfortunately many of those that went into Schmucker and Lösel’s meta-analysis were not very good. They would not have passed the standards set by Rice and Harris in 2003. As computer programmers say, “garbage in, garbage out”. Interestingly, Schmucker and Lösel repeated this meta-analysis in 2015, using only good quality studies this time, and found no treatment effect for prison-based programmes.
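For readers curious about the mechanics, the standard fixed-effect meta-analysis is just a weighted average: each study's effect size is weighted by the inverse of its variance, so large, precise studies dominate the pooled result. The effect sizes in this sketch are invented for illustration; they are not Schmucker and Lösel's data.

```python
import math

# Fixed-effect, inverse-variance meta-analysis. Each study's effect size is
# weighted by 1/variance, so precise (usually large) studies dominate.
# The effect sizes below (log odds ratios with standard errors) are INVENTED
# purely to illustrate the calculation.

studies = [
    # (log odds ratio, standard error) -- negative favours treatment
    (-0.40, 0.30),   # small study, apparent benefit
    (-0.25, 0.20),   # medium study, apparent benefit
    ( 0.05, 0.08),   # large study, essentially no effect
]

weights = [1 / se**2 for _, se in studies]
pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"pooled log OR = {pooled:.3f} +/- {1.96 * pooled_se:.3f}")
```

Note how the single large null study swamps the two small positive ones. Weighting by precision is exactly what makes meta-analysis powerful, but it does nothing to correct bias within poorly designed studies: garbage in, garbage out.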

And so to the study which has caused all the trouble, the “Impact Evaluation of the Prison-based Core Sex Offender Treatment Programme”. This study involved 2,562 men who had undergone the prison SOTP, and compared them with 13,219 who had not. To make sure that the two groups were as near identical as possible, they were matched on 87 different characteristics that might be related to risk. This is impressive: studies using matching are not uncommon, but they usually match groups on only a few characteristics. Not only that, the researchers checked mathematically that the matching was very accurate. I have read thousands of papers, and rarely seen such close attention to this crucial part of the process. The men were followed up after release for an average of 8.2 years (some for over 13 years). Only 8% of the untreated men were convicted of a further sexual offence during the follow-up period, compared with 10% of the treated men. In other words, the treated men were more likely to commit further sexual offences, not less. The MOJ was understandably horrified by this result, and called in Prof Friedrich Lösel (mentioned above) to provide an independent opinion on the quality of the study. It seems that Prof Lösel advised that the results should be accepted, and was promptly told to say nothing in public.
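The headline arithmetic is easy to check. Taking the reported group sizes and the rounded percentages (so the reconviction counts below are approximate reconstructions, not figures from the report), the relative risk and an approximate confidence interval come out as follows; the standard error formula is the usual one for a log risk ratio, not anything taken from the study itself.

```python
import math

# Back-of-envelope check on the reported figures: 2,562 treated men (10%
# sexually reconvicted) versus 13,219 matched untreated men (8%). Counts
# are derived from the rounded percentages, so treat them as approximate.

n_t, n_c = 2562, 13219
a = round(0.10 * n_t)        # treated men reconvicted   (~256)
c = round(0.08 * n_c)        # untreated men reconvicted (~1058)

rr = (a / n_t) / (c / n_c)   # relative risk, treated vs untreated

# Approximate 95% CI on the log scale (standard formula for a risk ratio).
se = math.sqrt(1/a - 1/n_t + 1/c - 1/n_c)
lo = math.exp(math.log(rr) - 1.96 * se)
hi = math.exp(math.log(rr) + 1.96 * se)

print(f"RR = {rr:.2f}, 95% CI ({lo:.2f}, {hi:.2f})")
```

On these approximate counts the treated group's reconviction risk is about 25% higher, and the interval excludes 1, suggesting the difference is unlikely to be chance; the study's own analysis should of course be preferred to this rough check.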

What are the implications? There may well be some legal implications, but I am not qualified to comment on those, though I recognise that some will feel aggrieved if they have had parole refused, or have been made to serve extra time in order to undertake the SOTP to “reduce their risk”. They may feel further aggrieved to know that this fiasco was avoidable, if only the Parole Board and MOJ colleagues had listened to what I and other independent psychologists were telling them over a decade ago. Our opinions were based on scientific evidence which was already available then, and which we quoted to them, but they did not want to know, preferring to shoot the messenger rather than heed the evidence. The situation seems no better in other countries, and I have recently had numerous email exchanges with North American colleagues following the recent MOJ report. Overwhelmingly, even before reading it, they tend to find ways of minimising it and denying its importance. I am not optimistic that the MOJ research will have much impact on practice overseas, where cognitive-behavioural programmes are still popular.

The implications for SOTP-type programmes in the UK are clear, and the MOJ was wise to dump them immediately. There could clearly be no justification for continuing them. But what (if anything) should replace them? We are told that the Kaizen and Horizon programmes will be “offered” to men judged to pose a medium or high risk. However, the main difference between these programmes and their predecessors is that they target different things. The so-called “cognitive-behavioural” methods will be the same, and other untested programmes such as the HRP will continue. There is no evidence base for these programmes; at most they should be trialled on a small scale, with effectiveness demonstrated, before being adopted nationally. Cognitive-behavioural methods are generally recognised as having helped in the treatment of depression and anxiety. However, there is no evidence that they are effective in changing patterns of behaviour, as opposed to emotional states, and no evidence that changing the attitudes and beliefs which people express to programme facilitators has any effect on subsequent behaviour. The reasons for this are complex, and there is no room to go into them here, though I do so in my forthcoming book*.

My considered opinion is that all of the programmes and risk assessment methods used by the MOJ should be reviewed. It is virtually certain now that most of them do not do what is claimed. There are other ways of helping prisoners to give up a criminal lifestyle: education, trade training, restorative justice, support in the community, and good mental health care, to name but a few. It is time to change.


*"Bad Psychology: How forensic psychology left science behind." To be published by Jessica Kingsley Publishers on 1 September 2017.


Marking one's own homework

posted 13 Mar 2017, 05:54 by Robert Forde   [ updated 31 Jul 2017, 06:58 ]

It is a good number of years since it was found that treatment programmes appear to do better when evaluated by those who are actually running them. This is not surprising, and it need not mean any conscious attempt to deceive. People are perfectly capable of quite unconsciously slanting decisions in favour of their pet scheme. For example, if there is some doubt about whether a case should be included in a treatment group or not, it is all too easy for a researcher to decide that someone who seems to have done well should be retained, while someone who seems to have done badly is removed from the research sample on technical grounds. As the work of Ariely (2012) makes clear, there need not be anything consciously deceitful about this.

The subject of "authorship bias" was also raised recently by Seena Fazel (2017), a well-known researcher in the field of criminal risk assessment. In rebutting some criticisms of the use of meta-analyses for evaluating risk assessment instruments, he pointed out that an earlier paper which he co-wrote (Singh, Grann & Fazel, 2013) had found evidence of authorship effects in this field too. In fact, risk assessment instruments which were evaluated by their own authors apparently achieved accuracies approximately double those found by other researchers. Interestingly, that paper also examined whether such conflicts of interest had been declared to the journals concerned. They had not, despite such declarations being part of those journals' policies. This begins to sound rather less unconscious.

Transparency is a fundamental requirement of the scientific process. It is not enough simply to publish one's results, if the results themselves are questionable. The situation may be complicated by the fact that in some cases there are financial/commercial incentives to achieve "good" results. This is particularly true in the case of risk and other assessment instruments, which may be big business. I will be dealing with this subject, amongst many others, in a forthcoming book (Forde[a], in press, due 1 Sept. 2017), and have already dealt with it briefly in another (Forde[b], in press).

References

Ariely, D. (2012). The (honest) truth about dishonesty: How we lie to everyone — especially ourselves. New York: HarperCollins.

Fazel, S. (2017). Response to “The Use of Meta-Analysis to Compare and Select Offender Risk Instruments”. International Journal of Forensic Mental Health. doi: 10.1080/14999013.2016.1261965.

Forde, R. A. [a] (in press). Bad psychology: How forensic psychology left science behind. London: Jessica Kingsley.

Forde, R. A. [b] (in press). When profit comes in the door, does science go out the window? In B. Cripps (Ed.), Psychometric testing: Critical perspectives. London: Wiley.

Singh, J. P., Grann, M., & Fazel, S. (2013). Authorship Bias in Violence Risk Assessment? A Systematic Review and Meta-Analysis. PLOS ONE, 8(9), e72484. doi: 10.1371/journal.pone.0072484.

Sex offender treatment "Emperor's new clothes"

posted 24 Feb 2017, 10:25 by Robert Forde   [ updated 2 Mar 2017, 04:21 ]

An article in the British Psychological Society's magazine The Psychologist for March 2016 questioned yet again the evidence for the effectiveness of UK sex offender treatment programmes. In practice, these are very similar to programmes conducted throughout the Western world.

The article, written by two leading UK forensic psychologists, Prof Graham Towl and Prof David Crighton of Durham University, suggests that there is no reliable evidence for effectiveness, and some evidence that the programmes could even make offenders worse. Towl and Crighton were previously senior advisers to the Ministry of Justice, so their opinion cannot be lightly dismissed.

Under the title "The Emperor's new clothes?" the authors review the evidence for the effectiveness of sex offender treatment programmes, including the recent review by Schmucker and Lösel, concluding that it is time to try different approaches to these problematic offenders.

Much of the evidence is reviewed elsewhere on this site, and similar conclusions have been drawn. This is clearly not the last that will be heard on this topic, although the offending behaviour industry may be relied upon to mount a vigorous defence of its activities.

Parole risk assessment flawed

posted 11 Nov 2014, 10:21 by Robert Forde   [ updated 24 Feb 2017, 10:14 ]

Blowing my own trumpet for a change!

My doctoral thesis on the use of risk assessment in lifer parole hearings has been available online since 1 December 2014. The degree was awarded by the University of Birmingham. The URL is:


Essentially, a review of the academic literature covering the last 50 years suggests that risk assessment for parole has in general always been haphazard and subjective, and research from many different jurisdictions confirms this. My own original research on UK lifers seeking parole finds the same, as have previous UK studies. The net result is likely to be that many people are kept in prison who could safely be released if more systematic and objective risk assessment were carried out.

OK, NOW it's official!

posted 5 Nov 2014, 12:06 by Robert Forde   [ updated 5 Nov 2014, 12:20 ]

Some time ago I posted on this site details of a study, led by NOMS researchers, which showed that post-treatment assessments after sex offender "treatment" programmes bore no relation to subsequent risk. In June of this year I posted a correction, following an approach by NOMS, clarifying that it was only the post-treatment psychometric assessment which had been evaluated in this research. I also suggested it was most unlikely that the full structured assessment (the SARN) would fare any better, but strictly speaking the question was still open. Now it is not. A study published online last month has shown that the SARN treatment needs analysis bears no relationship to reconviction risk after release, whether measured at two or four years post-release.

It is difficult to see why anyone ever thought it would. The fact is that this kind of interview-based assessment is prone to just about every source of distortion imaginable. It never was likely that it would provide a reliable indication of risk. Since the only justification for treatment is that it reduces risk, this means there is no longer any justification for continuing the SOTP and associated "treatment" programmes. The fact that the post-treatment assessments do not relate to risk means that alleged measures of gains during treatment simply do not mean anything. They cannot tell us anything about risk, or treatment need.

Surely even the Ministry of Justice, notorious for refusing to accept the evidence about its policies, must concede (before the courts force it to do so) that the SOTP and its associated assessment methods are now officially useless. There are better ways of spending taxpayers' money.


Let us hope Mr Grayling reads it.

Correction

posted 13 Jun 2014, 06:46 by Robert Forde

Apologies for the delay - I have been offline for some months, due to complete incompetence on the part of BT, my service provider. Now I am back online I must start with a further apology. In December 2012 I published an item headed "SARN and SOTP Useless: Official". This described NOMS-led research showing post-SOTP assessments not to be predictive of further offending. I have since been contacted by Sarah Ashcroft, Head of Interventions at NOMS. She has pointed out that the research to which I referred only concerned psychometric testing, and not all of the measures used by NOMS psychologists to assess change in SOTP participants. Therefore, she suggests, the fact that these tests failed to be useful doesn't invalidate the structured interview assessment which is a large part of the SARN.

I accept the point, but only as far as it goes ...

The fact is, there is plenty of research demonstrating that post-treatment assessments (of whatever type) do not relate to subsequent reconviction risk, while pre-treatment assessments do. Important parts of this research have been conducted by NOMS researchers themselves. This is presumably why NOMS ceased to call assessments "Progress in Treatment Reports" (though, to be honest, they kept on doing them in all but name). So far, every attempt to validate post-treatment assessments has failed. Ms Ashcroft may be right, and future research may show that the SARN structured interview is far better, but I wouldn't hold my breath. The burden of the evidence so far is clear, and shows that assessments of alleged treatment need following SOTP-type programmes are not related to risk.

Incidentally, if the tests (as she admits) have been shown to be ineffective for this purpose, have NOMS assessors now stopped using them in post-treatment assessments? If not, aren't they in breach of professional guidelines? It certainly would be unprofessional to use tests which one knows are useless for the purpose.

It has been argued that these assessments are not risk assessments, but assessments of treatment need. I don't accept this at all. If alleged "treatment need" is not related to risk, then it isn't a treatment need. The only basis for treatment is that it reduces risk. If it doesn't, then it is not a need. It becomes a matter of the prisoner's own choice. This is even more true of those on indeterminate sentences, because they have to show risk reduction to get out at all. For them, arguing that someone should stay inside to complete "treatment" to reduce his risk depends upon being able to show that it will, or at the very least is more likely than not to do so. Since the only lawful basis for continued detention is continued risk, isn't continued detention for risk-reduction treatment unlawful if it is shown not to do this?

I am not a lawyer, but ...
