
Forensic evidence: even without deliberate manipulation there is inherent bias

posted 5 Jan 2018, 10:00 by Robert Forde

A scandal has broken over drug testing in criminal cases. A criminal investigation is underway, and many cases are being reviewed to determine whether evidence was deliberately manipulated to make defendants look guilty. If it was, then the course of justice was being perverted on a large scale. Expert evidence is often essential in helping courts reach the best conclusion, and if people manipulate that evidence the result can be a miscarriage of justice. It does happen, and it is not new: laboratory evidence which might have exonerated Sally Clark, accused of murdering her two sons, was suppressed. As far as I know, no one was ever prosecuted for this, although Mrs Clark served several years in prison and died prematurely after her release, probably partly as a result of her terrible experience.

Deliberate manipulation of evidence is utterly reprehensible, but it is not the only source of error in forensic evidence. A number of scientific studies have shown that bias can creep into the judgements made about forensic evidence and influence the result. In particular, several studies by Prof Itiel Dror of University College London have demonstrated that completely extraneous information can bias an expert report (Dror, 2016; Dror & Rosenthal, 2008). This applies even to expert evidence commonly thought of as scientific and more or less infallible, including fingerprint evidence and evidence from DNA testing. For example, fingerprint experts who are told that the investigating detectives think a suspect is probably innocent, and that they just need to rule him out, are more likely to report that the evidence exonerates him. Conversely, if they are told that the investigators are quite certain of someone’s guilt but need the DNA or fingerprint evidence to confirm it absolutely, they are more likely to report that the evidence points to guilt.

How can this be? Surely supposedly scientific evidence ought to be the same regardless of what someone outside the laboratory thinks about the guilt or innocence of the suspect? Indeed it should, and the evidence itself is the same. The problem arises at the point where the expert has to decide what the evidence means and convey that decision to others. For example, it seems that fingerprint experts can be biased by an irrelevant suggestion to pay more attention to certain features of the fingerprints which they examine, and thus find more features which confirm that suggestion. What Dror has shown is that biases known to affect human judgement in general also affect the judgements of experts. This builds upon the work of Daniel Kahneman (Kahneman, 2011), increasingly well known for his studies of human judgement and how it can go wrong. Kahneman believes that many of the biases and errors he has discovered in human decision-making are essentially hardwired into the human brain, a product of our neural anatomy and physiology. As such, they are not amenable to removal, or even improvement, through training.

In the case of fingerprint evidence, which has been presented in court for more than a century, Dror was astonished to find that there was no accepted standard for establishing the reliability of expert judgement. In other words, experts were having their evidence accepted in court when it was not at all clear that there was any “industry standard” which they could reasonably be assumed to meet.

In a very recent study, Dror and Murrie (2017) turned their attention to the judgements made by forensic psychologists, an area in which I have also worked (Forde, 2017). Since judgements in psychology can be more subjective than those in the “hard” sciences, it would not be surprising if they were even more subject to these same errors, and unfortunately this turns out to be the case. This usually matters less in criminal trials, because psychologists are not invited to comment upon whether defendants are guilty or not. However, they may be invited to comment on whether there are psychological factors (low intelligence, suggestibility, mental illness, etc.) which mitigate someone’s legal responsibility. Similarly, when prisoners apply for parole, psychologists are often asked to perform risk assessments which may influence whether or not parole is granted. Part of that work may be assessing the extent to which prisoners have allegedly benefited from offending behaviour programmes completed during their sentence. The work of prison psychologists came under the spotlight earlier this year, when the Ministry of Justice finally admitted that supposedly therapeutic psychological work with some prisoners had actually made them worse (Hamilton, 2017; Rose, 2017).

Given the frailties of human judgement, and the apparent obstacles to removing them from individuals, there would appear to be only one solution to this problem: remove the individuals themselves from the process. Many of these decisions could be automated with a considerable improvement in accuracy. A computer scanning two fingerprints for evidence of similarity will not be influenced by whether the investigating detective thinks the suspect is guilty or not; its output will be the same either way. Forensic psychological judgements might be more difficult to automate, but objective data about prisoners can be related reliably to their subsequent risk if released, whereas most individual forensic psychological judgements are of little predictive value.
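To illustrate the kind of automation suggested above, an actuarial risk tool combines objective, pre-specified items into a score by fixed rules, so the result cannot be swayed by what an investigator or assessor happens to believe. The items, weights, and cut-offs below are invented purely for this sketch; they are not taken from any real instrument.

```python
# Toy actuarial risk scoring: a fixed, transparent rule applied to
# objective data. All items and weights are hypothetical illustrations.

WEIGHTS = {
    "age_under_25": 2,        # younger offenders score higher in this sketch
    "prior_convictions": 1,   # per prior conviction, capped below
    "any_violent_offence": 3,
}

def risk_score(record):
    """Combine objective items into a single score by fixed rules."""
    score = 0
    if record["age"] < 25:
        score += WEIGHTS["age_under_25"]
    # Cap the contribution of prior convictions at 5.
    score += WEIGHTS["prior_convictions"] * min(record["priors"], 5)
    if record["violent"]:
        score += WEIGHTS["any_violent_offence"]
    return score

def risk_band(score):
    """Map the numeric score onto a reporting band."""
    if score >= 7:
        return "high"
    if score >= 3:
        return "medium"
    return "low"

a = {"age": 22, "priors": 3, "violent": True}   # 2 + 3 + 3 = 8
b = {"age": 40, "priors": 1, "violent": False}  # 0 + 1 + 0 = 1
print(risk_band(risk_score(a)))  # high
print(risk_band(risk_score(b)))  # low
```

The point of such a rule is transparency and consistency: the same record always produces the same score, and no extraneous suggestion about guilt, innocence, or likely risk can alter the outcome.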

In the end, what matters is what works. It is very clear that, as things stand, much forensic evidence is not working very well. The scientific knowledge may be there, but the translation of that scientific knowledge into a workable technology is often haphazard. Convicting the wrong people helps no one, neither those who go through the terrible anguish of a wrongful conviction, nor those who needlessly become victims because the real culprit was left at large.


Dror, I. (2016). A hierarchy of expert performance. Journal of Applied Research in Memory and Cognition, 5, 121–127.

Dror, I., & Rosenthal, R. (2008). Meta-analytically quantifying the reliability and biasability of forensic experts. Journal of Forensic Sciences, 53(4), 900–903.

Dror, I. E., & Murrie, D. C. (2017). A hierarchy of expert performance applied to forensic psychological assessments. Psychology, Public Policy, and Law. Advance online publication. doi:10.1037/law0000140

Forde, R. A. (2017). Bad psychology: How forensic psychology left science behind. London: Jessica Kingsley Publishers.

Hamilton, F. (2017). Expert warnings over failure of rehab for rapists were ignored, The Times. Retrieved from www.thetimes.co.uk/edition/news/expert-warnings-over-failure-of-sexual-offenders-treatment-programme-were-ignored-b78n05ng7

Kahneman, D. (2011). Thinking, fast and slow. London: Allen Lane.

Rose, D. (2017). The scandal of the sex crime "cure" hubs: how minister buried report into £200 million prison programme to treat paedophiles and rapists that INCREASED reoffending rates, Mail Online. Retrieved from www.DailyMail.co.uk/news/article-4635876/scandal-£100million-sex-crime-cure-hubs.html