The Failures of Forensic Science

Imagine you’re sitting in court and you’ve been falsely accused of a crime.

If found guilty, you’re looking at 20 years in prison.

The prosecution calls a forensic analyst to the stand and she states, “There is a clear match between the accused's fingerprints and the fingerprints at the crime scene.”

But you weren’t even at the crime scene, so you think: what is going on?

You look over to the jury and they’re buying it.

Those 20 years in prison are feeling more certain now. You’re freaking out. You’re on the cusp of having a panic attack in the middle of court.

You think, how can this possibly be?

* * *

Well, this is precisely what has been happening.

Four major reports from three different countries have been released, all pointing towards the lack of certainty in the forensic sciences.

In fact, the ‘Ontario Report’ was triggered by a series of wrongful convictions.                             

The Limitations of the Forensic Sciences

1. The General Lack of Certainty

With the exception of nuclear DNA analysis, however, no forensic method has been rigorously shown to have the capacity to consistently, and with a high degree of certainty, demonstrate a connection between evidence and a specific individual or source. [1]

Big claim, I know. Keep reading.

2. Lack of Quality Studies

The simple reality is that the interpretation of forensic evidence is not always based on scientific studies that establish its validity.

This is a serious problem.

Although research has been done in some disciplines, there is a notable dearth of peer-reviewed, published studies establishing the scientific bases and validity of many forensic methods. [1]

Note the Ontario Report’s recommendation:              

In expressing their opinions, forensic pathologists should adopt an evidence based approach. 

Such an approach requires that the emphasis be placed on empirical evidence, and its scope and limits, as established in large measure by the peer-reviewed medical literature and other reliable sources. 

This approach places less emphasis on authoritative claims based on personal experience, which can seldom be quantified or independently validated. [4]

3. ‘False Precision’ and Vague Terminology

‘False precision’ is the fallacy of treating imprecise information as if it were precise. We see this happening among forensic examiners.

Many terms are used by forensic examiners in reports and in court testimony to describe findings, conclusions, and the degrees of association between evidentiary material (e.g. hairs, fingerprints, fibers) and particular people or objects.

Such terms include but are not limited to ‘match,’ ‘consistent with,’ ‘identical,’ ‘similar in all respects tested,’ and ‘cannot be excluded as the source of.’

The use of such terms can have a profound effect on how the trier of fact in a criminal or civil matter perceives and evaluates evidence.

Yet the forensic science disciplines have not reached agreement or consensus on the precise meaning of any of these terms. [1]

Note the Scottish Report’s recommendations:            

Fingerprint evidence should be recognised as opinion evidence, not fact, and those involved in the criminal justice system need to assess it as such on its merits.

Examiners should discontinue reporting conclusions on identification or exclusion with a claim to 100% certainty or on any other basis suggesting that fingerprint evidence is infallible. [3]

Friction ridge analysis:

Although there is limited information about the accuracy and reliability of friction ridge analyses, claims that these analyses have zero error rates are not scientifically plausible. [1]

Fingerprint analysis:

At present, fingerprint examiners typically testify in the language of absolute certainty.

Given the general lack of validity testing for fingerprinting; the relative dearth of difficult proficiency tests; the lack of a statistically valid model of fingerprinting; and the lack of validated standards for declaring a match, such claims of absolute, certain confidence in identification are unjustified. [7]
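
To see concretely why claims of zero error are implausible, here is a back-of-the-envelope sketch. The numbers below (database size, false-positive rate) are purely hypothetical assumptions chosen for illustration; they are not figures from any of the cited reports.

```python
# Purely hypothetical back-of-the-envelope numbers -- assumptions for
# illustration only, not figures from the cited reports.
database_size = 10_000_000       # assumed number of prints a latent is searched against
false_positive_rate = 1e-6       # assumed chance of a coincidental "match" per comparison

# Expected number of coincidental "matches" produced by the search alone
expected_coincidental_matches = database_size * false_positive_rate
print(f"Expected coincidental matches: {expected_coincidental_matches:.0f}")  # -> 10
```

Even a one-in-a-million false-positive rate, applied across millions of comparisons, is expected to throw up several coincidental ‘matches’ — which is why absolute certainty cannot be claimed without a validated, measured error rate.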

4. Cognitive Biases    

Cognitive bias is a serious issue, not only in the forensic sciences but in every domain of life that involves careful thinking.

If you’re interested, Nobel Prize winner Daniel Kahneman has written a brilliant book on this general issue. Also check out an exceptional list of cognitive biases.

A great example of cognitive bias in forensic analysis:

Threats to cognitive processes have a demonstrated tendency to change how humans interpret evidence. Experiments have demonstrated that analysts may change their opinions when exposed to information that is not relevant to their analysis.

In one series of studies fingerprint examiners (and DNA analysts), who did not know they were part of an experiment, were exposed to case information unrelated to their analyses.

They were asked to re-assess samples they had previously matched (several years earlier) while under the impression that they had not seen the particular latent prints before. As part of the study the analysts were provided with information about the investigation, which implied that the prints did not match.

The results indicate that a large proportion (up to 80 per cent in one study) produced interpretations, on the central issue of whether two prints or profiles matched, that were inconsistent with previous interpretations of the same material by the same examiner. [8]

5. Limitations of ACE-V

ACE-V stands for Analysis, Comparison, Evaluation, and Verification.

ACE-V provides a broadly stated framework for conducting friction ridge analyses. However, this framework is not specific enough to qualify as a validated method for this type of analysis. ACE-V does not guard against bias; is too broad to ensure repeatability and transparency; and does not guarantee that two analysts following it will obtain the same results.

For these reasons, merely following the steps of ACE-V does not imply that one is proceeding in a scientific manner or producing reliable results. [2]

6. Limitations of Fingerprint Analysis         

A fingerprint identification was traditionally considered an ‘individualization,’ meaning that the latent print was considered identified to one finger of a specific individual as opposed to every other potential source in the universe.

However, the recent attention focused on this issue reveals that this definition needlessly claims too much, is not adequately established by fundamental research, and is impossible to validate solely on the basis of experience.

Nor does fingerprint evidence have objective standards or a well-validated statistical model that can provide an objective measure of the strength of the fingerprint evidence in a given instance.

Therefore, examiners should not claim to be able to exclude every other finger in the world as a potential source. [2]

7. The Courtroom is Not a Science Lab  

Unfortunately, the adversarial approach to the submission of evidence in court is not well suited to the task of finding ‘scientific truth.’

The judicial system is encumbered by, among other things, judges, lawyers, and jurors who generally lack the scientific expertise necessary to comprehend and evaluate forensic evidence in an informed manner; defense attorneys who often do not have the resources to challenge prosecutors’ forensic experts; trial judges (sitting alone) who must decide evidentiary issues without the benefit of judicial colleagues and often with little time for extensive research and reflection; and very limited appellate review of trial court rulings admitting disputed forensic evidence. [6]    

8. Lack of Statistical Education Among Forensic Examiners

Note the Latent Print Report's recommendation:

Because statistical information plays a fundamental role in weighting latent print feature evidence, training should include the best available empirical information and should educate examiners about probabilistic reasoning in using that information.         

The latent print examiner community should expand the training of examiners in elementary probability theory to enable examiners to properly utilize the output of probabilistic models. [2]    
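
What does ‘probabilistic reasoning’ look like in practice? The following is a minimal sketch using Bayes’ rule in odds form. Every number in it (the prior, the examiner’s sensitivity, the random-match probability) is an assumption invented for illustration, not a value from the Latent Print Report.

```python
# Minimal sketch of Bayesian reasoning about a reported fingerprint match.
# All numbers are hypothetical assumptions, not values from the cited reports.

prior = 0.01               # P(suspect is the source) before the fingerprint evidence
sensitivity = 0.95         # assumed P(match reported | suspect IS the source)
random_match_prob = 0.001  # assumed P(match reported | suspect is NOT the source)

# Likelihood ratio: how much more probable a reported match is if the
# suspect really is the source than if they are not.
likelihood_ratio = sensitivity / random_match_prob

# Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio
prior_odds = prior / (1 - prior)
posterior_odds = prior_odds * likelihood_ratio
posterior = posterior_odds / (1 + posterior_odds)

print(f"Likelihood ratio: {likelihood_ratio:.0f}")     # -> 950
print(f"P(source | reported match): {posterior:.2f}")  # -> ~0.91, not certainty
```

The point is not the particular numbers but the structure of the reasoning: the strength of the evidence enters as a likelihood ratio that shifts the prior odds, and even a strong reported match yields a probability, not certainty.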

9. Lack of Scientific Education Among Lawyers and Judges

Lawyers and judges often have insufficient training and background in scientific methodology, and they often fail to fully comprehend the approaches employed by different forensic science disciplines and the degree of reliability of forensic science evidence that is offered in trial. [1]

* * *