The rapid advancement of artificial intelligence (AI), and particularly of generative adversarial networks (GANs), has produced disruptive technologies that upend decades of settled understanding about evidentiary challenges in criminal law. Among the most alarming are deepfakes: AI-manipulated audio and video that create the illusion that a person said or did something, or, in some cases, fabricate an entirely fictitious scene. What began as a playful novelty on platforms like TikTok has become something far more insidious: a threat to a fundamental pillar of the criminal justice system, evidence. As such programs become more widely accessible online, it grows ever more probable that false evidence will be presented in a criminal courtroom, whether intentionally or not.
Courts have traditionally adapted to new evidentiary challenges. Whether confronting manipulated photographs in the first half of the twentieth century or DNA testing in a late-twentieth-century rape case, courts have weighed the credibility of novel evidence in deciding for or against its admission. Deepfakes, however, present a twofold problem: they not only introduce fabricated evidence but also cast doubt on authentic photographs and videos. The threat is therefore not limited to wrongful convictions or acquittals; it undermines respect for the judicial process itself by eroding people's ability to trust the truth of what they see and hear.
This paper argues that deepfakes pose a substantial risk to criminal trials. It examines the technology behind deepfakes, their beneficial potential, and their legal implications across ethical, socio-political, and national security dimensions, concluding that, absent reform in both law and technology, deepfakes endanger evidentiary reliability and, with it, justice itself.