So-called “deepfake evidence” and computer-authored legal pleadings share several attributes. Each is created by widely available artificial intelligence technologies. Each can be highly persuasive, even when false or fanciful. And each has the potential to undermine the integrity of the civil justice system by injecting difficult-to-detect, compelling-but-unreliable digital materials into legal proceedings.
So far, however, unlike the frequent headaches that arise when generative artificial intelligence is carelessly used to draft legal pleadings, the legal challenges posed by deepfake evidence remain largely on the horizon rather than a daily reality.
As one legal scholar has noted, “there is no foolproof way today to classify text, audio, video, or images as authentic or AI-generated.” Northwestern University Law School Prof. Daniel Linna, in the law review article “Deepfakes in Court: How Judges Can Proactively Manage Alleged AI-Generated Material in National Security Cases,” added: “Judges will increasingly need to establish best practices to deal with a potential deluge of evidentiary issues.”
That effort is underway.
The National Center for State Courts, a research and education resource for state court judges and judicial administrators, is working to ensure that, when deepfake evidence arrives in court, judges will be well-equipped to deal with it. To that end, the NCSC recently published two “bench cards” setting out questions that judges should ask whenever AI-generated evidence is proffered for use in legal proceedings.
Helpful AI-Generated Evidence
The first bench card addresses acknowledged AI-generated evidence. Parties offering this type of evidence acknowledge to the court and opposing counsel that it was created by digital technology. This category mostly covers computer-generated demonstrative evidence and other evidence that enhances the factfinder’s ability to understand the case. Into this category fall:
- computer-generated visualizations
- accident reconstructions
- depictions of medical procedures
- crime scene layouts
Also included in the category of acknowledged AI-generated evidence are data analyses and evidence that has been digitally enhanced in some fashion (e.g., photographs that have been enlarged, or blurred images that have been digitally clarified). Acknowledged AI-generated evidence can also include direct evidence, such as biometric identifiers.
The bench card explains that, while potentially helpful, this type of evidence requires heightened scrutiny by the trial court because it may be unreliable or overly persuasive with a jury. The bench card advises judges to consider instructing the jury on how the evidence was created and reminding jurors that they may weigh the believability of AI-generated evidence just as they would any other evidence.
Deepfake Evidence
The second bench card addresses the problem of unacknowledged AI-generated evidence. Into this category falls “deepfake” evidence – evidence that has been digitally fabricated in some fashion to depict a false version of reality.
Deepfake evidence, whether offered in court or during a deposition, poses unique challenges that some experts believe are not adequately addressed by current evidence rules. Deepfakes are difficult to detect and authenticate. Deepfake audio and video can be highly realistic, often indistinguishable from genuine content. Deepfakes can also be used to cast doubt on the authenticity of legitimate digital evidence, potentially eroding trust in the legal system. Finally, there is the problem of the “deepfake defense,” an emerging tactic in which authentic evidence is challenged as fake.
Unacknowledged AI-generated evidence, the bench card notes, has a significant potential to create a miscarriage of justice.
The bench card advises that, whenever AI-generated evidence is proffered in a legal proceeding, the trial court should consider asking:
- What is the source of this evidence, and how, when, and where was it obtained?
- Can you tell the court who has had custody of this evidence from its creation or capture until now, including sharing or transferring the evidence, and where it has been stored?
- Has this evidence been altered, edited, converted to a different format, or processed in any way since its creation?
- Is there any other data or source that can confirm the authenticity of the evidence?
- Were any forensic tools or methods used to verify the integrity of the evidence?
- Can you provide metadata or other technical information that supports the authenticity of this digital file?
- Can a qualified expert explain the processes used to handle and verify this digital evidence?
In some cases, expert testimony might be necessary for the court to reliably assess AI-generated evidence. The NCSC bench card suggests that, if the parties do not identify an expert, the court should consider appointing an expert of its own.
The problem of deepfake evidence is being studied elsewhere in the legal community as well. One approach under consideration in the federal system would be to create a new Rule 901(c) in the Federal Rules of Evidence that would require the proponent of possibly AI-fabricated evidence to demonstrate to the court that it is “more likely than not” authentic. Policymaking efforts regarding deepfake evidence are also underway in New York, California, and Texas. There is even a suggestion that professional ethics rules should be changed to place a heightened ethical burden on attorneys to refrain from offering deepfake evidence in court.
The message for litigators seems clear. Both the promise and the dangers of computer-generated evidence are on trial judges’ radar screens today. Expect this type of evidence to receive searching, well-informed scrutiny from the trial judge before it is admitted in court.