Seeing is believing, and that’s a problem when it comes to deepfake evidence in court.
We’ve already remarked on the many instances where careless use of generative artificial intelligence is flooding courthouses with legal arguments supported by hallucinated case citations. Litigators and court clerks today are spending time vetting an opposing party’s pleadings for superficially plausible case citations that do not, in fact, exist or do not, in fact, support the legal principle for which they’re being cited. Generative AI has added a layer of uncertainty and delay in motion practice where neither previously existed.
Many legal experts believe that jury trials are next.
Almost certainly, generative artificial intelligence will be used to produce deepfake evidence designed to bolster or impeach witness testimony. Generative AI can create doctored documents, altered images, video depicting events that never occurred, and impersonated voice recordings, all of which can appear or sound genuine even under close inspection.
Evidence created by generative artificial intelligence can also help the factfinder, as is the case with computer-generated accident reconstruction videos and other demonstrative evidence that illuminates, in a compelling manner, the raw facts underlying a case.
The problem is that juries, along with the rest of humanity, will have trouble sorting the digital wheat from the AI chaff, fundamentally undermining their ability to serve reliably as factfinders in civil litigation. Although there are few reported cases addressing deepfake evidentiary issues, many believe that is likely to change soon. Deepfakes pose not only admissibility challenges; they are also more likely than ever to become the subject of legal proceedings themselves, thanks to a growing number of laws prohibiting deepfakes used for harassment or political purposes.
Possible New Rule 901(c) Discussed
One group that’s being proactive on the deepfake evidence question is an advisory committee convened by the Judicial Conference’s Committee on Rules of Practice and Procedure. The Advisory Committee on the Federal Rules of Evidence, composed of judges, litigators, and other legal experts, is studying the need to amend the Federal Rules of Evidence to create an opportunity for challenging possibly deepfaked digital evidence before it reaches the jury.
The starting point for their work is Federal Rule of Evidence 901, which deals with the authentication of evidence in federal courts. Rule 901 obliges the proponent of evidence to make a showing that the proffered item is in fact what the proponent claims it to be. While the current rule arguably already provides a framework for weighing challenges to possibly deepfaked evidence, the committee has been studying the need to insert additional guidance into the rule.
According to the June 10, 2025, agenda book documenting the committee’s work on deepfake evidence, an advisory committee on AI-related evidence issues is considering the addition of a new provision, Rule 901(c), that would create an opportunity to challenge evidence created with AI tools. The rule, which has not been formally proposed and represents merely the advisory committee’s current thinking on the matter, reads as follows:
(c) Potentially Fabricated Evidence Created by Artificial Intelligence.
(1) Showing Required Before an Inquiry into Fabrication. A party challenging the authenticity of an item of evidence on the ground that it has been fabricated, in whole or in part, by generative artificial intelligence must present evidence sufficient to support a finding of such fabrication to warrant an inquiry by the court.
(2) Showing Required by the Proponent. If the opponent meets the requirement of (1), the item of evidence will be admissible only if the proponent demonstrates to the court that it is more likely than not authentic.
(3) Applicability. This rule applies to items offered under either Rule 901 or 902.
The advisory committee explained that two policy objectives underlie the proposed language. The first is a belief that the party opposing the introduction of allegedly AI-generated evidence must do more than merely assert that a particular item of evidence is a deepfake. Thus the rule requires the opponent to present evidence “sufficient to support a finding of such fabrication.” The second is a belief that, once evidence has been presented suggesting the challenged item might be a fake, the proponent of the evidence must make a heightened showing of authenticity.
The suggested additions to Rule 901 reflect an evolution in the committee’s thinking since last year. In a May 2024 report, the committee drafted, for discussion purposes, changes to Rule 901(b) and Rule 901(c) that would have required the proponent of AI-generated evidence to make the familiar showing that probative value exceeds prejudicial effect if the opposing party offered proof suggesting it was “more likely than not” that the evidence had been fabricated or altered.
Other notes accompanying the proposed evidence rule changes indicated that, while the advisory committee generally believes rule changes are unnecessary at this time, it wanted to have a proposal ready in case courts are “suddenly confronted with significant deepfake problems that the existing tools cannot adequately address.”
Dates for the next meetings for the Committee on Rules of Practice and Procedure and its Advisory Committee on Evidence Rules are published in the Federal Register. The public can attend as observers either in-person or remotely.
Groups of evidence law experts in New York, California, and Texas have all studied the problem of deepfake evidence in recent months, and the reports each group published suggested that the courts should take the policymaking lead for now.
Rule 3.3 of the ABA Model Rules of Professional Conduct already requires that lawyers not knowingly offer false evidence in court. One commentator has suggested amending the rule to cover situations in which the attorney “knew or should have known” that evidence being offered in court was digitally manipulated.
Deepfakes and Pretrial Discovery
Regardless of how policymakers ultimately come down on the need for new evidence rules, the specter of deepfake evidence arguably places a burden on litigators to address deepfake concerns during the earliest stages of litigation. Courts may require litigators to disclose during discovery the presence of any relevant AI-created materials, much the same way some courts currently require litigants to disclose whether generative artificial intelligence was used to draft motions and other pleadings.
Some experts have also suggested tailoring interrogatories to uncover possibly relevant AI-generated materials in the litigation, including contracts and other documents created by artificial intelligence as well as evidence deepfaked by generative AI. Deposition preparation might likewise yield fruitful lines of inquiry that unearth deepfaked evidence well in advance of trial, helping litigators make the most compelling case possible for the admission, or exclusion, of such evidence.