Artificial intelligence is taking the world by storm, and the legal community is no exception. Tools that can reduce the time and cost of litigation have long been in high demand by both clients and counsel. But the tool must be fit for its purpose, and tools that generate evidence or other outputs that will be submitted to the court must pass judicial scrutiny. As video cameras became smaller and less expensive, there were fights over the admissibility of deposition videos that attorneys had recorded themselves to avoid videographer fees. Practitioners who focus on e-discovery can tell stories of hotly litigated technology-assisted review protocols. One of the newer fights concerns the admissibility of machine-generated “expert” opinions.
The U.S. Courts Advisory Committee on the Federal Rules of Evidence (the “Committee”) proposes to address the issue by adding a new rule, Federal Rule of Evidence 707. The prospect of adding a new rule to regulate the admissibility of machine-generated evidence was first raised at the Committee’s November 2024 meeting. Since then, the Committee has proposed the following language for Rule 707:
Where machine-generated evidence is offered without an expert witness and would be subject to Rule 702 if testified to by a witness, the court may admit the evidence only if it satisfies the requirements of Rule 702(a)-(d). This rule does not apply to the output of basic scientific instruments.
On June 10, 2025, the Judicial Conference Committee on Rules of Practice and Procedure released proposed new Rule 707 for public comment. The public comment period is now open and runs until February 16, 2026.
How would Rule 707 function if adopted? While the proposed Rule and Committee Note sketch out a possible answer, they also hint at struggles to come.
First, Rule 707 would apply only to evidence “offered without an expert witness.” Although the proposed Committee Note explains that Rule 707 “is not intended to encourage parties to opt for machine-generated evidence over live expert witnesses,” it also contemplates Rule 707 being invoked when “machine or software output is presented without the accompaniment of a human expert (for example through a witness who applied the program but knows little or nothing about its reliability).” Comments in the Committee’s June 10, 2025 Agenda Book illustrate the range of outputs Rule 707 could govern, including “machine output analyzing stock trading patterns to establish causation; analysis of digital data to determine whether two works are substantially similar in copyright litigation; and machine learning that assesses the complexity of software programs to determine the likelihood that code was misappropriated.” But without an expert to vouch for such machine-generated evidence, how will proponents establish that these outputs satisfy Rule 702(a)-(d)? Courts have already rejected AI-generated opinions where counsel could not explain the tool’s basis or methodology. See, e.g., J.G. v. New York City Dep’t of Educ., 719 F. Supp. 3d 293, 307–08 (S.D.N.Y. 2024) (rejecting an AI-generated report on the reasonableness of a fee application due to concerns over hallucinations and the obscurity of the report’s inputs). Given the rise in orders sanctioning attorneys for submitting AI-drafted briefs that cite nonexistent case law, we suspect courts will be skeptical of the basis and reliability of evidence offered under Rule 707 for the foreseeable future.
Second, proposed Rule 707 would not apply to “the output of basic scientific instruments.” This language notably omits a clause that appeared in an earlier draft of Rule 707, which also excepted the output of “routinely relied upon commercial software”; that clause was dropped out of concern that not all commonly used tools are reliable. The proposed Committee Note offers a few examples that would meet the exception as currently phrased: “the results of a mercury-based thermometer, an electronic scale, or a battery-operated digital thermometer.” While it seems obvious that the output of a calculator would meet the exception and that a Rule 26 report drafted in its entirety by an AI chatbot would not, we predict that battles over exactly what constitutes a “basic scientific instrument” will initially outpace battles over the admissibility of specific evidence under Rule 707. And given the proposed Committee Note’s observation that “the rule does not apply when the court can take judicial notice that the machine output is reliable” under Rule 201, the “basic scientific instrument” inquiry may come to resemble the Frye “general acceptance” test for scientific evidence, the very test displaced by the modern Rule 702 reliability inquiry that other machine-generated outputs would need to satisfy under Rule 707.
These are by no means the only questions courts will need to wrestle with when applying proposed Rule 707. While parties and counsel should be aware of the potential efficiencies and impact of machine-generated “expert” opinions on litigation, there is reason to believe courts will be slow to accept the more ambitious uses of such evidence, even under Rule 707 as proposed.