Federal Court Turns Up the Heat on Attorneys Using ChatGPT for Research

Esquire Deposition Solutions, LLC
Most lawyers regard Mata v. Avianca, Inc., 678 F. Supp. 3d 443, 448 (S.D.N.Y. 2023), as the leading case on the consequences of misuse of generative artificial intelligence in legal pleadings. It was from Mata v. Avianca that the legal community first became widely aware of the distinct possibility that publicly available generative AI tools like ChatGPT can yield “hallucinations” — i.e., completely fabricated legal authority — that unsuspecting legal researchers might insert into their court filings.

The Mata v. Avianca court imposed monetary fines and other remedial measures against the offending attorneys, citing Rule 11 of the Federal Rules of Civil Procedure as well as the federal judiciary’s inherent authority to sanction abuses of the judicial process. After the Mata decision on June 22, 2023, the shortcomings of ChatGPT for legal research and the danger it poses to professional reputations were on most lawyers’ radar screens. Professional education programs blossomed. Cautionary articles appeared in nearly every state bar’s official publication. Ethics opinions outlining the many ways that the use of generative AI intersects with lawyer ethics obligations were published by the American Bar Association and numerous courts and state bar associations. A few courts issued standing orders requiring the disclosure of generative AI use in legal pleadings. Many more issued orders sanctioning attorneys for including citations and statements of law fabricated by ChatGPT. Law firms created internal policies governing — and, in some cases, forbidding — the use of generative AI for client work. And legal research companies brought to market generative AI products that, they claimed, addressed ChatGPT’s performance, privacy, and ethical shortcomings.

ChatGPT and Large Law Firms

Late last month, the U.S. District Court for the Northern District of Alabama issued an equally significant opinion on the use of ChatGPT — or, indeed, any generative AI technology — in legal pleadings. The opinion, Johnson v. Dunn, No. 2:21-cv-1701 (N.D. Ala., July 23, 2025), involved a large and well-regarded law firm’s use of hallucinated legal citations in a motion for leave to take the plaintiff’s deposition. The district judge’s opinion supporting the court’s order imposing sanctions will be important for several reasons.

First, Johnson v. Dunn underscores the judiciary’s growing impatience with AI-generated errors in pleadings filed in its courts. In Johnson v. Dunn, the court declared that monetary sanctions are proving ineffective at deterring false, AI-generated statements of law in legal pleadings. Something more is needed, it said.

According to the court:

If fines and public embarrassment were effective deterrents, there would not be so many cases to cite. And in any event, fines do not account for the extreme dereliction of professional responsibility that fabricating citations reflects, nor for the many harms it causes.

Instead of monetary sanctions, the court decided that the appropriate sanction was to disqualify the offending attorneys from representing the client for the remainder of the case. The court also directed that its opinion be published in the Federal Supplement, and that the court clerk inform bar regulators in each state where the responsible attorneys are licensed to practice.

Second, Johnson v. Dunn involved a law firm that had adopted a proactive and responsible approach toward addressing the dangers of generative AI in firm operations. Well before the offending conduct at issue in this case occurred, the firm had circulated to all attorneys an email alerting them to the dangers of using generative AI for client work. And the firm forbade the use of generative AI without the permission of practice group leaders. (As it turned out, the offending ChatGPT hallucination was inserted into the pleading by a practice group co-leader.)

The law firm’s responsible attitude toward generative AI was one of the reasons the court cited for declining to impose sanctions on the firm.

Third, the court’s opinion appears to have significantly raised the professional dangers of signing a legal pleading for submission to a federal court. If an attorney’s signature is on a legal pleading, then that attorney, in the court’s view, is responsible for every matter asserted as true in the pleading — regardless of who initially authored the error. The court declined to accept as an excuse the fact that the error was committed by a supervisor, or that the hallucinated citation was in support of a factually accurate statement of the law (“a stroke of pure luck for these lawyers”), or that the erroneous citation was assumed to have come from prior unobjectionable pleadings in similar matters for the same client.

Fourth, for leading law firms, the fallout after having been caught inserting even a single AI-generated hallucination into a legal pleading appears to be catastrophic, almost like a data breach incident. In Johnson v. Dunn, the offending firm conducted a searching docket review of all of its cases in Alabama federal district courts and the Eleventh Circuit Court of Appeals, finding no other instances of hallucinated citations. The firm also engaged another law firm to independently audit its legal filings for erroneous legal citations — again, finding none.

Fifth, and finally, the court’s opinion suggests that current court rules and ethical standards might not adequately address the harms caused by careless use of generative artificial intelligence in legal pleadings. Rule 11 doesn’t apply to discovery disputes. And an attorney’s ethical obligation of candor toward the tribunal is, in most jurisdictions, implicated only by knowingly false statements. Carelessness arguably falls short of this standard.

The court’s opinion in Johnson v. Dunn should be required reading for all lawyers, but particularly those in large law firms where practice group leaders have supervisory roles and pleadings for important clients often bear the signatures of multiple attorneys, some of whom may not have authored the pleadings above which their names appear.
