By now, most lawyers should know the dangers of relying on generative AI for legal research. A big risk is that AI will fabricate case citations and quotations. Lawyers who fail to verify those citations and quotations before filing a brief can face serious consequences.
In recent years, many federal district judges have warned and sanctioned counsel over the use of generative AI for legal research. Now a bankruptcy court has issued what appears to be the first published decision on the topic. In re Marla C. Martin, Case No. 24 B 13368, 2025 WL 2017224 (Bankr. N.D. Ill. July 18, 2025).[i]
The decision arises from a curious set of facts in an individual’s chapter 13 bankruptcy case. The debtor had previously filed seven bankruptcy cases, each of which was dismissed. The law firm representing her in the eighth case had handled three of the prior filings.
The debtor filed the eighth case after she failed to pay real estate taxes on her home for six years and a lender obtained a tax lien on the home. The bankruptcy court gave the debtor many chances to resolve her dispute with the lender and propose a confirmable plan.
Eventually she did propose a plan, but it was not feasible: it proposed to pay creditors $2,400 a month, while the debtor’s schedules showed income of only $1,600 a month. The schedules turned out to be inaccurate, so the debtor amended them and filed a new plan.
In response, the lender submitted what the bankruptcy court characterized as a “kitchen sink” objection. The debtor replied that the lender lacked standing to assert arguments not tied to its claim, including the argument that the amended plan was not feasible.
The bankruptcy judge was curious about what case law the debtor relied on for that argument. This is where AI comes into the picture. The judge and his staff studied the cases the debtor’s counsel had cited.
Four cases, in particular, stood out. The court concluded that “none of them exist as alleged in [the lawyer’s] brief. Worse still, none of the quotations relied upon in the . . . brief are actual statements written by any court.” 2025 WL 2017224, at *4.
The judge confronted the debtor’s counsel. Counsel said he had not reviewed the quotations because “he assumed that an AI program would not fabricate quotes entirely.” Id.
The court noted that, “[a]t the very least, the duties imposed by Rule 11 require that attorneys read, and thereby confirm the existence and validity of, the legal authorities on which they rely.” Benjamin v. Costco Wholesale Corp., No. 24-cv-7399, 2025 WL 1195925, at *5 (E.D.N.Y. Apr. 24, 2025) (quoting Park v. Kim, 91 F.4th 610, 615 (2d Cir. 2024)) (emphasis in Benjamin).
“At this point, to be blunt, any lawyer unaware that using generative AI platforms to do legal research is playing with fire is living in a cloud.” 2025 WL 2017224, at *6. The judge said further that, “[t]he bottom line is this: at this point, no lawyer should be using ChatGPT or any other generative AI product to perform research without verifying the results.” Id. at *7.
Having ruled that the debtor’s counsel violated Rule 11, the court considered possible sanctions. It imposed a $5,500 fine and directed counsel to attend an in-person panel discussion, “Smarter than Ever: The Potential and Perils of Artificial Intelligence,” at an industry conference.
[i] The decision in In re Marla C. Martin cites many other cases and articles discussing the pitfalls of AI-generated legal research. 2025 WL 2017224, at *8 & nn. 6-10.