Artificial Intelligence (AI) is transforming the legal profession, offering tremendous opportunities to enhance efficiency and access to justice. While AI tools are reshaping how legal professionals operate, their use implicates critical considerations of ethics, privacy, and court rules. Here, we provide an update on the ethical issues surrounding AI, summarize recent court opinions addressing improper use of AI tools, and outline AI best practices for lawyers.
Ethical issues in AI
Prompted by the rapid development of AI tools, Chief Justice John Roberts dedicated his 2023 year-end report on the federal judiciary to a discussion of the benefits and challenges of using AI in legal practice.i The report traced the integration of new technologies into the legal field over the last 150 years, from the adoption of typewriters in the 19th century, to the introduction of photocopy machines in the 1960s, to the era of computerization beginning in the 1970s.ii The report then turned to the “latest technological frontier” — AI — and highlighted the potential benefits it could bring, including improved access to justice for those with limited resources and assistance to parties and courts in seeking the “just, speedy, and inexpensive” resolution of cases.iii Chief Justice Roberts, however, also warned of AI’s potential dangers, such as hallucination, bias, and loss of confidentiality.iv
Multiple state bar associations have also issued ethics opinions concerning the use of generative AI in the legal setting.v Generally, state bars stress that lawyers must abide by the duty of confidentiality and admonish against entering confidential client information into AI programs that lack adequate data-security protections. Many of the opinions further discuss lawyers’ duties of competence and diligence in light of the risk that AI may output false, inaccurate, or biased information, as well as the prohibition on billing clients for time saved through the use of AI tools. Some guidance also urges caution in deploying chatbots on law firm websites, as they present a risk that an attorney-client relationship may be formed without the lawyer’s knowledge.
Beyond the discussion of ethical obligations, some state bars offer practical advice to help lawyers integrate AI into their practice. For example, the Florida Bar published a “getting started with AI” guide, which covers the concept of AI and related terminology, how generative AI works, and an overview of different AI models, including those that are law-specific.vi Perhaps most importantly, the guide outlines categories of legal tasks that may benefit from the use of AI and indicates whether those tasks can be accomplished with existing legal AI models.
How courts handle AI issues
AI hallucination — the “tendency of AI tools to produce outputs that are demonstrably false”vii — presents a serious risk for attorneys who use AI in their legal practice. Under Federal Rule of Civil Procedure (FRCP) 11(b)(2), an attorney who submits a pleading, motion, or other paper to a court certifies that the arguments therein “are warranted by existing law or by a nonfrivolous argument for extending, modifying, or reversing existing law or for establishing new law.” Fed. R. Civ. P. 11(b)(2). If legal authority cited in a court submission is the product of AI hallucination, the attorney may be found to have violated FRCP 11 and may be subject to sanctions.viii Indeed, a growing body of case law details the woes of attorneys who use AI for legal tasks without proper safeguards. Failure to validate information provided by AI tools is a common thread in these cases.
Several recent opinions address lawyers’ failure to comply with FRCP 11 when using AI. Generally, courts have been willing to sanction lawyers who double down on AI-related errors rather than promptly correct them. For example, in an early and widely publicized AI-related case — Mata v. Avianca, 678 F. Supp. 3d 443 (S.D.N.Y. 2023) — lawyers for the plaintiff used an AI tool that fabricated the cases cited in a brief submitted to the court. After the citation issue was discovered, and following some back and forth with the court, the plaintiff’s counsel eventually admitted to using AI in drafting the pleading and to not confirming the validity of the referenced cases. The court issued a scathing opinion stating that the lawyers “abandoned their responsibilities” and “then continued to stand by the fake opinions after judicial orders called their existence into question.” Id. at 448. Relying on FRCP 11, the court imposed sanctions of $5,000. See id. at 466.
On the other hand, courts generally have been lenient with attorneys who immediately admit wrongdoing after problems with their use of AI emerge. For example, in United States v. Cohen, 724 F. Supp. 3d 251, 253 (S.D.N.Y. 2024), attorneys for the defendant submitted a brief that contained nonexistent cases. The attorneys were apologetic and did not attempt to hide the use of AI. The court found that the attorneys had failed to verify their client’s edits, which had been made with the help of AI, before filing the pleading. See id. at 254–55, 258–60. Based on that record, the court concluded that there was no bad faith warranting sanctions but noted that the attorneys’ actions were “embarrassing and certainly negligent.” Id. at 258.
More recently, however, courts have had little patience for lawyers who fail to confirm the accuracy of AI-generated content before submitting court filings. For example, in Wadsworth v. Walmart, 348 F.R.D. 489, 493 (D. Wyo. 2025), the court addressed the plaintiff’s submission of a motion in limine that cited nine cases, eight of which were fake citations generated by AI. While the attorneys for the plaintiff were forthcoming, honest, and apologetic, and tried to remedy the situation, that did not absolve them of sanctions. Id. at 496–99. The court stressed that while technology may change, the requirements of FRCP 11 do not. Id. at 499. Ultimately, the court concluded that the attorneys had failed to make a reasonable inquiry into the law and that this failure warranted sanctions, including monetary fines and revocation of pro hac vice admissions. Id.
Improper use of AI is not limited to attorneys. Many areas of legal practice — including patent law — rely heavily on expert testimony. And there is a growing risk that expert witnesses may inadvertently submit AI-hallucinated information in declarations and expert reports, undermining their credibility and, ultimately, hurting litigants. For example, in Kohls v. Ellison, No. 24-cv-3754 (LMP/DLM), 2025 WL 66514 (D. Minn. Jan. 10, 2025), an expert submitted a declaration to the court that included citations fabricated by AI. To make matters worse, the expert was a specialist in AI and misinformation, and the subject of the declaration was the dangers of AI deepfakes. Id. at *1, *3. The court concluded that the expert’s error “shatter[ed] his credibility with th[e] Court.” Id. at *4. It also suggested that, given the rapid proliferation of AI use, FRCP 11 “may now require attorneys to ask their witnesses whether they have used AI in drafting their declarations and what they have done to verify any AI-generated content.” Id.
To address the epidemic of citations to fake cases, multiple courts and judges have adopted rules governing AI use in litigation. For example, the local civil rules of the Eastern District of Texas include sections on the use of AI by both pro se litigants and attorneys.ix These rules highlight the requirements of FRCP 11 and caution that AI tools may produce factually or legally inaccurate content.x Some judges, such as District Judge Eumi K. Lee of the Northern District of California, go further than a simple caution: their standing orders require that any submission containing AI-generated content include a certification that lead trial counsel has personally verified the content’s accuracy, and they warn that failure to adhere to that requirement may be grounds for sanctions.xi
AI use by the judiciary
AI tools are finding their way into the judiciary as well. Exploring how the judiciary may use AI, a group of judges, joined by a computer science professor and a lawyer, recently published a set of guidelines for judges and their chambers seeking to use AI in their work.xii The guidelines discuss the core principles of the judiciary and stress that any use of AI tools must not compromise judicial officers’ independence, integrity, and impartiality.xiii They further discuss the capabilities and limitations of AI tools and list tasks for which AI could be useful in the judicial setting, including conducting legal research with tools trained on a comprehensive collection of reputable authorities; drafting routine administrative orders; searching and summarizing depositions, exhibits, and pleadings; determining whether filings misstate or omit relevant legal authority; and editing or proofreading draft opinions.xiv
AI best practices
The AI age is upon us, and it holds tremendous potential for the legal field. Because AI will inevitably grow in both capability and complexity, attorneys must learn how to incorporate this new and useful technology into their practices ethically and effectively.
Given the risks and rewards of AI, lawyers should develop internal policies on the use of AI tools and provide training on their proper use for legal tasks. Those policies should address the ethical concerns inherent in the use of AI and stress the importance of safeguarding clients’ confidential information, as well as maintaining transparency with clients and courts regarding the use of AI tools in their matters. Further, the policies should emphasize that while AI may enhance productivity, it is not a substitute for professional judgment; lawyers remain bound by applicable rules of practice, including the FRCP, local court rules, and standing orders.
Attorneys should also maintain competence with AI tools and stay abreast of advances in this fast-moving field. Doing so can increase the efficiency of their practice and inform their compliance with ethical obligations and court rules.
i 2023 Year-End Report on the Federal Judiciary, United States Supreme Court (2023).
ii See id. at 2–5.
iii Id. at 5–7.
iv See id.
v See, e.g., Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law Executive Summary, The State Bar of California Standing Committee on Professional Responsibility and Conduct; Formal Opinion 2024-5: Ethical Obligations of Lawyers and Law Firms Relating to the Use of Generative Artificial Intelligence in the Practice of Law, The New York City Bar Association Committee on Professional Ethics (2024); Florida Bar Ethics Opinion 24-1, The Florida Bar (2024).
vi The Florida Bar Guide to Getting Started with AI, Legal Fuel (July 28, 2025).
vii Varun Magesh et al., Hallucination-Free? Assessing the Reliability of Leading AI Legal Research Tools at 3 (2025).
viii See Fed. R. Civ. P. 11(c)(1) (“If, after notice and a reasonable opportunity to respond, the court determines that Rule 11(b) has been violated, the court may impose an appropriate sanction on any attorney, law firm, or party that violated the rule or is responsible for the violation.”).
ix United States District Court for the Eastern District of Texas Local Rules as of December 1, 2023, United States District Court for the Eastern District of Texas (2023).
x See id. at 11, 41.
xi Standing Order for Civil Cases Before Judge Eumi K. Lee, United States District Court Northern District of California (2024).
xii See Hon. Herbert B. Dixon Jr. et al., Navigating AI in the Judiciary: New Guidelines for Judges and Their Chambers, 26 Sedona Conf. J. (2025) (publication forthcoming) (“Navigating AI in the Judiciary”); see also Hon. Herbert B. Dixon Jr. et al., Webinar on Navigating AI in the Judiciary: New Guidelines for Judges and Their Chambers, The Sedona Conference (2025).
xiii See Navigating AI in the Judiciary at 3–5.
xiv See id. at 5–7.