Calif. Bar Exam Fiasco Shows Why Attys Must Disclose AI Use

Troutman Pepper Locke
Published in Law360 on June 10, 2025. © Copyright 2025, Portfolio Media, Inc., publisher of Law360. Reprinted here with permission.

Generative artificial intelligence is everywhere. It seems that every industry — even the most sophisticated — is evaluating how to ethically and efficiently integrate generative AI into its business models and work streams.

Amid these innovation efforts, however, is an underlying current of suspicion, distrust and skepticism about how and whether generative AI can be used ethically.

For the legal industry, state statutes governing generative AI, and standing court orders setting parameters for when and how generative AI may be used in court filings, are helpful guideposts. Those guideposts indicate that, when lawyers or legal professionals use generative AI, disclosure and vetting of the generative AI-created content are key.

The February California bar exam offers a timely case study on how the failure to disclose the use and vetting of generative AI can break trust, create scandal and call ethical integrity into question.

Undeniably, California’s February bar exam was rife with controversy and caused a public scandal. The exam — administered virtually and on a new testing platform for the first time — presented myriad glitches, irregularities and disruptions for test-takers. These issues included the platform crashing before the exam even began, delays between a user’s input and the screen’s display, frequent error messages, and an inability to save essay responses.

Needless to say, these are all issues no bar examinee wishes to experience on what could be one of the most determinative days of their career.

Adding fuel to the fire of media coverage was the revelation in April that the State Bar of California’s Committee of Bar Examiners used generative AI to craft some of the exam’s questions. This discovery has exacerbated the broken trust that now exists between the bar examiners, test-takers and California’s legal community.

But it is important to stick to the facts, which can often be buried or overlooked in the coverage of scandalous events. What exactly happened here?

After the exam took place, the bar examiners disclosed in a statement that some multiple choice questions — 29 out of 200, 23 of which were used for scoring — were “developed with the assistance of AI and subsequently reviewed by content validation panels and a subject matter expert in advance of the exam.”

It is worth noting that the AI-generated questions were created by ACS Ventures, a “psychometric consulting company that works with clients for the design, development, and evaluation of their assessments” and that “partner[s] with testing agencies.”

As the usage of generative AI becomes more mainstream, this kind of arrangement — i.e., a company using a third party with generative AI expertise to produce generative AI-created content — will likely become more common, as well.

Here, ACS Ventures — the entity that actually used generative AI to draft the questions — does not appear to be under scrutiny related to the incident. The bar examiners who enlisted its services, however, are the ones who have to answer for the undisclosed choice to use generative AI. This is further evidence that the issue is not the bar examiners’ use of generative AI per se, but the manner in which they used it.

Specifically, the bar examiners have conceded that the use of generative AI was “not clearly communicated to state bar leadership.” While some commentators seem critical of the fact that the bar examiners used generative AI in the first place, the broader public condemnation likely arises from broken trust that had already resulted from the exam’s numerous failures.

Prior to the exam, no one aside from the bar examiners and ACS Ventures knew that generative AI was used to craft some of the questions. No one — from the legal community to the test-takers to the California Supreme Court, which oversees the State Bar of California and the bar examiners — was aware that any of the questions on the exam were drafted with the use of generative AI. Now, many of these stakeholders are critical after learning what occurred.

This is evident in several respects. The California Senate Judiciary Committee has approved a bill that calls for a full audit of the exam. Current State Bar of California Executive Director Leah Wilson has announced that she will resign from her role, effective July of this year. And, most importantly for the test-takers, the California Supreme Court approved adjustments to the passing score.

Setting aside the justified public outrage over the exam’s failures, proponents of generative AI in the legal profession cannot help but wonder: Had the bar examiners’ usage and vetting of generative AI been clearly communicated before the exam took place, would it have been met with the same ire? The answer is likely no.

Indeed, the ethical and transparent use of generative AI is in keeping with California’s stated priorities, as evidenced by the enactment of the California AI Transparency Act, which becomes effective Jan. 1, 2026. The act will require generative AI systems that are publicly accessible within California, and that have more than 1 million monthly visitors or users, to implement robust measures to disclose when content has been generated or modified by generative AI.

Additionally, and quite notably, the State Bar of California’s Standing Committee on Professional Responsibility and Conduct also released its own “Practical Guidance for the Use of Generative Artificial Intelligence in the Practice of Law.”

This guidance includes a recommendation for how practitioners should communicate the use of generative AI: “The lawyer should consider disclosure to their client that they intend to use generative AI in the representation, including how the technology will be used, and the benefits and risk of such use.”

Further, numerous California judges — in both state and federal courts — have issued standing orders outlining guidelines that parties must follow if they choose to use generative AI as an aid in the drafting of legal documents. To name a few examples:

  • Judge Kimberly Knill in the Orange County Superior Court has issued a standing order requiring that the use of generative AI in preparing any “paper filed with the Court” must be disclosed in a clear and plain factual statement noting that the AI-generated work product has been verified as accurate.
  • U.S. District Judge Anne Hwang in the U.S. District Court for the Central District of California entered a similar order, requiring that any filing containing content created by AI must include a separate declaration disclosing that generative AI was used and verified.

Numerous other California judges have entered similar orders.[1]

The common thread in all of these orders is twofold: (1) The use of generative AI must be disclosed, and (2) the attorney who used generative AI must certify that they checked and verified the accuracy of the AI-created work product.

Given all this, it should come as no surprise that organizations such as the State Bar of California would use generative AI to draft bar exam questions. But the bar examiners’ admitted failure to clearly communicate that they planned to use generative AI in drafting test questions is perhaps where they erred.

California’s policies and its own courts’ standing orders demonstrate that the use of generative AI can be both acceptable and appropriate if it is done transparently. It’s the secret or undisclosed use of generative AI, however, that makes people feel misled.

What can the legal industry and practitioners take away from this? While a bar exam is not a legal proceeding, it represents an important professional milestone for applicants who wish to become lawyers, and the drafting of exam questions is a weighty task that determines who can practice law.

It is held to a high standard in the industry, as evidenced by the fact that the California Supreme Court has direct oversight over the California bar exam. Practicing attorneys are similarly held to high standards by their clients, their colleagues, the rules of professional conduct and the rules of courts.

Here, the California Supreme Court has not condemned the bar examiners’ usage of generative AI, but it has stated that it was not informed about it, and, accordingly, it has demanded answers from the bar examiners. Because the bar examiners disclosed their usage of generative AI after the fact, they will now have to provide those answers publicly amid a maelstrom of already negative reporting.

To avoid this kind of quasi-adversarial public spectacle in their own practices, practitioners are advised to do the following:

  • Use a generative AI provider that is trusted and legitimate.
  • Before you use generative AI, have a process in place to vet and verify any and all generative AI-created content that you will include in any work product.
  • If you are submitting work product that was produced with the assistance of generative AI-created content anywhere — to courts, clients, colleagues, etc. — disclose both that you used generative AI and that you vetted and verified the content created by generative AI. If you are submitting this work to a court, make sure you are in compliance with any standing orders that court may have entered regarding generative AI.

Following these steps will help legal practitioners ethically use generative AI in a manner that fosters transparency and preserves trust in the profession.


[1] This includes U.S. District Judges Fred Slaughter (C.D. Cal.), Todd Robinson (S.D. Cal.), Eumi Lee (N.D. Cal.), Araceli Martínez-Olguín (N.D. Cal.), Stanley Blumenfeld (C.D. Cal.) and Rita Lin (N.D. Cal.), as well as U.S. Magistrate Judges Rozella Oliver (C.D. Cal.) and Peter Kang (N.D. Cal.).

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Troutman Pepper Locke
