California Courts Adopt Rule Governing the State’s Generative AI Use

Morgan Lewis

Following the year-long work of its Artificial Intelligence Task Force, California’s Judicial Council has adopted Rule 10.430 addressing the use of generative AI by the state’s judicial branch. California is the largest, and one of the first, court systems in the United States to adopt such a framework. This LawFlash covers the content and scope of the rule and its accompanying guidelines, and offers advice on how to use the courts’ approach to anticipate developments in this area and how best to prepare.

THE RULE AND STANDARD

Rule 10.430, Generative Artificial Intelligence Use Policies, applies to the California Superior Courts, Courts of Appeal, and Supreme Court. It requires that any court that permits the use of generative AI by its staff or judicial officers must adopt a use policy by December 15, 2025. The accompanying Standard 10.80 functions as a set of guidelines for policy drafting. Each court may either adopt the guidelines or draft a policy of its own reflecting the content and scope of the rule.

GUIDING PRINCIPLES

The rule outlines six guiding principles, which are fleshed out in policy form in the accompanying standard:

  • Confidentiality: A prohibition on the entry of confidential, personally identifying, or other non-public information into public generative AI tools
  • Discrimination: A prohibition on the use of AI to discriminate on the basis of a broad range of protected classes
  • Accuracy: A requirement that staff and judicial officers take reasonable steps to ensure the accuracy of material sourced or partially sourced from generative AI
  • Bias: A requirement that staff and judicial officers take reasonable steps to remove biased, offensive, or harmful content
  • Disclosure: A requirement that staff and judicial officers disclose the use of AI where the final version of a publicly provided work consists entirely of generative AI outputs
  • Ethics: A requirement that the use of generative AI comply with all applicable laws, ethical rules, court policies, and conduct rules

The task force recognized that, given the rapid evolution of these tools and the underlying technology, the preferred approach was one that balances uniformity with flexibility. Rather than prescribing whether and how courts may employ these technologies, the rule instead situates the use of AI within a framework that reflects and applies broad legal, ethical, and professional principles.

PRACTICAL CONSIDERATIONS AND TAKEAWAYS FOR LAWYERS

California’s guidance on the use of generative AI by courts and judicial employees offers meaningful cues for practicing lawyers and their clients. While it does not bind litigants, the rule reflects how the judiciary is approaching generative AI and creates expectations that will influence litigation conduct, judicial receptiveness to the use of AI tools by lawyers, and ethical obligations.

Transparency and Disclosure Will Be Expected

Lawyers should consider proactive disclosure when generative AI tools are used in preparing filings, particularly for legal research or drafting. California’s guidance emphasizes that AI-generated content must not be misrepresented as human-generated. While the rule applies only to judicial officers and court staff, this principle suggests that lawyers may face scrutiny if they present AI-generated arguments or citations without proper review and verification.

Accuracy and Verification Are Critical

Lawyers must independently verify the accuracy of any AI-generated legal content. The guidelines warn of hallucinations, bias, and factual errors in generative AI outputs. Courts will expect lawyers who use AI tools to have validated the information, just as they would when relying on associate or paralegal work.

Client Confidentiality Must Be Protected

AI tools should not be used with confidential client information unless adequate safeguards are in place. At a minimum, lawyers should understand the difference between public-facing tools and their more secure counterparts.

Bias and Discrimination Risks Should Be Considered

The rule acknowledges that AI can reproduce or amplify systemic bias. Lawyers using AI to support decision-making, such as jury selection models or predictive analytics, should vet those tools for fairness and transparency.

Do Not Rely on AI to Bypass Ethical or Procedural Rules

Judges and court staff are prohibited from using AI to make judicial decisions. Lawyers should similarly avoid outsourcing legal judgment. The ethical concerns raised by the task force were not directly resolved in the rule or guidelines; rather, the group referred these questions to the California Code of Judicial Ethics and related ethical guidance. There are, however, obvious parallels between the work product of judges and that of lawyers, which California attorneys and their clients should consider in light of the rule.

Judges Are Becoming AI Literate

As courts develop internal policies and training to evaluate and control AI use, expectations for lawyers and their clients will rise accordingly. Expect judges to question whether a filing reflects improper reliance on AI, especially when it is poorly drafted or incorrect.

Lawyers should also educate clients about appropriate AI use, especially corporate clients using generative AI for internal compliance, human resources, or legal triage. Even internal uses that appear safe may still carry risk and should be scrutinized accordingly.

CONCLUSION

Given California’s status as the nation’s largest court system, this new judicial standard not only affects a massive number of court personnel, practitioners, and litigants, but also will likely contribute to an emerging national standard. Rule 10.430 offers a blueprint for forward-thinking lawyers to shape their practice and counsel around an emerging consensus among the nation’s judges on the application and limits of generative AI.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Morgan Lewis
