California Courts Announce New AI Regulations

Foley & Lardner LLP
On July 18, 2025, California’s Judicial Council approved a set of rules for integrating generative AI into judicial operations. With the adoption of Rule 10.430 and Standard 10.80, the courts aim to establish the country’s first broad framework for generative AI use in court procedures. The new guidelines are expected to take effect in September 2025.

The rules grew out of a task force convened under Chief Justice Patricia Guerrero in 2024, charged with balancing innovation and caution so that AI improves efficiency without compromising trust. Under the new rules, all state courts using AI must adopt clear policies addressing confidentiality, bias, and accuracy by December 15, 2025.

There has been considerable public and private discussion about why these guardrails are critical. While AI can streamline tasks such as researching case law, drafting memos, and summarizing briefs, saving court employees time, significant risks remain, including data breaches and biased outputs. To address them, the rules prohibit feeding sensitive information, such as driver’s license numbers, into public AI tools. They also mandate human review of all AI-generated outputs and require clear labeling of AI-created public content.

According to the ABA Journal, each court’s policy must “prohibit the entry of confidential, personal identifying, or other nonpublic information into a public generative AI system,” as well as “require disclosure of the use of or reliance on generative AI if the final version of a written, visual, or audio work provided to the public consists entirely of generative AI outputs.”

The rules do not allow AI to make decisions or act autonomously; human oversight ensures that AI supports, rather than supplants, judicial expertise.

By automating repetitive tasks, AI can speed case resolutions and reduce workloads. However, the rules acknowledge AI’s limitations, particularly around bias: systems trained on historical data can, if unchecked, inadvertently amplify societal inequities. California’s framework requires courts to be proactive in preventing discriminatory use while still maximizing AI’s strengths and mitigating its risks.

Transparency, accuracy, privacy, and security are other highlighted areas. AI-generated documents or opinions made public must be disclosed as such, a step toward reinforcing the legal system’s foundation of trust.

California is the largest state to adopt such a comprehensive framework for AI use in its courts, positioning the policy as a potential national blueprint. It could set a standard for responsible AI use, and other states, such as New York, are exploring their own AI rules.

By outlining clear, ethical guidelines, California is leading the way for a judiciary that’s faster, fairer, and more accessible.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Foley & Lardner LLP
