California’s attempt to impose content moderation transparency requirements on social media platforms has suffered a significant setback. Last month, the state reached a settlement with X Corp. (formerly Twitter), effectively stripping AB 587 of its most controversial provisions.
The Legal Challenge: X Corp. v. Bonta
AB 587, signed into law in 2022, was championed by California Governor Gavin Newsom and Attorney General Rob Bonta as a measure to enhance social media transparency. As enacted, the law required large social media companies (those generating more than $100 million in gross annual revenue) to submit semiannual reports detailing how they define and enforce policies on hate speech, disinformation, extremism, harassment, and foreign political interference.
X Corp. sued California (X Corp. v. Bonta) in the U.S. District Court for the Eastern District of California, arguing that AB 587’s disclosure requirements violated the First Amendment by compelling companies to reveal internal content moderation policies. On appeal, the Ninth Circuit found that the content category report provisions likely violated the First Amendment and held that their enforcement should be preliminarily enjoined. Following that ruling, California agreed to drop the challenged provisions, and the case was remanded to the Eastern District of California for final resolution. There, the parties reached a settlement under which California formally committed to removing the contested provisions, and the district court ultimately entered a final judgment and permanent injunction barring enforcement of those provisions.
Overview of AB 587: Content Moderation Requirements
Before facing legal challenges, the original version of AB 587 introduced comprehensive requirements to enhance transparency and accountability in social media content moderation for users in California. Specifically, the bill mandated that large social media companies clearly define and disclose their terms of service and enforcement policies, ensuring users would understand how content is flagged, removed, or otherwise actioned.
In both its original and current forms, AB 587 required social media companies to publicly post their terms of service for each platform they own or operate. Additionally, under AB 587’s original version, those terms had to clearly explain how users could flag harmful content and what actions the company could take against violators, and had to provide contact information for inquiries.
AB 587 also required social media companies to submit semiannual reports to the state attorney general detailing their content moderation practices. These reports had to include definitions of key categories such as hate speech, misinformation, extremism, and harassment. The law further required platforms to break down the information into specific categories, such as the type of content and media involved, how content was flagged (e.g., by users, AI, or moderators), and how enforcement actions were carried out (e.g., by company staff or automated systems). Noncompliance carried substantial civil penalties, with fines reaching up to $15,000 per violation per day. Notably, AB 587 exempted smaller platforms with annual gross revenue under $100 million, as well as those focused solely on direct messaging or commercial transactions, from these reporting requirements.
What Remains and What Was Removed
Following X Corp.’s legal challenges and the settlement, the law has been significantly narrowed. While platforms must still post their terms of service and report changes to the state twice a year, the most rigorous reporting requirements (such as detailed data on flagged content, enforcement actions, and associated metrics) have been eliminated, weakening the law’s ability to mandate transparency in content moderation.
As part of the settlement, California removed provisions that would have required platforms to:
- Define and disclose their policies on hate speech, extremism, and disinformation and
- Report data on flagged or removed posts related to those categories.
What remains is a stripped-down version of AB 587:
- Social media platforms must publicly post their terms of service and
- Platforms must provide the state with a summary of any changes twice a year.
California will also pay X approximately $350,000 in attorneys’ fees.
Ninth Circuit Injunction
Given the high-stakes clash between content moderation and free speech, it is worth rewinding to the Ninth Circuit’s injunction, the ruling that ultimately drove the parties to settle. The court’s decision to preliminarily enjoin AB 587 centered on the First Amendment implications of the provisions requiring large social media companies to disclose their content moderation policies.
The court classified the law as a content-based regulation, meaning it specifically targets speech based on its subject matter rather than imposing neutral, across-the-board requirements. Because AB 587 singled out categories like hate speech, disinformation, and extremism, it triggered the highest level of judicial review: strict scrutiny. Under this standard, a law must be narrowly tailored to serve a compelling government interest, an exceptionally difficult bar to clear. The court found that AB 587’s content moderation disclosure requirements likely failed this test, as they were overly broad and not the least restrictive means of achieving transparency. It suggested that less burdensome alternatives, such as requiring platforms to disclose whether they moderate certain types of speech or providing anonymized samples of removed posts, could achieve similar goals without running afoul of the First Amendment.
Adding to its concerns, the court also ruled that a facial challenge to these provisions was warranted, emphasizing that the constitutional issues weren’t limited to a few companies but applied universally across the industry. A facial challenge argues that a law is unconstitutional in all its applications, whereas an as-applied challenge contends that a law is unconstitutional only in specific circumstances. Ultimately, the decision underscored the First Amendment pitfalls of compelling non-commercial speech and imposing content-based regulations, making it clear that key provisions of AB 587 were unlikely to survive judicial scrutiny.
A Victory for Free Speech or a Regulatory Gap?
The rollback of AB 587 raises fundamental questions about the balance between free expression and platform accountability. Supporters of the law argue that its dismantling removes a critical layer of oversight, allowing social media platforms to moderate content without meaningful transparency. Without disclosure requirements, critics warn, platforms can enforce (or selectively ignore) content policies without public scrutiny, potentially enabling inconsistent enforcement, algorithmic bias, or the unchecked spread of harmful content.
Opponents of the law, on the other hand, herald the result as a major win for free speech. X Corp. and other challengers view the outcome as a necessary safeguard against government overreach, preventing what they argue was an unconstitutional attempt to dictate how platforms handle speech. The decision reinforces the principle that social media companies, as private entities, have the right to set and enforce their own content moderation policies without compelled disclosures that might indirectly pressure them to act in ways that align with state preferences.
This case highlights a crucial yet often misunderstood aspect of free speech in the context of social media. Many equate free speech with the ability to post anything on a platform without restriction, but the First Amendment guarantees no such right on private platforms. It protects individuals from government censorship; it does not prevent private companies from curating, removing, or restricting content based on their own policies. The distinction matters because social media companies, as private entities, are not obligated to host all forms of speech: however central these platforms have become to public discourse, they remain private spaces where content moderation rests within the discretion of the platform itself.
The injunction and subsequent settlement reinforce the principle that the government cannot compel private companies to alter their content moderation practices or disclose sensitive information in ways that could serve governmental or external interests. That protection, however, sits in tension with calls for accountability: safeguarding companies’ discretion over their moderation policies may leave some individuals’ voices limited, raising important questions about the role of private companies in facilitating online speech. Ultimately, the saga between X and California underscores a fundamental reality of the digital age: while social media platforms are pivotal spaces for public communication, they remain private entities with constitutional rights that shape the scope and nature of online speech.
Looking Ahead: The Future of Content Moderation and State Regulation
In the short term, the outcome is unlikely to significantly alter the content users see online. Platforms still retain full discretion over their moderation policies, and some already publish high-level transparency reports. However, the decision does cement a growing shift away from regulatory intervention in platform governance. It also underscores broader debates about the evolving nature of content moderation. Since Elon Musk’s acquisition of Twitter (now X) in 2022, the platform has shifted toward a more decentralized, user-driven approach, relying on Community Notes, fact-checking, and user reporting rather than top-down enforcement. Proponents argue that this model enhances free expression by reducing corporate gatekeeping, while critics contend it allows misinformation and harmful content to spread more freely, undermining public trust in online discourse.
With AB 587 now a shadow of its original form, the broader question remains: Will California lawmakers regroup and attempt a more narrowly tailored regulatory approach, or has the judiciary drawn a hard line against state intervention in content moderation? For now, platforms like X have secured what they perceive as relief from government overreach, but the legal and political battles over online speech are far from over.