Introduction
Global digital platforms face an increasingly complex and fragmented content regulation landscape. Governments worldwide are imposing overlapping and sometimes conflicting rules for monitoring, managing, or restricting online content.
In the European Union (EU), the Digital Services Act (DSA), Digital Markets Act (DMA), and Data Governance Act (DGA) form the EU’s framework for addressing perceived content risks. The United Kingdom (UK) has enacted the Online Safety Act (OSA) and the Digital Markets, Competition and Consumers Act (DMCCA). Together, these acts establish new standards for platform accountability and consumer protection.
In the United States, federal agencies such as the Federal Trade Commission (FTC) and Department of Justice (DOJ) continue to apply existing antitrust, privacy, and consumer protection laws to address platform conduct and content. Meanwhile, California, Florida, Texas, Virginia and other states have adopted new laws targeting data governance, consumer data, and data about minors.
Outside the transatlantic space, countries including Australia, India, Brazil, South Korea, and Singapore have introduced or expanded content regulation regimes. These frameworks address online harms, misinformation, intermediary liability, and algorithmic transparency, often with significant extraterritorial reach and enforcement powers.
This Client Alert provides a comparative overview of key content regulation regimes worldwide, with a focus on recent developments and forward-looking trends. It also offers practical recommendations to help companies manage compliance risks and operationalize global content governance obligations.
1. EU Content Regulation: The Digital Services Act
The DSA, which came into effect in November 2022, establishes a phased rollout of new obligations for digital service providers operating in the EU. Positioned as a central element of the EU’s digital policy framework, the DSA outlines responsibilities that vary based on a provider’s size, role, and reach, with requirements scaled to reflect the nature and scope of the service. Service providers are classified into the following categories:
- Very Large Online Platforms (VLOPs) and Very Large Online Search Engines (VLOSEs): Platforms and search engines with 45 million or more average monthly active users in the EU, equivalent to roughly 10% of the EU’s approximately 450 million residents, fall within this group. The threshold is subject to periodic review. “Active users” are individuals engaging with the service at least once within a specified period (typically monthly), excluding passive interactions. These providers are required to calculate and publish EU-based monthly active user numbers at least every six months. A simplified illustration of the threshold calculation appears after this list.
- Online Platforms: Services that enable interactions between consumers and third-party sellers, such as online marketplaces, app stores, and social media platforms.
- Hosting Services: Cloud or web-based hosting service providers.
- Intermediary Services: Network infrastructure providers, such as ISPs and domain name registrars.
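By way of illustration, the following minimal Python sketch checks a provider’s reported average monthly active EU users against the 45 million designation threshold described above. The figures, names, and the notion of a single pass/fail check are simplifying assumptions for illustration, not a regulatory calculation tool.

```python
# Illustrative sketch only: checks a provider's average monthly active EU users
# against the DSA's VLOP/VLOSE designation threshold (roughly 10% of ~450M EU residents).

EU_POPULATION = 450_000_000           # approximate figure referenced in the DSA framework
VLOP_THRESHOLD = EU_POPULATION // 10  # 45,000,000 average monthly active users

def exceeds_vlop_threshold(avg_monthly_active_eu_users: int) -> bool:
    """Return True if the reported user figure meets or exceeds the designation threshold."""
    return avg_monthly_active_eu_users >= VLOP_THRESHOLD

# Hypothetical example: a platform reporting 52 million average monthly active EU users
print(exceeds_vlop_threshold(52_000_000))  # True -> candidate for VLOP/VLOSE designation
```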
All providers are subject to baseline obligations, including the appointment of a designated point of contact (and if outside the EU, an EU-based representative), clear terms and conditions that respect “fundamental rights,” reporting on content moderation practices, and cooperation with enforcement authorities, such as removing illegal content upon request.
VLOPs and VLOSEs bear enhanced obligations, such as:
- Comprehensive risk assessments addressing illegal content, adverse impacts on “fundamental rights,” disinformation, and electoral manipulation.
- Implementation of risk mitigation strategies, including algorithmic adjustments and user controls.
- Annual independent audits.
- Real-time data access for regulators and vetted researchers.
- Maintenance of a publicly accessible advertising repository.
Each EU Member State must establish a Digital Services Coordinator (DSC) to act as the primary enforcement body, facilitate cross-border cooperation, and liaise with the European Commission (EC) and the newly created European Board for Digital Services (EBDS).
Member State DSCs lead day-to-day DSA enforcement, while the EC directly supervises VLOPs and VLOSEs. Potential penalties for non-compliance can reach up to 6% of a provider’s global annual turnover in the preceding financial year and periodic fines of up to 5% of average daily turnover. The EC has already initiated regulatory investigations under the DSA, including preliminary findings of non-compliance against certain VLOPs and VLOSEs.
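To illustrate the scale of these penalty ceilings, the short sketch below computes the maximum one-off fine (6% of prior-year global turnover) and the maximum periodic penalty (5% of average daily turnover) for a hypothetical provider; the turnover figure is an assumption chosen purely for illustration.

```python
# Illustrative calculation of DSA penalty ceilings for a hypothetical provider.

def dsa_penalty_ceilings(global_annual_turnover: float) -> tuple[float, float]:
    """Return (maximum fine, maximum daily periodic penalty) under the DSA caps."""
    max_fine = 0.06 * global_annual_turnover                    # up to 6% of prior-year global turnover
    max_daily_penalty = 0.05 * (global_annual_turnover / 365)   # up to 5% of average daily turnover
    return max_fine, max_daily_penalty

# Hypothetical provider with EUR 10 billion in global annual turnover
fine_cap, daily_cap = dsa_penalty_ceilings(10_000_000_000)
print(f"Maximum fine: EUR {fine_cap:,.0f}")            # EUR 600,000,000
print(f"Maximum daily penalty: EUR {daily_cap:,.0f}")  # ~EUR 1,369,863
```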
Several EU Member States have also supplemented the DSA with national legislation. Ireland’s Online Safety and Media Regulation Act (OSMRA), effective March 2023, established the Media Commission (Coimisiún na Meán) as an independent supervisory authority with powers extending beyond those required by the DSA. These powers include imposing fines of up to EUR 20 million or 10% of global turnover, directing remedial actions, removing non-compliant audiovisual services, and blocking access to a designated online service within Ireland.
Upcoming DSA-related developments include:
- Mandatory compliance with the Code of Practice on Disinformation by VLOPs and VLOSEs beginning July 1, 2025.
- Standardized DSA transparency reporting templates required to be used as of July 1, 2025, with first reports due in early 2026.
- Further regulatory guidance on the protection of minors online, following the public consultation that closed on June 10, 2025.
Organizations should also monitor other related EU legal developments, including the nascent Digital Fairness Act (DFA), which is expected to address dark patterns, influencer marketing, addictive digital product design, and online profiling.
2. UK Content Regulation: The Online Safety Act, Crime And Policing Bill, And The Digital Markets, Competition And Consumers Act
The Online Safety Act (OSA)
The OSA, enacted in 2023 with its initial duties taking effect in March 2025, establishes a sweeping regulatory regime to address illegal and harmful online content. It imposes a statutory duty of care on digital services to assess, mitigate, and manage content-related risks, particularly in relation to user-generated or shared content. The OSA applies primarily to “user-to-user services” (e.g., social media platforms, messaging apps, and marketplaces) and “search services” (e.g., search engines).
Platforms must identify, assess, and mitigate risks arising from illegal content, including terrorism-related material, child sexual abuse material (CSAM), hate speech, and content deemed harmful to children.
The UK Office of Communications (Ofcom), the UK’s digital communications regulator, oversees OSA enforcement. It has published detailed codes of practice requiring mandatory risk assessments aligned with best practices across 17 priority illegal content categories encompassing over 130 offences.
Ofcom’s four-step risk assessment methodology, illustrated in simplified form after this list, includes:
- Understanding the types of illegal content to assess;
- Assessing the risk level based on likelihood and impact;
- Determining and implementing appropriate risk mitigation measures; and
- Reporting, reviewing, and updating the assessment as necessary.
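The sketch below offers a simplified illustration of the assessment step, assuming a basic likelihood-and-impact scoring matrix. Ofcom’s own guidance is qualitative, and the scales, thresholds, and labels used here are illustrative assumptions rather than the regulator’s methodology.

```python
# Illustrative sketch of a likelihood-and-impact scoring step for an illegal-content
# risk assessment. Scales and thresholds are assumptions, not Ofcom's methodology.

LEVELS = {"negligible": 1, "low": 2, "medium": 3, "high": 4}

def risk_rating(likelihood: str, impact: str) -> str:
    """Combine likelihood and impact levels into an overall risk rating."""
    score = LEVELS[likelihood] * LEVELS[impact]
    if score >= 9:
        return "high"     # prioritize mitigation and record the measures adopted
    if score >= 4:
        return "medium"   # mitigate and keep the assessment under review
    return "low"          # document and revisit at the next review cycle

# Hypothetical assessment of one priority illegal content category
print(risk_rating(likelihood="medium", impact="high"))  # "high"
```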
The OSA applies to an estimated 100,000 services, but additional obligations fall on a subset of platforms with significant reach or societal impact based on a three-tier classification system:
- Category 1: Large user-to-user services (over 34 million UK users or over 7 million users with content sharing/forwarding functionality).
- Category 2A: Large search services (over 7 million UK monthly users).
- Category 2B: Smaller user-to-user services with at least 3 million UK users and a direct messaging function.
Category 1 platforms face the most stringent duties, including enhanced transparency, user empowerment tools, and protections for content of “democratic importance.” Category 2B services have the fewest additional duties. A simplified illustration of the tier thresholds follows.
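The simplified sketch below maps a service’s UK user count and functionality to the three tiers summarized above. The actual categorization regulations turn on additional conditions (for example, the use of content recommender systems), so the function is an illustrative approximation only.

```python
# Simplified, illustrative mapping of a service's UK reach and functionality to the
# OSA's categorization tiers. The real regulations include further conditions.

def osa_category(uk_users: int, is_search: bool,
                 content_sharing: bool, direct_messaging: bool) -> str | None:
    if is_search:
        return "Category 2A" if uk_users > 7_000_000 else None
    if uk_users > 34_000_000 or (uk_users > 7_000_000 and content_sharing):
        return "Category 1"
    if uk_users >= 3_000_000 and direct_messaging:
        return "Category 2B"
    return None  # subject only to the OSA's baseline duties

# Hypothetical examples
print(osa_category(40_000_000, False, True, True))   # Category 1
print(osa_category(8_000_000, True, False, False))   # Category 2A
print(osa_category(4_000_000, False, False, True))   # Category 2B
```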
The OSA codes also impose obligations on services accessible to children, mandating:
- Age verification or assurance mechanisms;
- Age-appropriate content filtering;
- Algorithmic transparency;
- Prohibition of exploitative design patterns (e.g., “dark patterns”).
The consultation period on codes of practice related to illegal harms and protection of children ends in June 2025. Companies must demonstrate compliance with final codes or justify equivalent alternative measures. Platforms must also offer transparent moderation policies, clear user reporting channels, and effective redress options for content removal decisions.
Ofcom may issue notices of contravention, levy fines of up to £18 million or 10% of global revenue (whichever is greater), and impose business disruption measures such as service and access restriction orders.
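The “whichever is greater” fine ceiling operates as in the brief sketch below; the revenue figure is a hypothetical assumption.

```python
# Illustrative OSA fine ceiling: the greater of GBP 18 million or 10% of global revenue.

def osa_fine_cap(global_revenue_gbp: float) -> float:
    return max(18_000_000, 0.10 * global_revenue_gbp)

# Hypothetical provider with GBP 2 billion in worldwide revenue
print(f"GBP {osa_fine_cap(2_000_000_000):,.0f}")  # GBP 200,000,000
```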
In July 2025, Ofcom will publish its register of categorized services (Category 1, 2A, or 2B services). Services will have the opportunity to respond to consultations about the additional duties for categorized services and will need to prepare to comply with those duties as they come into force.
Services that are likely to be accessed by children will face a deadline in July 2025, when the Protection of Children Codes of Practice come into force. Likewise, services that allow user-generated pornographic content must have age verification and age assurance systems in place by July 2025 to prevent underage access.
Crime and Policing Bill
Relatedly, the UK is advancing the omnibus Crime and Policing Bill, which was introduced in February 2025. It seeks to equip law enforcement and the criminal justice system with enhanced powers and measures to tackle a broad range of crimes and antisocial behaviors. Parts of the bill relating to content regulation include:
- Provisions tackling knife crime and violence against women and children. These provisions require online platforms, including search services and social media, to appoint a content manager responsible for removing illegal content related to knives and offensive weapons, within seven days of receiving a notice requiring that appointment.
- Provisions expanding criminal and civil liability to corporations and partnerships, as well as their senior managers and content managers, in certain situations. Content managers must remove identified content within 48 hours of a police-issued “content removal notice”. Failure to comply can result in civil penalties of up to £60,000 for the company and, where the notice is addressed to a service provider’s content manager, a personal civil penalty of up to £10,000 for that manager.
This bill is currently in the UK Parliament’s upper house (the House of Lords), where it is undergoing its Second Reading (a mid-point stage in the bill’s development).
The Digital Markets, Competition and Consumers Act (DMCCA)
Effective January 2025, the DMCCA introduces a new competition and consumer protection regime focused on large digital firms deemed to have strategic market status. The law grants the Competition and Markets Authority (CMA) and its Digital Markets Unit enhanced powers to regulate digital players and prevent alleged anti-competitive conduct.
A key innovation of the DMCCA is the creation of a direct enforcement mechanism for consumer protection. The CMA can now act without recourse to the courts, enabling swifter intervention against unfair commercial practices. This mechanism significantly strengthens the UK’s consumer protection architecture. Relevant to content regulation and digital market governance, the DMCCA prohibits false or misleading information and unfair trading or commercial practices. Further, future subscription-specific provisions are expected to take effect from Spring 2026. These provisions will require clear pre-contract disclosures, renewal reminders, and simple, accessible cancellation mechanisms.
The Secretary of State may update the list of banned commercial practices that allegedly manipulate consumer decisions through interface design.
The CMA may impose fines of up to 10% of a company’s annual global turnover and issue enforcement notices, directions, and compensation orders. These expanded powers align consumer law enforcement with the CMA’s competition toolkit. They are part of a broader UK strategy to hold powerful digital platforms accountable for market and consumer harms.
3. U.S. Content Regulation: State And Federal Trends
Online content governance in the United States continues to be shaped by a combination of federal agency enforcement and state-level innovation. This combination creates overlapping and sometimes conflicting mandates at different levels of government. These initiatives must be assessed against the background of Section 230, which provides platforms with immunity from liability for third-party content, and the Stored Communications Act, which governs access to electronic communications and stored data.
Key developments in 2024 and 2025 include:
Congressional and Executive Actions Targeting Foreign Speech Moderation: Congress and the U.S. Department of State have recently initiated a coordinated effort challenging foreign actors who are deemed to have “censored” or “suppressed” speech online. For example:
- On May 28, 2025, the U.S. State Department announced a new visa restriction policy targeting foreign officials involved in “censorship” of American speech abroad. This includes demands “that American tech platforms adopt global content moderation policies or engage in censorship activity that reached beyond their authority and into the United States.”
- Congress has launched hearings and inquiries into what it perceives as foreign influence over U.S.-based online speech, with many legislators arguing that some governments seek to silence dissident voices through pressure on global platforms.
These initiatives coincide with growing calls for transparency in how foreign laws and mandates pertaining to online content are applied to U.S. users on global platforms. (For more, see “Transatlantic AI Governance – Strategic Implications for U.S.-EU Compliance”).
FTC Inquiry into Content Moderation Practices: In February 2025, the FTC launched a public inquiry into alleged content-based discrimination by major tech platforms. The inquiry is exploring whether companies allegedly:
- Engage in “shadow-banning” or algorithmic suppression of certain viewpoints;
- Apply content moderation policies inconsistently or discriminatorily;
- Use deceptive or unfair practices that may violate consumer protection laws.
This inquiry reflects the FTC’s expanding view of its jurisdiction over digital speech governance, particularly where content decisions intersect with algorithmic design and platform monetization.
State-Level Legislative Trends: State governments continue to drive new content-related laws and regulations, including legislative schemes addressing privacy, safety, and speech:
- Consumer Data Privacy Laws: States such as California, Texas, Virginia, Colorado, and Oregon have enacted statutes granting individuals rights over their personal data, including rights to access, delete, correct, and restrict data use. These laws often include obligations for businesses to conduct data protection assessments and provide transparency on data usage, especially for targeted advertising and profiling.
- Child and Minor Online Safety: Current and pending legislation in states like California, Florida, New York, and Arkansas requires age-appropriate content design, bans addictive features for minors, mandates parental consent for data processing, and restricts the profiling or location tracking of children. These laws often empower state attorneys general with enforcement authority.
- Age Verification for Harmful Material: Several states, including Texas, Montana, Louisiana, and Kentucky, have adopted statutes requiring online publishers of content deemed harmful to minors to implement age verification mechanisms.
- Restrictions on Government Interaction with Platforms: Florida enacted legislation barring state agencies from requesting or coordinating content removals with social media platforms, except under narrowly defined exceptions. This move reflects a broader trend of embedding speech protections into state law.
- Sectoral and Institutional Exemptions: Many of these state laws, including those of Florida, Texas, and Connecticut, include carve-outs for specific sectors regulated under federal statutes, such as HIPAA (healthcare), GLBA (financial institutions), and FERPA (educational data). They also frequently exclude nonprofit organizations, government entities, and certain academic or research activities.
Ongoing Judicial Scrutiny and Constitutional Challenges: Courts continue to grapple with the constitutional boundaries of platform regulation, particularly concerning the First Amendment and the evolving interpretation of platforms as public forums or common carriers. These cases may influence how much the government (state or federal) can compel or restrict content moderation decisions. Major cases include:
- Murthy v. Missouri (originally filed as Missouri v. Biden; vacated and remanded by the U.S. Supreme Court): This case addresses whether government requests to platforms constitute state action. In June 2024, the Supreme Court held that plaintiffs lacked Article III standing, as they failed to establish causation and a substantial risk of future injury. The Court thus vacated the Fifth Circuit injunction prohibiting the federal government from meeting with platforms and influencing their moderation policies. The matter is currently back in the Western District of Louisiana, where it could address what measures the federal government can take to restrict certain content on platforms.
- NetChoice v. Paxton & Moody v. NetChoice (remanded by U.S. Supreme Court to the Fifth and Eleventh Circuits, respectively): The plaintiff in these companion cases challenges Florida and Texas laws that constrain platforms’ content moderation abilities. At issue is the tension between such laws and platforms’ First Amendment rights. The Supreme Court clarified the proper test for facial First Amendment challenges to the state laws and required the lower courts to resolve remaining factual and constitutional questions. The Court also noted that Texas and Florida cannot impose their content preferences on private entities. How the lower courts resolve the remaining questions could have sweeping implications for the regulation of online speech.
- Computer & Communications Industry Association v. Uthmeier (Northern District of Florida; appeal pending in Eleventh Circuit): Plaintiffs in this case challenge a Florida law limiting minors’ access to social media platforms. On June 3, 2025, Chief Judge Walker preliminarily enjoined the law’s enforcement. On appeal to the Eleventh Circuit, Florida seeks to stay this injunction.
- NetChoice v. Reyes (Tenth Circuit): The plaintiff in this case challenges Utah’s statutory restrictions on minors’ use of social media, raising parallel First Amendment concerns to those at issue in CCIA v. Uthmeier. The case is in an early procedural posture, with briefing pending before the Tenth Circuit.
As these cases proceed through their respective circuits and potentially back to the Supreme Court, they may clarify whether and when government actions transform private platform moderation into state action, as well as mark the permissible limits of government regulation of online speech.
4. Global Content Regulation Outside The U.S., UK, And EU
Governments across the Asia-Pacific, Latin America, Africa, and the Middle East are increasingly implementing digital content regulation laws aimed at reducing online harms, improving transparency, and asserting greater national oversight of online platforms. These efforts vary in scope and intensity but often include significant enforcement powers and extraterritorial provisions.
Notable developments include:
Australia: The eSafety Commissioner enforces mandatory takedown obligations and basic safety expectations under a comprehensive regulatory framework. Enforcement tools include removal notices addressing cyberbullying, image-based abuse, and other harmful content. Australia is also exploring reforms addressing generative AI risks and online misinformation.
Canada: The government has introduced multiple pieces of legislation, including the Online Harms Act and updates to its Digital Charter Implementation Act, which focus on platform accountability for harmful content, misinformation, and child safety.
India: The forthcoming Digital India Act will replace the 2000 Information Technology Act. It is expected to modernize intermediary liability rules, impose stronger due diligence obligations on platforms, and establish more formal grievance redressal mechanisms. Interim guidelines from 2021 already mandate compliance officers and prompt action on takedown requests.
Japan: Japan is refining its data and platform governance laws through updates to its Act on the Protection of Personal Information (APPI) and new proposals targeting algorithmic accountability and transparency.
Malaysia: The Online Safety Act, passed in December 2024, is expected to come into force in 2025. The Act distinguishes between "harmful content" and more heavily regulated "priority harmful content." Providers must implement safety plans, user-reporting tools, and other compliance mechanisms.
Singapore: The Protection from Online Falsehoods and Manipulation Act (POFMA), enacted in 2019, empowers the government to issue correction and takedown orders for content deemed false and threatening to public trust or security. POFMA applies extraterritorially, affecting foreign platforms with a user base in Singapore.
South Korea: Recent reforms under the Basic Act on the Development of Artificial Intelligence and Creation of a Trust Base require disclosure of recommender system logic and safeguards against purported algorithmic bias. Additional rules govern consumer protection in digital advertising and e-commerce.
Middle East: Regulatory efforts remain fragmented but increasingly assertive. In the UAE, legacy legislation, such as Federal Law No. 15 of 1980 on Publications and Publishing, continues to regulate digital content, prohibiting speech on sensitive topics including religion, politics, and national security.
These developments illustrate growing convergence toward regulatory frameworks that prioritize digital accountability, even amid divergent political systems and legal traditions. Companies operating globally must navigate not only divergent legal standards but also significant geopolitical sensitivities and enforcement asymmetries.
5. Advanced Strategies For Global Content Compliance
Regulatory divergence, geopolitical scrutiny, and evolving enforcement strategies require a strategic approach. Legal teams must aim not only to operationalize compliance but also to anticipate enforcement exposure, manage cross-border legal risk, and shape regulatory outcomes. Many of these functions necessitate support from specialized outside counsel. Key strategic priorities include:
- Global Regulatory Risk Mapping and Conflict Resolution
- Conduct comparative legal analyses to identify areas of legal conflict between jurisdictions (e.g., EU transparency mandates vs. U.S. First Amendment protections, or Chinese cybersecurity laws vs. Western data access norms).
- Develop decision-making frameworks for resolving legal conflicts (e.g., between takedown obligations and speech protections).
- Engage outside counsel to map and evaluate exposure under divergent or extraterritorial laws, including penalties, injunctive risks, and diplomatic consequences.
- Board-Level and Enterprise Risk Governance Integration
- Ensure legal oversight of content-related decisions involving business-critical tradeoffs (e.g., geo-blocking content, excluding political actors, or adjusting recommender systems).
- Develop escalation frameworks where content decisions could trigger legal, PR, or national security concerns, especially in conflict zones or authoritarian jurisdictions.
- Documented Step Plan and Enforcement Preparedness
- Pre-position legal arguments and internal documentation for anticipated enforcement inquiries or audits (e.g., DSA Article 37 audit readiness or FTC Section 5 investigations).
- Develop legal advice addressing interpretation of opaque or ambiguous statutory terms (e.g., “systemic risk,” “algorithmic transparency,” or “manipulative design”).
- Conduct mock inspections or tabletop exercises, simulating regulator requests or dawn raids in high-risk jurisdictions.
- Cross-Border Investigations and Government Engagement Strategy
- Build privileged legal frameworks for managing cross-border investigations (e.g., between EU DSCs, the CMA, and the FTC) to ensure consistency and prevent regulatory whiplash.
- Leverage outside counsel with transnational experience to coordinate across agencies.
- Design and execute strategic engagement plans with regulators to influence guidance, seek safe harbor designations, or shape co-regulatory codes (e.g., EU DSA Codes of Conduct, UK Ofcom draft codes, India’s Digital India Act consultations).
- Regulatory Advocacy, Treaty Interpretation, and Diplomatic Risk
- Retain counsel to interpret jurisdictional reach under treaties, trade agreements, or mutual legal assistance treaties (MLATs).
- Analyze risks of retaliatory measures (e.g., digital service taxes, blocking orders, or foreign official sanctions) triggered by content decisions or compliance posture.
Amid intensifying digital scrutiny, compliance must become an embedded element of digital platform design, product governance, and content operations. Companies that proactively integrate transparency, user empowerment, and jurisdiction-specific legal obligations into their systems will be best positioned to navigate regulatory complexity, reduce enforcement risk, and build lasting trust with regulators and users across jurisdictions.