Key Takeaways
- The European Commission published its Code of Practice for General-Purpose AI (GPAI) Models on July 10, 2025, after three draft versions and just weeks before the first AI Act obligations take effect. The Code has global reach, applying to GPAI providers whose outputs are used in the EU.
- The Code provides a voluntary but Commission-endorsed framework to help GPAI providers align with transparency, copyright, and systemic risk requirements under Articles 53 and 55 of the AI Act.
- Structured into chapters around transparency, copyright, and safety and security, the Code includes documentation tools and risk protocols. Despite streamlining, some prescriptive elements, like external evaluations and complaint-handling mechanisms, remain and have drawn industry concern.
- While the Code is voluntary, signatories will receive deference in enforcement matters; nonsignatories must demonstrate compliance independently.
- Key implementation tools, including the model training data disclosure template, are still pending. The Code takes effect on August 2, 2025; enforcement begins 12 months later for newly released models, while models already on the market have until August 2, 2027, to comply.
The EU’s “Voluntary” Code May Become a De Facto Compliance Standard
The EU released its delayed final Code of Practice for GPAI Models just weeks before the EU AI Act’s first set of legal obligations takes effect on August 2, 2025. Developed through a multi-stakeholder process and working group consultations, the Code is intended as a transitional mechanism to help GPAI providers comply with core AI Act requirements on transparency, copyright, and safety. Although signing the Code is voluntary, the Commission has made clear that adherence may ease compliance burdens, help developers demonstrate good-faith efforts as enforcement ramps up, and earn signatories more deference than nonsignatories in AI Act enforcement matters. As such, the terms of the Code, at least from the Commission’s perspective, may become a de facto compliance standard. Additional guidance is expected from the Commission in the coming weeks, including a long-awaited template for training data disclosures.
While the final version of the Code still needs formal EU approval, some leading GPAI developers have already agreed to sign it. At least one leading GPAI provider, however, has declined, citing legal uncertainty and concerns that the Code imposes obligations beyond those required under the AI Act; it warned that the Code could stifle innovation and hinder providers’ ability to build on frontier models.
What’s in the Code?
The Code is divided into three chapters:
- Transparency: This chapter supports compliance with Articles 53(1)(a) and (b) of the AI Act. It includes a model documentation form that captures key information about a model’s development, intended use, technical characteristics, limitations, and risk management procedures. Some documentation must be published to support downstream users, while other information, such as technical specifications and impact assessments, may be shared confidentially with regulators. Signatories benefit from the confidentiality protections under Article 78 of the AI Act when sharing sensitive business information.
- Copyright: Addressing the obligations in Article 53(1)(c), this chapter outlines how developers can implement copyright policies consistent with EU law, particularly the 2019 Copyright Directive. It emphasizes the need to identify and honor rightsholders’ text-and-data-mining opt-outs under Article 4(3) of that directive, avoid using unlawfully sourced data, and publicly disclose copyright policies. The chapter provides a framework for demonstrating responsible data practices.
- Safety and security: This chapter applies only to the small group of GPAI providers offering advanced models that meet the AI Act’s threshold for systemic risk and are therefore subject to the additional obligations of Article 55. Covered developers are expected to maintain risk assessment frameworks, document mitigation strategies, and submit a nonpublic model report to the AI Office. The aim is to foster proactive risk mitigation for high-impact models before potential harm materializes.
Benefits for Signatories and Open Questions for Others
In a press release, the Commission stated that signatories to the Code may benefit from a lighter compliance burden when interacting with the AI Office or national regulators, as the Code is intended to serve as a path to demonstrating compliance with the AI Act. Nonsignatories must still comply with the law but may face additional steps to prove it through alternative means. Future updates to the Code will be shared with signatories. In addition, the Commission has clarified that signatories may decline to sign specific parts of the Code, with the caveat that opting out of a section means forfeiting the signatory benefits for that section.
Criticisms of the Code From Industry and Other Stakeholders
While some prominent developers have agreed to the Code, industry sentiment remains divided. Earlier this month, more than 40 European companies called on the Commission to pause implementation, warning that the current approach could undermine the competitiveness of the EU’s AI ecosystem. In addition to these broader concerns, stakeholders have also flagged several specific issues:
- Omitted training data disclosure template: Article 53(1)(d) requires GPAI providers to publish summaries of their training data, but the Commission has not yet issued a standard template. Without it, providers may struggle to make public disclosures in a consistent and regulator-approved format.
- Formal endorsement still pending: The Code was finalized by the expert group but has not yet been formally endorsed by EU member states or the Commission. That endorsement is expected in August.
- Guidance for fine-tuned and open-source models: The Commission is expected to release further interpretive guidance on how the Code applies to derivative GPAI models, which will be critical for smaller developers and open-source projects to understand their obligations.
- Overbroad and vague definitions: Critics argue that the Code’s framing of systemic risk and its alignment with “high-risk” definitions under the AI Act are overly expansive and could sweep in systems that pose no genuine societal or safety risk, over-regulating low-risk tools and use cases and chilling innovation.
- Disproportionate burden on smaller developers: Some stakeholders have warned that the Code may become a de facto standard that imposes significant compliance costs, particularly on smaller developers. While large GPAI providers may have the resources to meet documentation, risk mitigation, and red-teaming expectations, smaller firms may struggle to do so. Because innovation often begins with disruptive startups, this burden may make startups wary of allowing output from their GPAI models to be used in the EU.
- Ongoing uncertainty around enforcement implementation: Despite the Code’s goal of harmonizing GPAI oversight across the EU, questions remain about how national regulators and the AI Office will interpret and implement its provisions in practice. Fragmented enforcement or inconsistent application could create further legal and operational uncertainty for providers.
Takeaways
With the August 2, 2025, deadline imminent, GPAI providers face obligations covering key technical documentation, copyright policies, training content summaries, and (for non-EU developers) the designation of EU representatives, along with risk assessment, risk-mitigation, and incident reporting obligations for models that pose systemic risk. The AI Act is thus becoming very concrete for developers. The Code is intended to function as the baseline for meeting certain of these obligations, potentially easing interactions with regulators and reducing legal and operational risk. In this manner, the Code is expected to play a central role in shaping the first stage of AI Act compliance.