The European Commission published its General-Purpose AI Code of Practice (Code) on July 10, 2025, after a long drafting process that was originally intended to conclude in May 2025. The development of the code was facilitated by the European AI Office, and involved nearly 1,000 stakeholders, including academics, model developers, AI safety experts, representatives from EU Member States, and civil society organizations.
The Code establishes several measures that providers[1] of general-purpose AI (GPAI) models[2] can implement to help demonstrate compliance with the EU AI Act’s rules for GPAI, which will enter into application on August 2, 2025 (with a two-year grace period for GPAI models already on the market).
Critically, the Code is a voluntary tool and is not intended to impose new obligations or extend existing compliance requirements. The European Commission has sought to position the Code as an asset for providers of GPAI models in their compliance journey, clarifying the requirements and standardizing approaches among the provider community. It should be noted that, while the measures in the Code can be used to demonstrate compliance, adherence to the Code does not confer on organizations a legal presumption of conformity with the EU AI Act.
The Code is organized into three chapters, each covering a primary subject:
- Transparency
- Copyright
- Safety and security
Both the transparency and copyright chapters apply to all GPAI model providers. The safety and security chapter is only applicable to a smaller number of providers of advanced models that are subject to the EU AI Act’s rules on GPAI models that present a systemic risk.[3]
Key elements of the Code are summarized below.
Transparency
The transparency chapter directs providers toward compliance with their transparency obligations under Article 53(1)(a)-(b), Annex XI, and Annex XII of the EU AI Act. These set out minimum levels of information that should be prepared and made available to the AI Office, national competent authorities, and downstream providers.
Throughout this chapter, the Code emphasizes the importance of clear and accessible information about GPAI models and their risks. It also provides guidance on how critical details can be effectively documented in a way that can be easily understood by regulatory bodies and other organizations. This includes:
- Preparing and maintaining documentation about the model (eg, how it was trained and the sources of its data)
- Developing processes for sharing information with the AI Office, and
- Establishing controls that ensure the quality and integrity of information about the model.
The chapter also includes a “Model Documentation Form,” which provides a template of how (and what) key GPAI model information should be presented. Relevant information includes:
- Model properties
- Method of distribution and licensing
- Acceptable and intended uses
- Training processes
- Data used to train, test, and validate the model
- Computational resource requirements, and
- Energy consumption.
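For illustration only, the categories listed above could be captured in a structured record along the following lines. This is a hypothetical sketch: the field names and example values are illustrative assumptions, not the Code's official Model Documentation Form schema.

```python
import json

# Hypothetical sketch of a Model Documentation Form record.
# Field names and values are illustrative only; the Code's official
# template defines the authoritative structure and wording.
model_documentation = {
    "model_properties": {
        "name": "example-model-1",        # hypothetical model name
        "architecture": "transformer",
        "parameters": "7B",
    },
    "distribution_and_licensing": {
        "distribution_channels": ["API", "open weights"],
        "license": "custom",
    },
    "acceptable_and_intended_uses": ["text generation", "summarization"],
    "training_process": "Pre-training followed by supervised fine-tuning.",
    "training_test_validation_data": {
        "categories_of_sources": ["licensed datasets", "public web data"],
    },
    "computational_resources": "GPU cluster (details per the form)",
    "energy_consumption": "Estimated training energy in MWh",
}

# Serialize so the record can be shared with the AI Office or
# downstream providers in a machine-readable format.
record = json.dumps(model_documentation, indent=2)
print(record)
```

Keeping such a record in a structured, machine-readable format makes it easier to maintain the documentation over time and to share consistent information with regulators and downstream providers, as the transparency chapter encourages.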
The transparency chapter indirectly highlights that GPAI model providers may need to take additional steps to ensure compliance with applicable laws. For example, the Model Documentation Form requires only information on the categories of data sources used to train the model.
Several national copyright regulators, such as the French National Institute of Intellectual Property, have considered requiring additional information on the provenance of data sources, particularly where they are obtained through mass web-scraping techniques. It should not, therefore, be assumed that following the Code will ensure compliance with all applicable laws, even within the EU.
Copyright
The copyright chapter provides guidance for providers on the requirement under Article 53(1)(c) to establish a policy to comply with EU law on copyright and related intellectual property rights (eg, the EU Copyright Directive). This includes respecting rightsholders' ability to opt out of having their copyrighted works used for text and data mining or model training.
The chapter offers practical guidance intended to help providers develop and implement a robust copyright compliance policy, together with technical and operational controls to respect and protect the intellectual property rights of rightsholders. For example, the Code recommends that providers take measures to mitigate the risk of downstream AI systems using their GPAI models to generate outputs that may infringe the rights of others, such as including provisions in the provider's terms of use that prohibit use of the model for copyright-infringing purposes. It also recommends several technical measures that enable providers to identify and comply with rights reservations expressed in metadata or other machine-readable instructions attached to protected works.
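One widely used machine-readable mechanism for reserving rights against crawling is the robots.txt protocol. As a minimal sketch of how a data-collection pipeline might honor such an instruction before fetching a page, using Python's standard library (the user-agent name, robots.txt content, and URLs are hypothetical; real pipelines would also need to handle other reservation mechanisms the Code references, such as metadata):

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt content reserving part of a site from crawling.
robots_txt = """\
User-agent: *
Disallow: /private/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A crawler collecting training data could check each URL before fetching it.
print(parser.can_fetch("ExampleDataBot", "https://example.com/private/page"))  # False
print(parser.can_fetch("ExampleDataBot", "https://example.com/public/page"))   # True
```

In practice, `RobotFileParser.set_url()` and `read()` can fetch a live robots.txt file; the string-based `parse()` is used here so the sketch is self-contained.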
The chapter further recommends that providers designate a point of contact responsible for handling complaints from rightsholders and other stakeholders, which is expected to facilitate the enforcement of rightsholders' rights.
Safety and security
This chapter is the longest and most detailed chapter in the Code. It applies only to providers of GPAI models that pose a systemic risk and are therefore subject to additional obligations as set out in Article 55(1) of the EU AI Act.
The chapter is organized into several commitments that detail measures to ensure GPAI models with systemic risk are deployed safely and responsibly. It emphasizes managing systemic risks, ensuring compliance with the AI Act, and adopting state-of-the-art security practices.
For example, the Code requires that providers of GPAI models with systemic risk establish and implement a robust safety and security framework that defines processes and measures for identifying, assessing, and mitigating systemic risks. This includes establishing acceptable risk tolerance levels prior to deployment and requiring appropriate and proportionate mitigations where possible.
Throughout this chapter, the Code also highlights the importance of continuous monitoring, effective incident response processes, and collaboration among GPAI providers to enhance overall security. Despite earlier discussions about removing the independent external evaluation requirement, the final version of the Code makes it mandatory for providers of GPAI models with systemic risk to seek such external evaluation in most circumstances.
Key takeaways
As a guiding principle across all chapters, the Code emphasizes the importance of ethical AI development. Providers are encouraged to consider the ethical implications of their GPAI models and to ensure that their use aligns with societal values and norms.
The Code also highlights the importance of robust data governance practices and encourages providers to ensure that the data used to train, test, and validate their GPAI models is accurate.
Strong foundations in data privacy and cybersecurity continue to be encouraged, and organizations are reminded of the need to implement robust and holistic AI, data, and cybersecurity strategies that encompass both the EU AI Act and other relevant regulatory frameworks.
Next steps for the Code
The Code is now subject to review for adequacy by EU Member States and the European Commission. If deemed appropriate, the Code will be endorsed and GPAI model providers will be able to use it to help demonstrate compliance with the EU AI Act. It is expected that the Code will be approved via an implementing act, conferring general validity across the EU.
The Commission has clarified in the Q&A related to the Code that providers who do not fully implement all commitments immediately after signing will not be considered by the AI Office to have breached these commitments, nor will they be reproached for violating the AI Act in the first year from August 2, 2025. Instead, the AI Office will seek to collaborate closely – especially with providers adhering to the Code – to ensure that models can continue to be placed on the EU market without delays.
Several prominent organizations that are likely to be considered providers of GPAI models have already publicly stated their intention to sign onto the Code and follow its measures as part of their AI governance program.
To ensure that the Code remains aligned with technological developments, including changes to the risk landscape or application of the EU AI Act, the AI Office will conduct a review of the Code at least every two years.
The European Commission intends to complement the Code with additional guidelines for GPAI model providers that are expected to clarify several areas of uncertainty. These include how regulators will treat GPAI models that are “fine-tuned” by a third party, and the compute thresholds, measured in FLOP (floating-point operations), that would apply to such models after they are placed on the market or put into service.
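For context, the EU AI Act presumes a GPAI model poses systemic risk where the cumulative compute used for its training exceeds 10^25 floating-point operations (Article 51(2)). The following is a rough, illustrative sketch of how a provider might estimate training compute against that threshold; the widely cited ~6 × parameters × tokens heuristic and the example figures are assumptions for illustration, not a method prescribed by the Act or the Code.

```python
# Presumption threshold under Article 51(2) of the EU AI Act:
# cumulative training compute above 10^25 floating-point operations.
SYSTEMIC_RISK_THRESHOLD_FLOP = 10**25

def estimate_training_flop(parameters: float, training_tokens: float) -> float:
    """Rough estimate using the widely cited ~6 * N * D heuristic
    for dense transformer training (illustrative, not prescribed)."""
    return 6 * parameters * training_tokens

# Hypothetical example: a 70B-parameter model trained on 15 trillion tokens.
flop = estimate_training_flop(70e9, 15e12)
print(f"Estimated training compute: {flop:.2e} FLOP")
print("Presumed systemic risk:", flop >= SYSTEMIC_RISK_THRESHOLD_FLOP)  # False
```

In this hypothetical, the estimate (about 6.3 × 10^24 FLOP) falls below the presumption threshold; a larger model or more training data could cross it and trigger the additional obligations described in the safety and security chapter.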
[1] A “provider” of a GPAI model is a party that develops a GPAI model, or that has a GPAI model developed, and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge. EU AI Act, Art. 3(3).
[2] A GPAI model is an AI model that displays significant generality and is capable of competently performing a wide range of distinct tasks, regardless of the way the model is placed on the market, and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market. EU AI Act, Art. 3(63).
[3] A “systemic risk” is a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact in the EU due to its reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or society as a whole, that can be propagated at scale. EU AI Act, Art. 3(65).