No ‘Stop the Clock’ for the EU AI Act (and a Belated General-Purpose AI Code of Practice): What Does This Mean for You?

King & Spalding
The European Commission received the final version of the General-Purpose AI (GPAI) Code of Practice on July 10, 2025. The Code is a voluntary framework intended to guide how providers of large AI models comply with the forthcoming EU AI Act. Unveiled just three weeks before the Act’s new rules take effect on August 2, 2025, the Code addresses key issues including transparency, copyright, safety, and systemic risk mitigation in AI systems. Long overdue, it was developed by a group of 13 independent experts with input from over 1,000 stakeholders across industry, academia, and civil society.

The European Commission has also ruled out a ‘stop the clock’ pause on enforcing the EU AI Act. On August 2, 2025, the GPAI requirements will take effect, with no grace period or pause.

Background

The EU AI Act’s specific obligations for general-purpose AI models (such as large language models) will apply from August 2, 2025. These rules will become enforceable one year later for new AI models (August 2026) and two years later for existing models (August 2027).

The GPAI Code of Practice is a tool to help industry bridge the compliance gap over the coming two years. The Commission’s AI Office convened independent experts in a multi-stakeholder process to draft the Code. That process stretched into summer 2025 after months of negotiations (the Code was originally expected in May), reflecting intense debates and lobbying from various sides.

Industry groups had also pressed for a delay in the EU AI Act’s implementation and enforcement, a request the Commission firmly rejected, putting an end to weeks of speculation.

The Code of Practice is the first comprehensive attempt to interpret the Act’s requirements for GPAI in a practical, voluntary format, ahead of the law’s formal enforcement.

Key Provisions

The GPAI Code is divided into three chapters: Transparency, Copyright, and Safety and Security. All GPAI model providers are expected to comply with the first two, while the third applies only to a limited class of providers building the most advanced, high-impact AI models that could pose “systemic risks” if mismanaged.

Transparency: All providers that sign the Code must bolster transparency about their systems. This includes preparing detailed model documentation and publicly available summaries of the data used to train their models. In addition, providers are expected to document a model’s key characteristics, such as its intended uses, performance limits, and training data sources, and be ready to share additional information with regulators or downstream users upon request.

The Code also includes a comprehensive Model Documentation Form, which providers must complete with technical specifications, training data characteristics, computational resources, and energy consumption.

Open-source models are largely exempt from certain documentation duties unless they pose systemic risks.

Copyright: Signatories commit to respecting EU copyright laws throughout their AI development and deployment. The Code requires companies to implement policies ensuring they only use copyright-protected content for training or data mining when they have legal rights to do so.

Providers must take steps to mitigate the risk of infringing outputs, for example by filtering or post-processing model outputs to avoid copyright violations.

They are also forbidden from circumventing technical measures (like paywalls or text/image restrictions) designed to protect copyrighted works.

To further safeguard intellectual property, AI firms agreeing to the Code will need to set up channels for copyright holders to lodge complaints and assign staff to handle such issues. Web crawlers used for data collection are expected to exclude websites known for hosting pirated content, with the EU preparing an official list of such sites.

Safety & Systemic Risk Mitigation: Providers of high-impact AI models must implement robust risk management and security measures. They are expected to continually identify, monitor, and mitigate the systemic risks their AI might pose, such as risks to public safety, fundamental rights, or societal well-being. This entails putting in place frameworks to analyze possible high-level risks and updating those assessments as models evolve.

The Safety & Security chapter of the Code calls for measures like third-party external audits, stress-testing of models, post-market monitoring of model impacts, and cooperation with the new EU AI Office on oversight.

Takeaways

Even more important than the Code itself is the strong signal sent by the EU’s refusal to stop the clock: the EU is not prepared to let its simplification agenda or technical arguments be used to pause enforcement of a flagship regulation such as the EU AI Act. Sending any other signal would have been politically untenable.

The finalization of the GPAI Code of Practice marks a significant milestone. The Code is explicitly designed to balance AI innovation with fundamental rights protection, showing how AI developers can be guided toward best practices even before formal enforcement begins. It could streamline compliance with the AI Act across the industry, potentially reducing conflicts between regulators and AI firms by establishing a shared baseline of trust and accountability.

While adhering to the Code is voluntary, companies that don’t sign up will miss out on the legal certainty and reduced red tape that signatories enjoy under the AI Act’s compliance regime, including a rebuttable presumption of conformity.

Next steps

With the final text of the Code now published, formal endorsement by the European Commission and the EU Member States is the next step. That approval, expected by the end of 2025, is needed to make the Code operational. Once the Commission and national authorities give the green light, providers of general-purpose AI models will be invited to voluntarily sign on to the Code and commit to its guidelines.

Several major AI firms, including providers of large language models and generative AI systems, have already become early signatories in a bid to promote trust. The Code is not universally endorsed, however.

In parallel, the European Commission published further guidelines on July 18, 2025 that aim to provide legal certainty to actors across the AI value chain by clarifying key concepts (such as what constitutes a systemic risk) and requirements (for example, the types of modifications that make an actor a provider) relevant to general-purpose AI models. Further work from the EU AI Office will facilitate the Code’s rollout.

Regulators have also made it clear that the overall timeline will not shift: the European Commission has ruled out any postponement of the AI Act’s application deadlines. This means companies must use the remainder of the year to prepare.

Signatories of the Code effectively have until August 2026 to bring any new AI models into full compliance (and until August 2027 for existing models) before enforcement kicks in.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© King & Spalding
