| Topic | Original Working Paper | Final Guidelines | Notable Changes/Clarification |
| --- | --- | --- | --- |
| 1. Definition of General-Purpose AI Model | Relies on Article 3(63) AI Act and recitals; proposes a training compute threshold (e.g., 10²² floating-point operations, or FLOP) as a proxy for generality and capability, with some flexibility and examples; proposes that the compute threshold and the ability to generate text/images establish a rebuttable presumption of GPAI classification. | Adopts a higher indicative threshold (10²³ FLOP) and clarifies that if, in addition, the model can generate language, text-to-image or text-to-video, these two factors together will be “an indicative criterion” that it is a GPAI model. Provides more detailed examples and rationale for the threshold. | Threshold raised and made more precise; the modalities establishing generality clarified; examples expanded and refined. No longer any mention of (legal) presumptions. See the first sketch following this table. |
| 2. Model Lifecycle and Distinct Models | Focuses on the large pre-training run as the start of a model’s lifecycle. Modifications by the same entity could create a new model, and fine-tuning is cited as a form of modification. | Adds detail to the concept of ‘lifecycle’ for GPAI models: any subsequent development of the large pre-training run by the original provider (or on their behalf), whether before or after the model is placed on the market (including fine-tuning for specialisation), is part of the same model’s lifecycle and never creates a new model. The compute threshold for creating new models applies only to downstream actors. | Fine-tuning by the original provider is now included among the adaptations that will not result in a “new” model. |
| 3. Downstream Modifiers | Threshold-based approach: if a downstream modifier uses more than a third of the original model’s training compute, they become the provider of a new model. | The same threshold applies, but the guidelines are explicit that it applies only to downstream actors, not the original provider. Obligations for downstream modifiers as providers are limited to the modification. | Clarifies scope: the threshold applies only to downstream actors, not original developers. If a downstream modifier meets the compute threshold, this is an “indicative criterion” that the modifier should be classified as a provider (see the second sketch following this table). Original providers will need to assess what additional contractual obligations to add to licence agreements relating to modifications, e.g., to surface performance information. |
| 4. Notification Obligation under Article 52(1) | Not addressed in detail in the Working Paper. | Explains the operation of the conditions and presumption for classifying a model as GPAI with systemic risk under Article 51(1) and (2). Is explicit that notification may be required before training is complete (and therefore before the model is placed on the market). Expands on the Commission’s position on the process by which providers may contest classification as GPAI with systemic risk and clarifies that the Rules of Procedure of the European Commission will apply to that process. The burden lies with the provider to rebut the presumption that a model has high-impact capabilities. | Expands information on the Commission’s position on the process by which a provider may contest classification as GPAI with systemic risk. In that context, the Commission will assess the extent to which the cumulative training compute exceeds the legal threshold, along with any other elements that influence achieved or expected capabilities. If the Commission rejects a provider’s arguments, the model is considered a GPAI model with systemic risk from the moment it meets the statutory conditions. |
| 5. Placing on the Market | Provides examples of placing on the market (e.g., via API, repository, cloud, or integration into products), consistent with the Act’s recitals, but does not consistently specify the Union (European) market in each example. | Every example is explicitly framed as “on the Union market” or “on the European market,” aligning with the AI Act’s territorial scope. Clarifies that obligations are triggered by first making the model available on the Union market. | The final guidelines make all examples EU-specific, reinforcing the territorial scope of obligations. Although the Q&A accompanying the GPAI Code of Practice states that the AI Office will clarify the application of obligations to providers during the development phase (i.e., before placing on the market), no significant explanation appears in the guidelines. |
| 6. Providers of GPAI Models | Sets out a number of scenarios in which an entity will be deemed the provider of a GPAI model, and how a downstream modifier can be deemed a provider, but only in relation to its modifications. | Expands the examples, in particular for GPAI models that are integrated into AI systems. Also addresses the scenario in which an upstream actor develops, or has developed, a GPAI model that is made available to a downstream provider outside the EU. | Recognises that if an upstream actor develops a model and clearly excludes its distribution in the EU, including via integration into AI systems intended for the EU market, the upstream actor may not be considered the model provider; the downstream actor will be considered the provider instead. |
| 7. Open-Source Exemptions | | Expands and clarifies the conditions: more detail on what constitutes “free and open-source,” what counts as monetisation, and what must be made publicly available. Provides more examples of qualifying and disqualifying licences. | More detailed and practical guidance on the open-source exemption and monetisation (which activities are and are not considered monetisation). Provides that licensors may include specific, safety-oriented terms that reasonably restrict uses that would pose a significant risk to public safety, security or fundamental rights. |
| 8. Transitional Rules and Retroactive Compliance | Sets out that models placed on the market before 2 August 2025 must comply by 2 August 2027; recognises the challenges involved and indicates that the AI Office will play a collaborative role. | Reiterates the transitional period and clarifies that retraining/unlearning of GPAI models already placed on the market is not required where it is not feasible; requires disclosure and justification where information is missing. | The final guidelines are more explicit about what is and is not required for legacy models, and clearer on disclosure obligations. |
| 9. Code of Practice and Demonstrating Compliance | Adherence to a code of practice is a straightforward way to demonstrate compliance; non-signatories must show alternative means of compliance. | Reiterates this and adds that, for providers that adhere to the Code, the Commission will focus enforcement on monitoring their adherence to it; such adherence may also be a mitigating factor when fines are set. Providers that do not adhere to the Code will be expected to demonstrate how they comply with their obligations under the AI Act and will have to report the measures they have implemented to the AI Office; they may also be subject to more requests for information under the AI Act. | Enforcement and compliance incentives for Code of Practice adherence are more clearly articulated, as are the consequences of non-adherence (e.g., more interactions with the AI Office and more requests from it for information). The legal basis on which non-adhering providers are expected to provide information to the AI Office (other than upon request) is not specified. |
| 10. Supervision and Enforcement | Describes the AI Office’s powers and indicates that the regulator will expect providers to adopt a collaborative approach; enforcement powers start on 2 August 2026. | Expands on the collaborative, staged and proportionate approach; details proactive reporting (without always linking it to a legal obligation to report), confidentiality and the scope of enforcement powers. Elaborates on the requirement to report “serious incidents” and on what constitutes a serious incident that must be reported under Article 55 of the AI Act (applicable to providers of GPAI with systemic risk). States that the Commission can take decisions under Article 52 of the AI Act (accepting or rejecting the designation of a GPAI model as having systemic risk); see item 4 above. | More detail on enforcement philosophy, procedures and confidentiality. |
| 11. Estimation of Training Compute | Proposes both hardware-based and architecture-based approaches; allows approximations; seeks to explain what, when and how compute is to be calculated. Provides example calculations. | Consolidates information on what should be estimated, and how, in an annex. Retains both approaches to calculating compute (hardware-based and architecture-based); requires accuracy within a 30% error margin; provides more detailed instructions and examples, including for synthetic data and model merging. Clarifies the AI Office’s approach to what constitutes “training data” and lists examples of activities that should not be included (e.g., compute used to generate synthetic data that is made publicly accessible). Also clarifies when a model based on a “dense transformer architecture” should be considered “large”. | A more streamlined approach with detailed estimation requirements; the error margin is specified, with examples of activities that need not be taken into account when calculating training compute. More examples of the models the AI Office investigated to inform its decision to set 10²³ FLOP as the indicative threshold for qualifying a model as GPAI. See the third sketch following this table. |
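
To make the two-factor indicative criterion in item 1 concrete, here is a minimal sketch in Python, assuming a simple representation of a model’s cumulative training compute and output modalities. The 10²³ FLOP figure comes from the final guidelines; every function and variable name is hypothetical, and this is an illustration of the arithmetic, not a legal test.

```python
# Minimal sketch of the indicative GPAI criterion in item 1: training compute
# above 10^23 FLOP combined with the ability to generate language,
# text-to-image or text-to-video. All names are illustrative.

GPAI_INDICATIVE_FLOP = 1e23  # indicative threshold in the final guidelines

GENERATIVE_MODALITIES = {"language", "text-to-image", "text-to-video"}

def meets_indicative_criterion(training_flop: float, modalities: set[str]) -> bool:
    """Both factors together are only an *indicative criterion* of GPAI
    status; the final guidelines no longer frame this as a presumption."""
    return training_flop > GPAI_INDICATIVE_FLOP and bool(modalities & GENERATIVE_MODALITIES)

# A 5e23 FLOP text generator meets the indicative criterion:
print(meets_indicative_criterion(5e23, {"language"}))        # True
# The same compute without a listed generative modality does not:
print(meets_indicative_criterion(5e23, {"classification"}))  # False
```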
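The one-third rule in item 3 is similarly simple arithmetic. A minimal sketch follows, assuming the relevant quantity is cumulative training compute measured in FLOP; names and numbers are illustrative assumptions, not the guidelines’ text.

```python
# Sketch of the downstream-modifier criterion in item 3: modification compute
# exceeding one third of the original model's training compute is an
# indicative criterion that the modifier becomes the provider of a new model.

def modifier_indicatively_a_provider(original_flop: float, modification_flop: float) -> bool:
    # Indicative criterion only, and only for downstream actors; the original
    # provider's own fine-tuning stays within the same model's lifecycle.
    return modification_flop > original_flop / 3

# Fine-tuning a 1e23 FLOP base model with 4e22 FLOP exceeds the one-third
# mark (1e23 / 3 is roughly 3.3e22), so the criterion is met:
print(modifier_indicatively_a_provider(1e23, 4e22))  # True
```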
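For item 11, the two estimation approaches can be illustrated with widely used published approximations. The exact formulas sit in the guidelines’ annex and may differ; in particular, the 6·N·D rule of thumb for dense transformers and the utilization factor below are assumptions made for this sketch.

```python
# Illustrative sketches of the two compute-estimation approaches in item 11.

def architecture_based_flop(n_params: float, n_tokens: float) -> float:
    """Common dense-transformer approximation: roughly 6 FLOP per
    parameter per training token (forward plus backward pass)."""
    return 6 * n_params * n_tokens

def hardware_based_flop(gpu_count: int, peak_flop_per_s: float,
                        train_seconds: float, utilization: float) -> float:
    """Cluster peak throughput scaled by achieved utilization."""
    return gpu_count * peak_flop_per_s * train_seconds * utilization

# A hypothetical 70e9-parameter model trained on 15e12 tokens:
est = architecture_based_flop(70e9, 15e12)
print(f"architecture-based: {est:.1e} FLOP")  # ~6.3e+24, well above 1e23

# Hypothetical cluster: 4,000 GPUs at 1e15 FLOP/s peak, 60 days, 40% utilization.
est_hw = hardware_based_flop(4000, 1e15, 60 * 24 * 3600, 0.4)
print(f"hardware-based: {est_hw:.1e} FLOP")   # same order of magnitude

# The guidelines require estimates to be accurate within a 30% error margin,
# so either approach is acceptable if it lands that close to the true value.
```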