Latest Wave of Obligations Under the EU AI Act Take Effect: Key Considerations

DLA Piper

[co-author: Liam Blackford]

The first comprehensive legal framework for artificial intelligence (AI), Regulation (EU) 2024/1689 (the EU AI Act), entered into force on August 1, 2024. Rather than taking immediate effect, the harmonized rules on AI under the Act have been staggered in application.

The first of the EU AI Act’s obligations took effect on February 2, 2025, prohibiting certain practices and uses of AI technology and solidifying the importance of AI literacy in organizations. Many other obligations, including the comprehensive compliance framework for high-risk AI systems, are scheduled to apply from August 2, 2026.

August 2, 2025 was another milestone, as it saw the entry into application of several of the EU AI Act's most critical foundational governance provisions. These provisions are essential for setting up the EU AI Act’s institutional and enforcement infrastructure and have a direct impact on businesses, particularly those involved in general-purpose AI (GPAI).

This alert summarizes key developments in the EU AI Act’s application that have taken effect from August 2, 2025.

AI Office and AI Board

Although already established by a European Commission decision dated January 24, 2024, the AI Office officially became operational on August 2, 2025.[1] This body, established within the Commission, plays a central role in the implementation and enforcement of the EU AI Act, particularly in relation to GPAI models.

Among other activities, the AI Office will collaborate with other EU and national authorities and industry stakeholders in their compliance and enforcement efforts, support the EU AI Act's consistent application across the bloc, and oversee systemic risks posed by GPAI models.

As of August 2, 2025, the AI Office is accompanied by the AI Board, a formal EU-level coordination body consisting of Member State representatives tasked with advising and assisting the Commission and the Member States to facilitate the consistent and pragmatic application of the EU AI Act.[2]

National market surveillance and notifying authorities

Under the EU AI Act, Member States must have designated their national competent authorities by August 2, 2025. These should consist of at least one market surveillance authority and at least one notifying authority.[3] These bodies are responsible for supervising compliance with the EU AI Act at the national level.

Member States are obligated to communicate these authorities to the Commission and make their contact details publicly available. Member States were also required to report the status of the financial and human resources of these authorities to the Commission by August 2, 2025, and must repeat this reporting every two years thereafter.

Scientific panel of independent experts

August 2, 2025 marked the commencement of the scientific panel of independent experts (established by Implementing Regulation (EU) 2025/454), though its work is subject to delay.[4] This panel is responsible for supporting the AI Office by providing scientific and technical advice, particularly in relation to systemic risks posed by GPAI models. It is also empowered to issue “qualified alerts” to the AI Office in cases where such risks are identified.[5] The Commission continues to seek independent experts to contribute to the efforts of the panel (with applications closing on September 14, 2025).

Obligations for providers of general-purpose AI models

As of August 2, 2025, providers of certain GPAI models (models which display significant generality and are capable of competently performing a wide range of distinct tasks)[6] are required to comply with several GPAI model-specific obligations under the EU AI Act (with a two-year grace period for GPAI models already on the market before this date).

These include:

  • Creating and maintaining technical documentation that can be used by the AI Office, national regulators, and downstream providers and deployers
  • Ensuring that policies are in place to require compliance with EU law on copyright, intellectual property, and other related rights, and
  • Developing and making available a detailed summary of the content used for training the model.[7]

These obligations apply whether or not the GPAI model qualifies as a GPAI model with systemic risk.

In addition to these standard obligations, providers of GPAI models with systemic risk (in particular, models with “high impact capabilities”) are required to comply with several more comprehensive requirements. A GPAI model will typically be presumed to have high-impact capabilities if the cumulative amount of computation used for its training, measured in floating point operations, is greater than 10^25.[8]

In such cases, providers are required to, among other things, implement risk management policies, conduct model evaluations, ensure adequate cybersecurity protection, and report serious incidents to the AI Office. Further breakdown of these requirements can be found in our previous alerts here and here.

To assist providers in complying with their obligations under the EU AI Act, the Commission released the GPAI Code of Practice. Among other guidance, the Code provides practical measures that organizations can take to comply with their obligations regarding transparency, copyright, and system safety. It was followed shortly after by the Commission’s guidelines on the scope of obligations for GPAI model providers, which address important questions and previous areas of uncertainty, such as when an AI model is considered general-purpose.

It is important to remember that while illustrative of the Commission’s interpretation, these resources are not legally binding and do not provide organizations with a presumption of conformity with the EU AI Act. They are nevertheless valuable assets in demonstrating compliance with the EU AI Act’s provisions.

Penalties

Of particular importance to organizations, August 2, 2025 also brought the EU AI Act’s penalty regime into effect. This means that competent authorities may impose administrative fines for noncompliance or insufficient compliance, with the applicable maximum in each case being whichever of the following amounts is higher:

  • Up to EUR35 million or 7 percent of global annual turnover for infringements relating to prohibited AI practices
  • Up to EUR15 million or 3 percent of global annual turnover for infringements of certain other obligations under the Act, and
  • Up to EUR7.5 million or 1 percent of global annual turnover for supplying incorrect, incomplete, or misleading information to public authorities.

While the majority of penalty provisions took effect, the EU AI Act carves out penalties applicable to providers of GPAI models and postpones these measures until August 2, 2026,[9] aligning them with the enforcement powers relating to GPAI models.[10]

There remains, however, uncertainty regarding how enforcement of these penalties will occur in practice. This is because the penalty regime under Article 99 only requires that Member States lay down rules on penalties and other enforcement measures (including warnings and nonmonetary penalties) by August 2, 2025.

Absent enforcement measures implemented at the national level, the EU AI Act does not clearly establish that regulators have the powers needed to enforce the penalties outlined in Article 99.

In fact, many investigatory and enforcement powers outlined in the EU AI Act do not begin to apply until August 2, 2026 (the default date for application of all other provisions), such as the provisions enabling enforcement of obligations for providers of GPAI models.[11]

Key considerations

Businesses may consider taking the following actions in light of the implementation of the EU AI Act’s obligations.

  • Assess whether your business develops or integrates GPAI models
  • Prepare documentation for compliance with transparency measures, as required
  • Establish internal compliance structures aligned with the AI Act’s governance framework
  • Monitor activities of the AI Office, the AI Board, and national authorities for guidance and enforcement trends

An evolving and complex picture

As we are still only in the EU AI Act's first years, EU Member States are at different stages of the journey to implement the Act. Further, when we look beyond the EU, AI regulation on a global basis is highly mixed. As AI technology advances, governments are grappling with how to regulate its development to maximize benefits while mitigating risks.

DLA Piper's AI Laws of the World guide provides a 2025 Q3 snapshot of AI laws and proposed regulations across more than 40 countries (including all 27 EU Member States), key legislative developments, regulations, proposed bills, and guidelines issued by governmental bodies. The guide shows that while there is enormous regional variation in regulatory approaches and attitudes, some common thematic concerns are shared.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© DLA Piper
