Over the last few months, organisations have accelerated efforts to engage with the requirements of the EU AI Act as we fast approach the date for when rules relating to general purpose AI models (“GPAI models”) come into effect, 2 August 2025. The European Commission (“EC”) will be enforcing full compliance with all obligations for providers of GPAI models with fines from 2 August 2026, though GPAI models placed on the market before 2 August 2025 will have until 2 August 2027 to comply with GPAI-related obligations. These rules shall apply to a wide range of AI systems currently available on the market from global developers who will be caught by the EU AI Act’s extraterritorial scope, but there has been a nervousness regarding interpretation of the rules due to the delay of the publication of the GPAI Code of Practice that was planned for May 2025. On 10 July 2025, the EC finally published the Code of Practice on GPAI alongside a Q&A and followed up with the guidelines on GPAI obligations on 18 July, both of which are discussed in this article.
With respect to key UK developments, this article discusses among other things, the passing of the Data (Use and Access) Act 2025 (“DUA Act”), a significant piece of data reform legislation that covers a range of topics from automated decision making to smart data schemes and of course updates to the UK’s data privacy regime. Relevant to this article is the important debate that dominated the final period leading up to the passing of the DUA Act – concerning whether the new law would set out any rules for developers of AI models with respect to their use of copyright protected works for training purposes. This Round-up also provides an update on the timing and content of a proposed UK AI Bill.
This third edition of the EU & UK AI Round-up of 2025 examines the following topics:
- EU Sets Out AI Continent Action Plan
- EDPB Publishes Report on AI Privacy Risks & LLMs
- ETSI Launches AI Specification
- EUIPO Publishes Study on GenAI & Copyright
- CNIL Issues Recommendations on GDPR & AI Development
- European Commission Publishes GPAI Code of Practice & Guidelines on GPAI Obligations
- UK ICO Launches AI & Biometrics Strategy
- UK AI Bill Reportedly Delayed
- UK Government Rejects AI Rules in DUA Act
EU Sets Out AI Continent Action Plan
The AI Office (the overarching body overseeing enforcement of the EU AI Act) published its AI Continent Action Plan on 9 April 2025 (“Plan”). In this Plan, the EC puts forward how it aims to be at the forefront of AI regulation whilst encouraging AI innovation to turn the EU into a ‘leading AI continent’. In the Plan, the EC states that the EU must “accelerate and intensify” in five key domains:
- Computing infrastructure. In addition to the 13 existing ‘AI Factories’ across 17 EU Member States and the move towards more complex AI models and even Artificial General Intelligence or ‘AGI’ as it is sometimes called, the EC sets out plans to also launch up to five ‘AI Gigafactories’, costing €20bn in investment and which will be capable of an “unprecedented level of computing”, to ensure Europe can compete globally. The EC's call for interest in AI Gigafactories generated an unprecedented response, with 76 proposals across 60 sites in 16 Member States. Respondents outlined plans to acquire over 3 million state-of-the-art GPUs, far exceeding the EC’s expectations and demonstrating significant industry appetite for large-scale AI infrastructure investment.
The consultation, which closed on 20 June 2025, serves as a preliminary mapping exercise for potential candidates, with an official call planned for Q4 2025. The AI Gigafactories will build on the existing AI Factories initiative, this time with significantly greater computational power for developing frontier AI models.
- Data Union Strategy. There is a need to enhance interoperability and data availability across sectors for better training of models while ensuring there are safeguards for confidentiality, integrity and security of data. The EC suggests that one approach is to streamline existing data legislation to reduce the burden associated with achieving compliance for organisations in a complex legal and regulatory landscape (e.g., the EC intends to soon lay out plans to update the GDPR to make compliance more efficient and predictable for organisations).
- AI adoption. The EC wishes to target key industrial areas of untapped potential where AI can play a role and improve efficiencies – an approach labelled the ‘Apply AI Strategy’. The EC will organise dialogues with industry and public sector representatives to identify sector-specific AI-related deliverables and KPIs to inform its Apply AI Strategy (see point 5 below).
- Skills and talent. The Plan discusses the ‘AI in Education’ initiative which will seek to improve AI literacy in primary and secondary education. The EC also acknowledges the need to develop and enlarge the EU’s talent pool to keep up with the demand for AI-related expertise by re-skilling and upskilling the EU workforce in the use of AI.
- Regulatory simplification. The EC recognises the need to facilitate compliance as organisations not only grapple with a fast-developing technology, but an entirely new set of rules – the EU AI Act is a complex piece of legislation. The AI Office has set up an ‘AI Service Desk’ where stakeholders can ask for help and receive tailor-made answers. Additionally, the AI Office has launched a public consultation to inform its upcoming ‘Apply AI Strategy’, which closed on 4 June 2025. The consultation comprised questions on the challenges faced by organisations in EU AI Act compliance and areas of regulatory uncertainty which may be hindering the development and adoption of AI. It is unclear as yet whether the EC intends to make any modifications to the obligations in the EU AI Act – this should be closely monitored particularly against the backdrop of a political and economic landscape that has evolved and is continuously shifting, since the EU AI Act was first drafted and published.
EDPB Publishes Report on AI Privacy Risks & LLMS
The European Data Protection Board’s (“EDPB”) comprehensive report on mitigating privacy risks in Large Language Models (“LLMs”), published on 10 April 2025 (“Report”), provides practical guidance for organisations deploying LLMs that process personal data. The development of AI tools powered by LLMs – generally relying on vast amount of data, including personal data, as part of their training dataset – raises significant concerns from a data protection law standpoint. The Report analyses three real-world scenarios as case studies, which are (a) customer service chatbots, (b) educational monitoring tools, and (c) AI travel assistants. It highlights critical privacy challenges around data minimisation, transparency, and accuracy.
The Report is not just for privacy teams, it is a strategic playbook for any organisation deploying generative AI, particularly where it is providing or deploying an AI system in the EU market falling under the EU AI Act. It also serves as an illustrative guide and benchmark in relation to risk assessments for organisations with a global footprint, irrespective of whether they are subject to the GDPR.
Looking at the working example relating to an LLM based chatbot, the Report emphasises the need for understanding the data flows at the early design and development stages to assess at the outset the nature of the privacy risks, if any. Thereafter mapping data flows throughout the lifecycle of an AI system (e.g., by identifying the sources of data, categories of data recipients, storage and transfer locations, data retention) is crucial in mitigating any potential privacy risks. For example, as the chatbot processes user personal data, understanding where this data may be transferred to (e.g., to a third country with no adequacy decision from the EC) is critical to build appropriate safeguards into the contractual arrangements with vendors, such as cloud providers.
The key takeaways for organisations from the Report are:
- Risk assessments must go beyond surface-level checks and account for actual use cases. This exercise must be cross-functional to ensure a holistic assessment of all possible risks.
- AI governance starts as early as at an AI system design stage, and throughout the AI lifecycle, including procurement, implementation and updates.
- LLM ecosystems are complex: cloud providers, API users, internal development teams, and deployers all play a role. Data mapping is key to staying on top of a complex compliance legal and regulatory framework in Europe, including the GDPR and the EU AI Act.
ETSI Laucnhes AI Specification
The European Telecommunications Standards Institute (“ETSI”) on 23 April 2025 published, in collaboration with various European governments, including the UK government, the ETSI TS 104 223 AI specification, a technical standard for securing AI against ever-evolving cyber threats (“AI Specification”). The AI Specification encompasses 13 core principles across the entire AI-lifecycle, which it describes as comprising the following five phases: secure design, development, deployment, maintenance, and end-of-life. The AI Specification was developed by ETSI's Technical Committee on Securing Artificial Intelligence, comprising representatives from international organisations, government bodies, and cybersecurity experts. This cross-disciplinary collaboration was intended to ensure the requirements are both globally relevant and practically implementable. As such, ETSI positions the AI Specification as an “international benchmark” designed to have global applicability beyond Europe. It is also described by the UK’s National Cyber Security Centre as the “first global standard that sets minimum security requirements across the entire AI life cycle for all stakeholders in the AI supply chain”.
Unlike traditional software security standards, the AI Specification addresses AI-specific security challenges such as data poisoning (attacks that corrupt training datasets), model obfuscation (techniques that hide malicious functionality), and indirect prompt injection (exploiting AI system inputs).
The EU AI Act establishes a “presumption of conformity” mechanism under Article 40, whereby AI systems complying with “harmonised standards” are presumed to meet the EU AI Act's requirements. This creates strong incentives for AI system providers to follow harmonised standards, as compliance significantly simplifies demonstrating regulatory conformity. However, although the AI Specification is not currently a formal harmonised standard under the EU AI Act, it is a foundational cybersecurity baseline that could inform or complement future harmonised standards development. It is certainly aligned with the EU AI Act in a couple of ways: (i) its comprehensive coverage of the entire AI value chain mirrors the EU AI Act’s broad scope of obligations on deployers, distributors, deployers and other actors in the value chain; and (ii) its focus on addressing security risks reflects the EU AI Act’s risk-based approach.
While adherence to the AI Specification will not mean automatic conformity per se, such efforts can form part of a wider compliance strategy and demonstrate good practice with regard to AI governance. See also the section below titled: European Commission Publishes GPAI Code of Practice and Guidelines on GPAI Obligations.
EUIPO Publishes Study on GENAI & Copyright
The EU Intellectual Property Office (“EUIPO”) on 12 May 2025 published a 400-plus page study titled “The development of Generative Artificial Intelligence from a Copyright perspective” (“Study”). This examines the intersection between generative AI and EU copyright law across three core areas: training data usage, content generation, and broader ecosystem implications.
The Study discusses the following findings:
- Rights Reservation Challenges. The Study identifies significant fragmentation in opt-out mechanisms under the Copyright in the Digital Single Market Directive (“CDSM Directive”).
- Emerging Market Dynamics. A new licensing market is forming for copyright-protected content in AI training, creating potential revenue streams for creators and driving demand for legally cleared training data.
- Technical Solutions. AI developers are adopting various risk mitigation approaches including content comparison tools, output filters, differential privacy, and "model unlearning" techniques to reduce copyright infringement risks.
- Legal Uncertainty. The Study highlights high legal uncertainty around transparency and detectability requirements under the EU AI Act, particularly regarding how outputs should be marked as artificially generated in practice.
Regarding the challenges pertaining to different rights reservation models, the Study identifies that while the CDSM Directive allows creators to opt out of text and data mining by AI developers, there is currently no standard mechanism, legal or technical, that effectively facilitates or enforces these opt-outs. Various tools have been developed, including Robots Exclusion Protocol (REP) and content authentication standards (like C2PA, a technology that records digitally signed information about the provenance of data). However, these solutions are fragmented, often difficult to implement, and lack enforceability. Critically, none of the available tools can independently prevent AI developers from scraping or using copyrighted material without proper authorisation.
In its conclusion, the Study calls for public authorities to establish federated rights reservation databases for structured opt-out management, common standards for transparency (i.e., what details are required to be disclosed) in AI inputs and outputs, and licencing facilitation.
Overall, the Study seeks to establish important groundwork for balancing AI innovation with creator rights protection – an ongoing debate both in the UK (see below) and the EU – emphasising the urgent need for clearer frameworks and public infrastructure to support both technological development and rights enforcement.
CNIL Issies Recommendations on GDPR & AI Development
On 19 June 2025, France's supervisory authority (“CNIL”) issued two recommendations for developers on compliance with GDPR in the context of AI development, providing necessary clarity for the sector.
The first recommendation focuses on legal basis and the CNIL explicitly endorses legitimate interests under Article 6(1)(f) GDPR as the most practical legal basis for web scraping of personal data for AI development, moving beyond consent-based models. This is due to an acknowledgement of the challenges of obtaining consent from data subjects in this context.
Turning to the second recommendation, the CNIL recaps the obligations for controllers with respect to web-scraping and provides for certain mandatory conditions for lawful web-scraping, including that developers must honour technical refusal signals from websites (robots.txt, CAPTCHA), exclude unnecessary sensitive data, define precise collection criteria and immediately delete data identified as irrelevant despite such criteria. Further, prior to undertaking training, developers must have produced a ready-to-implement mitigation plan, in addition to the typical requirement to undertake and document a legitimate interest assessment. The CNIL further sets out some recommended actions that would demonstrate compliance with GDPR, including by establishing a list of excluded sites for web-scraping purposes, developing transparent objection mechanisms to allow individuals to maintain control over their data and refuse collection but also be able to exercise their data subject rights requests, such as their right to object to the processing of their personal data.
Interestingly, the CNIL’s affirmation of legitimate interests as a lawful basis is a stronger stance than that taken by the UK ICO, which stated in 2023 that legitimate interests as a lawful basis may be sufficient to justify AI training – though as acknowledged above, AI model training has moved on in sophistication and prevalence since then, so we may soon see updated guidance from the UK Information Commission to maintain pace with developments in the industry.
European Commission Publishes GPAI Code of Practice and Guidelines on GPAI Obligations
On 10 July 2025, the EC published a Code of Practice for GPAI models (“Code”). The Code represents a voluntary tool prepared by independent experts designed to help industry members comply with the EU AI Act’s rules relating to GPAI models.
From 2 August 2025 onwards, providers placing GPAI models on the market must comply with their respective EU AI Act obligations. Providers must also promptly notify the AI Office of GPAI models that contain systemic risk to be placed on the EU market. In the first year from 2 August 2025 onwards, the AI Office will offer to collaborate closely with providers who adhere to the Code to ensure that models can continue to be placed on the EU market without delay. If these providers fail to fully implement all commitments immediately after signing the Code, the AI Office will not consider them to have broken their commitments under the Code or reproach them for violating the EU AI Act. Instead, in such cases, the AI Office will consider them to have acted in good faith and will be ready to collaborate to find ways to ensure full compliance. However, from 2 August 2026 onwards, the EC will enforce full compliance with all obligations for providers of GPAI models including by issuing fines. Models placed on the market before 2 August 2025 must comply with the EU AI Act obligations by 2 August 2027.
As described in Article 56(6) of the EU AI Act, the Code will now be assessed by the EC’s AI Office and the AI Board who will publish their decision on its adequacy, before the Code is endorsed by Member States and implemented by the EC to become valid and operational. GPAI model providers who voluntarily sign up can show through their adherence to the Code that they comply with the EU AI Act. It is worth noting that as the Code is not yet valid and operational there might be further changes made thereto, which providers should closely monitor in deciding whether they will adhere to the Code and may wish to wait for the final version to be released before making a conclusive decision. The EC puts forward that adherence to the Code will reduce the administrative burden of GPAI model providers and give them better legal certainty than if they were to demonstrate compliance through other methods. In other words, adherence to the Code will help organisations demonstrate compliance with the EU AI Act whilst not being conclusive evidence of compliance. The AI Office will review the Code at least every two years, and it may propose a streamlined process for reviews and updates.
The Code is divided into three chapters: (1) Transparency; (2) Copyright; and (3) Safety and Security, with each chapter outlining corresponding measures/commitments that providers of GPAI models shall agree to adhere to. While the chapters on transparency and copyright claim to offer all providers of GPAI models a way to demonstrate compliance with their obligations under Article 53 of the EU AI Act, the chapter on safety and security is only relevant to those providers of the most advanced models with systemic risk under Article 55 of the EU AI Act.
- Transparency. The transparency chapter includes a commitment to put into place such measures as drawing up and keeping up-to-date model documentation, providing relevant information, and ensuring quality, integrity, and security of information. In addition, the transparency chapter is complemented by a Model Documentation Form which allows providers to easily document the information necessary to comply with EU AI Act obligations.
- Copyright. The copyright chapter contains commitments, such as an obligation to draw up, keep up-to-date and implement a copyright policy, reproduce and extract only lawfully accessed copyright-protected content when crawling the internet for training data, identify and comply with rights reservations when crawling the internet, mitigate the risk of copyright-infringing outputs, and designate a point of contact and enable the lodging of complaints.
- Safety and Security. The safety and security chapter includes 10 commitments that signatories of the Code who provide the most advanced AI models with systemic risk shall agree to adhere to. These commitments include the obligation to create, implement and update a Safety and Security Framework (“Framework”) to outline the systemic risk management processes and measures that signatories implement to ensure the systemic risks stemming from their models are acceptable. The chapter also includes commitments to systemic risk-identification, to engage in systemic risk analysis, to specify systemic risk acceptance criteria and determine whether the systemic risks stemming from providers’ GPAI model is acceptable, to implement appropriate safety mitigations and an adequate level of cybersecurity protection for models and their physical infrastructure along the entire model lifecycle. Additionally, signatories commit to reporting to the AI Office information about their model and their systemic risk assessment and mitigation processes and measures by creating a Safety and Security Model Report (“Model Reports”). Finally, signatories commit to defining clear responsibilities for managing systemic risks and allocating appropriate resources to those responsible with dealing with these risks alongside implementing measures and processes that allow for serious incident reporting.
In addition to the Code, the EC published non-binding guidelines for providers of GPAI models on 18 July 2025 (“Guidelines”), which the EC says: “will help stakeholders across the AI value chain innovate with clarity and confidence”. While the Code details specific measures that GPAI model providers may implement to comply with the EU AI Act, the Guidelines set out the EC’s interpretation of the rules applicable to GPAI models under the EU AI Act. The Guidelines address key questions that providers are grappling with in relation to, among other things, the point at which a GPAI model exhibits systemic risk, what it means to place a GPAI model on the market, and exemptions for certain providers of open-source GPAI models. On the latter, the licence to a GPAI model may be considered free and open source under the EU AI Act, and qualify for the exemptions, if and only if the licence contains all the necessary rights (i.e. rights to access, use, modify, and redistribute).
UK ICO Launches AI & Biometrics Strategy
Turning to the UK, the UK ICO published its first dedicated AI and Biometrics Strategy on 5 June 2025. This represents a major regulatory milestone, focusing on technology use cases where risks are concentrated but significant public benefit potential exists.
The strategy reflects the ICO's outcome-based regulatory philosophy, emphasising cooperative engagement with compliant organisations while reserving enforcement action for serious breaches. Information Commissioner, John Edwards stressed that public trust remains fundamental to successful AI adoption, requiring organisations to demonstrate responsible data handling practices.
The ICO has identified three high-risk areas requiring immediate regulatory attention. These areas represent sectors where AI and biometric technologies are already prevalent and can offer substantial benefits to everyday life but pose harm if misused:
- Foundation models development– the ICO will scrutinise developers of large-scale AI systems, particularly regarding personal data protection in training processes and compliance with lawful data processing requirements.
- Automated decision-making– special focus on AI use in recruitment processes and public services, with the ICO working with early adopters like the Department for Work and Pensions to establish best practices and regulatory expectations.
- Facial recognition technology– specific emphasis on police force usage rather than commercial applications, with planned audits and guidance on lawful, proportionate deployment following public concerns about privacy rights.
The ICO will develop a statutory code of practice on AI and automated decision-making by autumn 2025, providing legally binding standards for AI deployment, moving from voluntary guidance to mandatory compliance requirements.
The ICO’s strategy includes “enhanced industry engagement” through securing assurances from foundation model developers and conducting detailed scrutiny of major employers using automated decision-making systems. The ICO will also monitor emerging agentic AI systems with autonomous capabilities. Further the ICO's supply chain focus means both international developers and deployers face potential scrutiny, requiring careful attention to compliance frameworks when operating in the UK market. The emphasis on public trust indicates that demonstrating responsible data handling will be essential for maintaining market access and consumer confidence.
UK AI Bill Reportedly Delayed
The UK government has delayed planned legislation to regulate AI under the auspices of a “UK AI Bill” until the summer of 2026. Peter Kyle, Secretary of State for Science, Innovation and Technology reportedly intends to introduce a UK AI Bill after the next King’s speech, which is unlikely to be before May 2026. In a letter to MPs on 6 June 2025, the Secretary of State also committed to establishing a Parliamentary Working Group for issues relating to AI and copyright law.
Initially, the UK government’s plans for the UK AI Bill’s focus was limited to a narrow emphasis on regulating advanced AI models, such as LLMs that power chatbots that have been widely adopted by the public. The decision to delay the introduction of this legislation is reportedly due to the UK government’s desire to align AI regulation with the US, as well as a response to prominent voices from the creative sector having lobbied that AI regulation that did not also deal with challenges to copyright did not go far enough in addressing their concerns.
The delay comes in the aftermath of a prolonged debate in Parliament on the extent to which the DUA Act should include provisions dealing with AI and copyright. Since the DUA Act does not ultimately deal with AI and copyright as discussed below, government sources have indicated that the forthcoming UK AI Bill will be the vessel in which to tackle this extremely contentious area of AI regulation. It will be interesting to observe the scope of any new proposals as previously the UK government has suggested a narrow focus limited to regulating only the most advanced AI models.
UK Government Rejects AI Rules in DUA Act
After multiple attempts by successive governments over the last few years to bring about regulatory changes that would enhance and promote the use of data in the UK, a much-debated law has finally arrived in the form of the DUA Act, which received Royal Assent on 19 June 2025. This covers a wide range of topics encompassing both personal and non-personal data, and makes notable, albeit not necessarily substantive changes to the UK privacy regime.
Of most interest to many observers was the ‘ping pong’ between the Houses of Parliament on what the DUA Act should say, if anything, regarding the interaction between AI innovation and copyright rules eventually ended with the two sides agreeing on a compromise as set out below. The House of Lords wanted to ensure that AI developers would be required to disclose to copyright holders’ details of how their information was used to train AI models – a key demand of the creative sector. Regarding the DUA Act provisions themselves, the amendments by the House of Lords were ultimately rejected by the House of Commons on the basis the UK government wishes to first examine the outcome of the IPO’s Copyright and AI Consultation which closed on 25 February 2025 (“Copyright Consultation”) and that, in its view, the DUA Act is not the appropriate vehicle for addressing AI-related concerns. Instead, the DUA Act requires the UK government to, within nine months of Royal Assent:
- Prepare an economic impact assessment in relation to the four policy options described in the Copyright Consultation (section 135 of the DUA Act), which are summarised as follows:
- leaving copyright law unchanged (likely resulting in legal uncertainty and dispute);
- requiring express copyright licensing in all cases meaning AI models may only be trained on copyright works if developers have an express licence to do so;
- introducing a broad text and data mining (“TDM”) exception allowing data mining on copyright works including for commercial use with few restrictions;
- introducing a TDM exception for training AI models on copyright works but subject to copyright holders being able to reserve their rights; and
- Prepare a report on the use of copyright works in the development of AI systems that discusses proposals for, among other things, technical measures for controlling use of copyright works, disclosure requirements by AI developers and the granting of licences to AI developers to use copyright works (section 136 of the DUA Act).
The UK government must provide a progress statement on its efforts towards producing the above documents within six months of the DUA Act receiving Royal Assent, i.e. by the end of 2025.
Looking Ahead
The CNIL’s June 2025 guidance represents a significant milestone in European AI regulation, providing practical clarity on GDPR compliance for AI development. By explicitly endorsing legitimate interest as the primary legal basis for AI systems, the CNIL has established a pragmatic framework that balances innovation with privacy protection. Meanwhile, the EC continues advancing EU AI Act implementation through several key initiatives. The establishment of a 60-member AI Scientific Panel demonstrates EU's commitment to evidence-based regulation, with independent experts providing technical guidance on GPAI models, systemic risk assessment, and classification methodologies. This panel will play a crucial role in shaping how the most sophisticated AI systems are regulated across the EU.
The EC’s targeted consultation on high-risk AI systems, launched on 6 June 2025 and which ran until 18 July 2025, represented an opportunity for stakeholders to influence practical implementation. This consultation was particularly significant for life sciences and healthcare organisations developing AI-based medical devices, as the resulting guidelines will determine which AI systems fall under the EU AI Act's high-risk category. The guidelines are due by 2 February 2026 and are expected to provide essential clarity on compliance obligations that make up the bulk of the rules of the EU AI Act and before mandatory compliance begins on 2 August 2026.
In the UK, Peter Kyle's announcement of the partnership between the UK Regulatory Innovation Office and the Digital Regulation Cooperation Forum signals a commitment to streamlining the complex regulatory framework for fintech and digital firms. The development of AI-based regulatory tools, including a unified digital policy library, demonstrates the UK government's recognition that regulatory complexity itself can hinder innovation. The AI Opportunities Action Plan's progress (which began over six months ago), with new cross-government partnerships and funding for responsible AI trials, indicates the UK's continued emphasis on fostering innovation while maintaining appropriate safeguards. This approach contrasts with the EU's more prescriptive regulatory framework, positioning the UK as an alternative jurisdiction for AI development.
In the upcoming months, organisations should continue to monitor how the practical implementation of these regulatory frameworks continues to take form, consider engaging with ongoing consultations and get up to speed with recent regulatory guidance to maintain a competitive advantage in the evolving AI landscape.