The European Data Protection Board's (EDPB) Opinion 28/2024 provides valuable insights into the intersection of artificial intelligence and data protection, particularly compliance with the EU General Data Protection Regulation (GDPR). The opinion addresses several questions regarding the processing of personal data during the development and deployment of AI models, emphasizing the importance of lawful, fair and transparent data handling practices. While the opinion is not a binding interpretation of the GDPR, it gives companies guidance on how to address some of the issues that arise when using AI in a GDPR-compliant manner.
The EDPB opinion does not focus on the territorial question of when GDPR applies. If in doubt, companies should first assess whether their use of AI generally falls within the scope of the GDPR as per Article 3 GDPR.
In the view of the EDPB, the GDPR regularly applies to AI models trained with personal data.
Different types of AI models are distinguished as follows:
These AI models typically do not contain directly isolatable or linked data sets, but rather parameters that represent probabilistic relationships between the data embedded in the model.
To deem such an AI model anonymous, supervisory authorities (SAs) must have adequate evidence that:
- Personal data from the training set cannot be extracted with reasonable means, and
- The model's output does not pertain to the individuals whose data was used for training.
To determine whether an AI model meets the conditions for anonymity, SAs should consider three elements:
The EDPB lists some elements that SAs may consider when evaluating a controller's claim of anonymity for an AI model:
The EDPB takes the view that if a supervisory authority cannot confirm effective anonymization measures from the documentation, it may conclude that the controller has not fulfilled its accountability obligations under Article 5(2) GDPR.
The EDPB emphasizes that the GDPR does not establish a hierarchy among the legal bases for processing listed in Article 6(1) GDPR. The key principles of Article 5 GDPR should guide the assessment of AI models:
A practicable alternative to obtaining consent is to rely on legitimate interest under Article 6(1)(f) GDPR. According to the EDPB, this requires the controller to conduct a thorough three-step assessment: identifying a legitimate interest, analyzing whether the processing is necessary to pursue it, and balancing that interest against the interests and fundamental rights of the data subjects. The EDPB refers to its 2024 guidance on legitimate interest and, in essence, restates preexisting conditions for relying on this legal basis, adding some examples.
Certain measures can limit the impact of the processing on data subjects and thereby allow the controller to rely on legitimate interest. Mitigating measures should be tailored to the AI model's specific circumstances and intended use. The EDPB stresses that mere compliance with legally required measures is not sufficient for this purpose; measures that are not legally required, or that go beyond the required scope, can however be taken into account in the balancing exercise. The EDPB lists a few examples of such measures:
In the final part of its opinion, the EDPB examines how unlawful data processing during the development phase affects the subsequent use of an AI model. It first reminds SAs of their responsibility to verify the lawfulness of processing during the initial development phase. It also lists the corrective powers of supervisory authorities, including fines, temporary limitations on processing, and orders to erase the parts of the dataset that were unlawfully processed or, in some cases, the whole dataset or the AI model itself. The EDPB then outlines three scenarios in which unlawful processing during development may impact the later deployment of an AI model: