EUROPE - Navigating the Interplay Between EU AI Act and Medical Device Regulations: Strategic Update for the Healthcare Sector

King & Spalding

New FAQs Available

Already highly regulated under a risk-based approach, AI-powered medical devices and in vitro diagnostic medical devices face new regulatory constraints stemming from the EU AI Act, a horizontal legal instrument combining safety law, product regulation, and fundamental human rights. The EU AI Act applies in phases, from February 2025 to 2027.

The European Artificial Intelligence Board (“AIB”) and the Medical Device Coordination Group (“MDCG”) have published “Frequently Asked Questions” (FAQs) clarifying how the EU AI Act, the Medical Devices Regulation (“MDR”), and the In Vitro Diagnostic Medical Devices Regulation (“IVDR”) intersect.

The FAQs address regulatory expectations for AI-powered medical technologies. There are currently 36 questions, but the FAQs are intended to be a dynamic document and may expand in the future.

Both the AIB and the MDCG are composed of representatives of all EU Member States and are chaired by representatives of the European Commission and of a Member State. The FAQs are therefore an important reference point, even though they are not binding on companies.

This alert shares some of the key takeaways from the FAQs (excluding the post-market monitoring elements covered by the FAQs).

Our lawyers assist organizations in analyzing the application of these two regulatory regimes, understanding their interactions, conducting gap assessments to determine how current documentation and filings can be repurposed to demonstrate compliance with the EU AI framework, and, more generally, addressing specific issues raised by AI and generative AI medical device software.

Key Takeaways from the FAQs: A Clarification on Simultaneous Application and Risk Categories and Practical Recommendations on How to Address Overlapping/Enhanced Requirements (FAQs No. 1 to 22).

Simultaneous Application of the EU AI Act and Sector-Specific Regulations (FAQs No. 1 to 4). – The FAQs confirm that a system which meets the definition of an AI system under the EU AI Act and qualifies as a medical device or an in vitro diagnostic device (“IVD”) under the MDR or the IVDR must comply with both the sector-specific legislation (i.e., MDR/IVDR) and the horizontal legislation (i.e., EU AI Act). This encompasses a wide range of pre- and post-market elements.

Medical devices powered by AI (“MDAI”) will be considered high-risk AI systems where: (i) the MDAI is a safety component of a medical device (e.g., AI powering an insulin pump) or the AI system itself is a medical device (e.g., AI diagnostic software), and (ii) the medical device is subject to a conformity assessment by a notified body in accordance with the MDR/IVDR.

The table below (reprinted from the FAQs) identifies the possible combinations of MDR/IVDR risk classes and EU AI Act risk categories. [Table not reproduced here.]

Note 10 to the table carves out certain non-invasive devices classified in accordance with the Guidance on qualification and classification of Annex XVI products - A guide for manufacturers and notified bodies.

The table links MDR and IVDR classifications above Class I (devices) and Class A non-sterile (IVDs) to the high-risk conditions under the EU AI Act. The converse, however, is not always true: although the classification of a medical device under the MDR/IVDR determines whether the AI system qualifies as high-risk, the classification of an AI system as high-risk does not always mean that the medical device or IVD falls into a higher risk class under the MDR/IVDR. Further, the regulatory constraints under the MDR/IVDR are burdensome, and regulatory arbitrage is not uncommon.
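For readers who find it helpful, the cumulative test described above can be expressed as a simple decision rule. The Python sketch below is our own illustrative simplification of the FAQs' two conditions; all field and function names are hypothetical, and it is no substitute for a case-by-case legal classification analysis.

```python
from dataclasses import dataclass

@dataclass
class MDAIProfile:
    """Hypothetical, simplified profile of an AI-powered medical device."""
    is_safety_component: bool   # condition (i), first limb: the AI is a safety
                                # component of a device (e.g., AI powering an insulin pump)
    is_device_itself: bool      # condition (i), second limb: the AI system is
                                # itself the device (e.g., AI diagnostic software)
    needs_notified_body: bool   # condition (ii): the device requires a notified-body
                                # conformity assessment under the MDR/IVDR

def is_high_risk_ai(profile: MDAIProfile) -> bool:
    """Both conditions must be met cumulatively for the MDAI to qualify
    as a high-risk AI system under the EU AI Act."""
    condition_i = profile.is_safety_component or profile.is_device_itself
    condition_ii = profile.needs_notified_body
    return condition_i and condition_ii

# Example: standalone AI diagnostic software subject to notified-body review
software = MDAIProfile(is_safety_component=False,
                       is_device_itself=True,
                       needs_notified_body=True)
assert is_high_risk_ai(software)  # high-risk AI system
```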

Enhanced Requirements (FAQs No. 5 to 22). – The FAQs provide details on how to operationalize overlapping obligations (or enhanced obligations arising from the application of the EU AI Act) in five key areas: (i) management systems; (ii) data governance; (iii) technical documentation; (iv) transparency and human oversight; and (v) accuracy, robustness, and cybersecurity. Details on those obligations are included in Appendix A to this alert.

Other Relevant Elements: New Criteria to Evaluate the Performance of MDAI, Practical Considerations for Enhanced Conformity Assessments, and Update on What Represents a Significant Change (FAQs No. 23 to 32)

The FAQs confirm that high-risk MDAIs must be supported by clinical evidence demonstrating the safety, performance, and, where applicable, clinical benefit of the device. This evidence is generated through a clinical investigation (medical devices) or a performance study (IVDs), which constitutes real-world testing under the EU AI Act. Validation also involves verifying that the high-risk MDAI does not infringe the fundamental rights of those concerned.

The relevant conformity assessment procedure stems from the classification of the MDAI under the MDR/IVDR and the AI Act. Most MDAIs are classified as Class IIa (MDR)/Class B (IVDR) or above, which means that a notified body will need to conduct a quality management system audit, a technical documentation review, and inspections to ensure compliance.

After an MDAI receives an initial conformity assessment, a new conformity assessment is needed only in the event of a substantial modification, regardless of whether the modified system is intended to be further distributed or continues to be used by the current deployer. Changes to high-risk MDAIs that have been pre-determined by the manufacturer, assessed at the time of the initial conformity assessment, and documented in the technical documentation do not constitute substantial modifications and therefore do not require new conformity assessments.
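As an illustrative sketch only (function and parameter names are hypothetical), the re-assessment trigger described above reduces to the following check:

```python
def needs_new_conformity_assessment(substantial_modification: bool,
                                    pre_determined_in_tech_docs: bool) -> bool:
    """A new conformity assessment is triggered only by a substantial
    modification. Changes pre-determined by the manufacturer, assessed at
    the initial conformity assessment, and included in the technical
    documentation are not substantial modifications."""
    if pre_determined_in_tech_docs:
        return False  # pre-planned change: no new assessment required
    return substantial_modification

# Example: an unplanned change that materially alters performance
assert needs_new_conformity_assessment(True, False)      # new assessment needed
assert not needs_new_conformity_assessment(True, True)   # pre-determined change
```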

Final Considerations: Some Definitional Elements, Information on Minimum Training (FAQs No. 35 & 36)

“In-house” MDAIs are not considered high-risk systems. – MDAIs developed and used only within health institutions established in the EU are not subject to the third-party conformity assessment requirement; therefore, they are not classified as high-risk AI systems. The FAQs reaffirm, however, that other obligations of the EU AI Act may still apply (e.g., disclosure obligations and compliance with trustworthy AI principles).

AI training for users of MDAI. – As part of their risk management, manufacturers must ensure that deployers are trained to use MDAIs. This training should help reduce foreseeable misuse and lapses in oversight during deployment, as well as assist deployers in making informed decisions. When human oversight measures are deemed appropriate in view of the level of risk, autonomy, and context of use, the individuals responsible for human oversight must understand the capabilities and limitations of the device and be able to monitor it effectively. Manufacturers should advise on the education and training required for the specific device.

Manufacturers and deployers must ensure an adequate level of AI literacy among their staff and other persons dealing with the operation and use of AI systems on their behalf, taking into account those persons' specific circumstances (e.g., education and experience).

Conclusion: Harmonization, Not Duplication

The FAQs pave the way towards some form of ‘regulatory coherence’ in implementing the requirements of the AI Act. While the AI Act establishes a horizontal layer of compliance, integrating it with MDR/IVDR standards enables manufacturers to adopt a unified approach that ‘kills two birds with one stone.’ It also sets a template for other sectors subject to European Union ‘safety’ regimes that fall within the scope of the EU AI Act.

For companies at the forefront of medical AI, aligning with these dual requirements early is critical, not only to meet compliance expectations but also to earn users’ trust.

Appendix A - How to Operationalize Overlapping/Enhanced Requirements

The FAQs provide details on how to operationalize overlapping obligations (or enhanced obligations arising from the application of the EU AI Act) in five key areas: (i) management systems; (ii) data governance; (iii) technical documentation; (iv) transparency and human oversight; and (v) accuracy, robustness, and cybersecurity.

Management Systems

Manufacturers are required to manage the entire lifecycle of MDAIs, ensuring safety and consistent performance through continuous review and oversight throughout the lifecycle and post-market monitoring. This covers activities such as the design, development, testing, deployment, monitoring, and updating of the high-risk system, along with comprehensive documentation of all relevant changes and their impacts.

In addition, a quality management system must include substantive requirements and procedural obligations to ensure compliance with applicable requirements of both the MDR/IVDR and the AI Act.

As the quality management system obligations under the AI Act are targeted at the AI system, the FAQs confirm that they should be regarded as complementary to those required under the MDR/IVDR. Therefore, they should be integrated into the existing quality management system under the MDR/IVDR.

Risk management systems under both regulations require manufacturers to reduce the identified and assessed risks related to system design, development, and deployment, which for MDAIs should include not only general device risks but also AI-specific ones (e.g., data biases).

Data Governance

High-quality data plays a central role in ensuring performance and safety, especially for the development of high-risk MDAIs based on machine learning techniques.

High-quality data sets for the training, validation, and testing of MDAIs require the implementation of appropriate data governance and management practices that, in the case of personal data, should be fully compliant with the GDPR and include transparency about the original purpose of the data collection.

Manufacturers must ensure that high-quality datasets are sufficiently representative; free of errors; complete in view of the intended purpose; endowed with the appropriate statistical properties; and examined for possible biases that are likely to affect the health and safety of persons, have a negative impact on fundamental rights, or lead to discrimination prohibited under European Union law.

Record-keeping and logging obligations (pre-market and post-market) introduced by the AI Act also facilitate traceability and identification of situations where an MDAI may present a risk due to a potential bias in the training, validation or testing data sets.

Validation of the training data used is key and should be demonstrated as part of the relevant studies to ensure the accuracy, reliability, and effectiveness of the MDAI.

Finally, manufacturers must document these activities and their effectiveness.

Technical Documentation

A unified technical file must include both (i) MDR/IVDR-mandated elements such as descriptions of software, software architecture, data processing methods, and risk management strategies; and (ii) AI-specific documentation, focusing on transparency and accountability, including risk assessments, data governance practices, and performance testing outcomes of the high-risk MDAI.

Documentation should also cover system design, development procedures, functionality, performance characteristics, system architecture, the computational resources used to develop, train, and test the system, and the intended use and purpose, together with evidence of conformity with relevant regulatory requirements, including quality management processes.

Transparency & Human Oversight

Transparency. – Transparency requirements are essential, especially to promote accountability, and should be addressed within the manufacturer’s risk management system and quality management system and verified through the conformity assessment procedure.

Manufacturers must ensure that high-risk MDAIs intended to interact directly with natural persons are designed and developed in such a way that prospective users are informed that they are interacting with an AI system (unless this is reasonably obvious). Users must also be informed of the purpose, operation (including data processing), and limitations of the device. This includes implementing safeguards, interpretation tools, and user interfaces that make AI outputs meaningful and trustworthy throughout the lifecycle of the MDAI, spanning design, documentation and record-keeping, labelling, and post-market surveillance.

Providers are also required to inform deployers appropriately to ensure proper use of the system.

Additionally, manufacturers must apply usability engineering principles to eliminate or reduce risks related to user errors as far as possible. They must consider the intended users’ knowledge, assess whether training is appropriate, and comply with the related documentation obligations.

Human oversight. – Manufacturers should design high-risk MDAIs so as to allow human intervention in critical decision-making processes. The implemented oversight measures should be proportionate to the risk, with higher levels of autonomy or impact requiring stronger oversight mechanisms. These measures should also be clearly defined and documented and should include appropriate instructions for use, which will be pivotal in ensuring the safe and effective use and deployment of high-risk MDAIs and in allowing appropriate supervision by healthcare professionals and institutions.

Human oversight can also be understood as a risk management measure that calls for manufacturers to: (1) eliminate or reduce risks as far as possible through safe design and manufacture; (2) if necessary, include alarms in relation to risks that cannot be eliminated; and (3) provide information for safety.

Furthermore, for MDAIs, manufacturers must specifically consider what level of human oversight is necessary and appropriate according to the level of risk posed by the MDAI. Informed consent is not required except for clinical investigations and performance studies, but these other transparency and human oversight requirements help safeguard patient autonomy and support the ethical deployment of MDAIs.

Traceability. – Traceability applies in two interrelated ways: (i) traceability of the device’s movement and lifecycle; and (ii) traceability of the system’s functioning and performance.

Accuracy, Robustness, and Cybersecurity

Any risks associated with the operation of the device must be acceptable to enable a high level of protection of health and safety, considering the generally acknowledged state of the art. This can only be achieved through the establishment of an adequate balance between benefit and risk.

Cybersecurity measures implemented by manufacturers must aim to prevent unauthorized access, cyberattacks, exploits, and manipulation, and to ensure operational resilience, while addressing AI-specific vulnerabilities and considering AI-specific assets. For this purpose, cybersecurity should be part of the risk and quality management systems, and manufacturers must conduct risk assessments to identify potential vulnerabilities and implement mitigation measures appropriate to the relevant circumstances and risks throughout the entire lifecycle of the MDAI.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© King & Spalding
