New EU Responsibility and Liability Landscape for Smart Medical Devices in a Global Context

White & Case LLP

Artificial intelligence (AI) is already indispensable in the healthcare and life sciences sector. Intelligent medical devices promise nothing less than a revolution in the art of healing. With its legislative projects on AI and product liability, the European Union has recently addressed the rapid advance of AI across numerous economic sectors. The interaction between these cross-sectoral regulations and sector-specific medical device law raises intricate legal issues.

Background

The European regulatory landscape for AI-powered medical technologies is undergoing a significant transformation. The EU is taking on the regulatory challenge of balancing the potential of AI against its risks. Originally, the European Commission envisioned a legislative system characterized by heightened scrutiny, procedural complexity, and expanded liability, while the promotion of innovation played a subordinate role. The withdrawal of the Artificial Intelligence Liability Directive (“AILD”) marked a reduction in regulatory density. Nevertheless, the parallel adoption of the Artificial Intelligence Act (“AI Act”) and the revised Product Liability Directive (“PLD”) creates a tight liability framework for manufacturers of intelligent medical devices.

Smart medical devices

There is no official definition of “smart medical devices” in current EU law. A working definition can be synthesized from a joint reading of the Medical Device Regulation (“MDR”) and the AI Act. Accordingly, a “smart medical device” is any software which – either embedded into hardware or on its own – fulfills a specific medical purpose and incorporates at least one AI component. An AI component is a machine-based system exhibiting autonomy, adaptability, and inference capabilities. Whether a given component ultimately qualifies as AI requires a technical evaluation and a case-by-case assessment.

Legal framework for smart medical devices

Most provisions of the AI Act will become applicable on 2 August 2026. The use of the present tense in this Client Alert refers to the legal situation after the respective provisions of the AI Act have become applicable.

Smart medical devices are subject to intersecting regulatory requirements under both the MDR and the AI Act. Many smart medical devices fall into the “high-risk” category under the AI Act, which imposes obligations regarding technical documentation, risk management, human oversight, transparency, and other matters that must be observed during development. In principle, these obligations apply independently of the requirements under the MDR. In practice, some obligations under the AI Act correspond to provisions of the MDR and pursue similar objectives. Their exact content and scope must be assessed in each individual case, considering the technical and medical features of the respective device.

Smart medical devices must undergo a conformity assessment before they may be placed on the market. In most cases, the conformity assessment will be conducted by the notified body under the MDR, which is then responsible for assessing the regulatory requirements under both the MDR and the AI Act. By contrast, smart MDR class I medical devices, which do not require an assessment by a notified body under the MDR, may be subject exclusively to a conformity assessment under the AI Act.

Both the AI Act and the MDR allocate responsibilities among different actors and across different stages of a device’s lifecycle. Among other things, providers/manufacturers must ensure that the device has been developed in accordance with the respective requirements, implement a quality and risk management system, retain documentation, and provide information to users of the device. Notably, the AI Act also stipulates obligations for deployers of AI, including professional users, such as the obligation to use the system in accordance with its instructions for use.

Extension of liability through revised Product Liability Directive

Neither the AI Act nor the MDR contains technology-specific liability provisions. Instead, smart medical devices are subject to general, technology-neutral (product) liability regimes, in particular the revised PLD and national tort or contract law. The revision of the PLD entails tightened liability for manufacturers of smart medical devices.

Revised definitions of “product” and “defectiveness”

The revised PLD explicitly mentions software in its definitions of “products” as well as “components” of products. In addition to the manufacturer of the product or component, it creates civil law liability for importers, distributors, authorized representatives, and anyone who substantially modifies the product outside the manufacturer’s control.

The revised PLD provides two independent bases for assessing the defectiveness of a product: A product is defective (1) if it does not provide the safety that a person is entitled to expect, or (2) if it falls short of the level of safety required by EU or national law. The latter provision signifies a key intersection between regulatory law and product liability law. Many requirements of both the MDR and the AI Act may be understood as “safety requirements under EU law”; failure to fulfill them may therefore trigger product liability.

Furthermore, the PLD establishes a monitoring duty throughout the entire lifecycle of the device. This follows from a joint reading of several provisions of the PLD tailored towards (smart) software, such as the special consideration given to a device’s ability to continue to learn, the carve-out from the liability exemption where the manufacturer remains in a position to provide software updates, and the potential maximum monitoring horizon of 25 years.

Extended notions of damage and restrictions to exculpation

The revised PLD broadens compensable damage to include medically recognized psychological harm and the destruction or corruption of data. Both can become relevant for users of smart medical devices. Liability thresholds have been removed. For latent personal injuries, the expiry period has been extended to 25 years. Compared to the previous PLD, the possibilities for exculpation have been narrowed: manufacturers of software (components) cannot exculpate themselves if the damage could have been prevented by software updates.

Procedural rules on presumptions and the burden of proof

In addition to its substantive provisions, the revised PLD introduces several procedural rules with potentially significant implications for manufacturers of smart medical devices. 

For instance, the revised PLD requires a defendant to disclose “relevant evidence” if the claimant can support the “plausibility” of a claim for compensation. Failure to disclose such evidence results in a presumption of defectiveness. This may not only create friction with confidentiality interests, but also pose technical challenges, especially where the inherent opacity of AI models makes it difficult to pinpoint concrete causes and effects.

Furthermore, courts may be required to presume a causal link between the defectiveness of a product and the damage in certain other cases. The respective provisions appear to be tailored towards opaque and adaptive AI technologies. Their concrete effects will depend on their implementation into national law and their application by the courts. Overall, they imply a claimant-friendly regime, especially in cases involving medical devices and AI systems. In any case, the respective recital underlines that the legislator had both AI and medical devices in mind when drafting the provisions on procedural presumptions.

Withdrawal of planned modifications to national liability law

The withdrawal of the AILD in February 2025 marks a halt in the development of a standalone tort regime for AI. Apparently, regulating AI liability within existing product and sectoral rules has been deemed sufficient. While this reduces legislative complexity, it increases the need for cross-regulatory interpretation, especially since compliance with the MDR and the AI Act does not by itself preclude liability under the PLD.

Developments in the U.S.

While the European Union has moved toward a unified regulatory and liability framework for AI-powered medical technologies, the United States continues to build its approach through iterative guidance and policy initiatives led by the U.S. Food and Drug Administration (FDA). To date, the FDA has authorized more than 850 AI/ML-enabled medical devices, primarily in radiology, reflecting significant commercial adoption. Regulatory oversight remains grounded in the existing statutory framework for Software as a Medical Device (SaMD), supplemented by the FDA’s AI/ML-Based Software as a Medical Device Action Plan and its 2019 discussion paper on a proposed regulatory framework for AI/ML modifications. These initiatives signal a lifecycle-based approach focused on transparency, algorithm change protocols, and postmarket performance monitoring. Although the U.S. lacks a dedicated liability regime for AI systems, manufacturers remain subject to traditional tort law principles, with emerging pressure to ensure algorithmic safety and reliability through quality systems regulation and real-world evidence. The FDA's Digital Health Center of Excellence continues to shape policy through stakeholder engagement and regulatory science efforts, with an emphasis on harmonization and risk-based oversight as the use of adaptive AI systems expands across medical specialties.

Conclusion

The regulatory triangle of the AI Act, MDR, and PLD tightens the liability of manufacturers of smart medical devices. The PLD transfers many of the regulatory requirements of the MDR and the AI Act into the sphere of product liability. Far-reaching liability standards and procedural rules favoring claimants may challenge manufacturers of smart medical devices. However, given the level of already established sector-specific standards for medical devices, the disruptive impact of the new liability regime remains to be seen. On the one hand, heightened liability risks and compliance obligations raise the cost of innovation and may deter smaller economic actors in the short term. On the other hand, compliance with EU standards may signal safety and help European smart medical devices achieve an image of trustworthiness.

Meanwhile, in the United States, regulators have opted for a more incremental and flexible approach. Rather than introducing new statutory liability regimes, the FDA is developing a lifecycle-based oversight model through evolving guidance, real-world monitoring expectations, and quality systems enforcement. As global competition intensifies, these contrasting regulatory philosophies—one rule-based and prescriptive, the other adaptive and policy-driven—present both challenges and opportunities for multinational manufacturers. Ultimately, in an increasingly AI-driven healthcare ecosystem, regulatory credibility, transparency, and cross-border compliance readiness may become strategic assets in gaining market access and patient trust.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© White & Case LLP
