Key Insights from Sheppard Mullin and Marsh McLennan’s Webinar on Navigating Healthcare Risks in a Rapidly Evolving AI Landscape

Sheppard Mullin Richter & Hampton LLP

On June 10th, Sheppard Mullin partner Carolyn Metnick and associate Esperance Becton, in collaboration with Marsh McLennan, presented the CLE webinar, “Navigating Healthcare Risks in a Rapidly Evolving Patient and Provider Centered AI Landscape.” The session addressed the growing legal, operational, and ethical risks of AI adoption in healthcare, emphasizing the importance of thoughtful governance and risk mitigation. Key discussion points included regulatory compliance, implementation strategies, liability trends, Marsh’s generative AI risk framework, and insurance considerations.

Regulatory Compliance: Legal Frameworks and Regulations

The presenters opened by acknowledging a critical tension: AI innovation in healthcare is advancing more rapidly than existing regulatory frameworks. While AI holds promise for enhancing efficiency—such as reducing administrative burdens and facilitating personalized care—it also raises concerns around patient privacy, transparency, and algorithmic bias. Esperance Becton emphasized that, in the absence of AI-specific federal legislation, providers must rely on broadly applicable laws to guide responsible deployment, such as the Health Insurance Portability and Accountability Act (HIPAA), the Federal Food, Drug, and Cosmetic Act, the Federal Trade Commission Act, and Title VI of the Civil Rights Act. At the state level, jurisdictions like California are paving the way with laws mandating disclosure when generative AI (Gen AI) is used in clinical communications and prohibiting health insurers from denying coverage solely based on AI-driven decisions without meaningful human oversight.

In addition to statutory frameworks, providers should align their practices with emerging industry guidance. Entities such as the Joint Commission, the American Medical Association, and the Coalition for Health AI have issued recommendations focused on promoting safety, ethics, and equity. Navigating this patchwork of laws and guidance demands continuous diligence and cross-functional coordination.

AI Implementation and Third-Party Contracting

Successful AI implementation and integration require collaboration among compliance, IT, legal, and clinical teams. The presenters discussed the creation of multidisciplinary AI governance committees to oversee implementation, approval processes, and ongoing oversight. Healthcare entities should also consider developing clearly articulated policies addressing permitted use cases, training, and incident response protocols.

Carolyn Metnick discussed the importance of applying existing legal frameworks at implementation to support successful integration of AI in healthcare settings, and identified the following core risk areas:

  • HIPAA Compliance. The unauthorized use or disclosure of protected health information by AI systems or vendors may result in significant penalties under HIPAA.
  • FDA Oversight. AI technologies that meet the criteria for software as a medical device may be subject to U.S. Food and Drug Administration (FDA) regulation and require premarket clearance or approval.
  • FTC Regulation. Marketing AI-powered healthcare tools with unsubstantiated performance claims or a lack of transparency can trigger enforcement actions by the Federal Trade Commission (FTC) for deceptive or unfair practices.
  • State Law Trends. States such as California, Colorado, and Utah are emerging as leaders in enacting healthcare-specific AI legislation, creating a patchwork of state compliance obligations for healthcare organizations operating across jurisdictions.

Another critical area is third-party vendor engagement. Given the influx of emerging startups in the AI market, healthcare organizations must conduct rigorous due diligence before onboarding any external solution. Contracts should include comprehensive provisions related to legal compliance, data ownership and licensing, representations and warranties, and indemnification.

AI Liability in Healthcare: Navigating Uncharted Territory

Marsh McLennan’s Serena Sowers (Senior Vice President) and Hala Helm (Managing Director, Strategic Healthcare Risk Advisor) offered insights into the emerging liability landscape, underscoring the complex legal questions that artificial intelligence introduces across established doctrines:

  • Standard of Care. How the standard of care in medical malpractice might evolve when clinicians rely on AI-generated recommendations.
  • Vicarious Liability. Whether AI tools could be considered agents of healthcare providers or institutions.
  • Learned Intermediary Rule. The rule's applicability is being reevaluated in scenarios where clinicians retain the ability to review and override AI outputs.
  • Enterprise Liability. As AI becomes more deeply integrated into clinical workflows, legal responsibility could shift from individual practitioners to healthcare organizations.
  • Products Liability. The classification of AI as a product or a service remains unsettled, with significant implications for accountability.
  • AI Personhood. Whether future legal frameworks could permit direct claims against AI systems themselves.

Insurance coverage for AI-related risks may touch on various lines, including medical malpractice, cyber liability, errors and omissions, and product liability. However, the presenters cautioned that coverage frameworks are still evolving. New understandings of the roles of clinicians (as “users”) and hospitals (as “deployers”) are reshaping liability analysis across these legal regimes as well.

Marsh’s Framework for Generative AI Risk Management

Jaymin Kim (Managing Director of Emerging Technologies at Marsh) presented Marsh’s risk framework developed around Gen AI, or AI capable of producing original content. She clarified that while Gen AI does not introduce entirely new categories of risk, it can exacerbate existing ones—such as privacy breaches, intellectual property disputes, and systemic bias. For example, bias embedded in Gen AI training data can lead to widespread discriminatory outcomes, potentially triggering class action litigation and reputational harm. Three key categories of risk controls include the following:

  1. Process Controls: Establishing centralized, multi-layer governance structures that span all organizational AI use cases, including non-clinical applications.
  2. People Controls: Educating and training employees to avoid unintended data exposure, particularly when using public-facing Gen AI tools.
  3. Technical Controls: Integrating AI-specific cybersecurity protocols, implementing access controls, and conducting regular audits to maintain system integrity.

Insurance Policy Considerations

Insurers are beginning to tailor underwriting questions around AI use and expect organizations to respond to inquiries such as whether AI has been deployed and for what purposes, whether data governance and privacy safeguards are in place, whether a formal AI oversight committee has been established, and how contractual liabilities with third-party vendors are addressed.

While standalone AI policies are not yet widely available, carriers are starting to introduce AI-specific endorsements and exclusions into broader policies. Organizations should be proactive during negotiations and consider whether AI-related terms like “algorithmic bias” or “AI failure scenario” are clearly defined. It is also important for organizations to understand when and how AI-related incidents activate coverage, and what kind of documentation, audit, and explainability obligations may be required for claim eligibility.

Key Risk Management Takeaways

Marsh concluded the webinar with three critical risk management takeaways for healthcare organizations navigating the evolving AI landscape.

  • Distinguish between generative and traditional AI systems and maintain visibility into how these tools are deployed across the enterprise.
  • Implement centralized, cross-functional AI governance frameworks, supported by well-defined usage policies that are consistently enforced.
  • Continually assess how technological, legal, and insurance developments impact organizational risk exposure.

While AI offers transformative potential for the healthcare industry, the legal and regulatory terrain remains unsettled. To realize the benefits of AI safely and responsibly, healthcare entities must invest in robust governance structures, integrate contractual protections, and align their insurance strategies accordingly.

The full recording of the webinar is available here.

* Marsh McLennan is not affiliated with Sheppard Mullin.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Sheppard Mullin Richter & Hampton LLP
