The Legal Landscape of AI in Insurance: What New York Insurers Need to Know

Weber Gallagher Simpson Stapleton Fires & Newby LLP

This update provides an overview of the legal and regulatory considerations surrounding the use of artificial intelligence (AI) in insurance products under New York law, with a particular focus on underwriting, pricing, and claims handling. Ultimately, the use of AI in insurance products presents both opportunities and challenges under New York law. While AI can enhance efficiency and innovation, its use must comply with strict legal standards for fairness, transparency, and due process. Insurers must ensure that their AI systems are free from bias, provide clear disclosures to consumers, and maintain robust governance and accountability practices. By adhering to these legal requirements, insurers can harness the potential of AI while protecting consumer rights and upholding public trust.

Overview of AI in Insurance
AI is increasingly utilized in the insurance industry to enhance efficiency, improve decision-making, and streamline processes. Its applications range from underwriting and pricing to claims handling, offering significant potential for innovation. However, the use of AI also raises important legal and regulatory considerations, particularly with respect to transparency, fairness, and due process.

Legal and Regulatory Framework
Under New York law, insurers must ensure that their use of AI complies with strict standards designed to protect consumers and promote fairness. Key considerations include:

  1. Admissibility and Due Process Concerns: New York courts have recognized that due process issues can arise when decisions are made by software programs, including AI, rather than by human analysts, and have emphasized the need to reassess constitutional due process requirements in light of the exponential growth of AI technologies. For insurers, this suggests that AI-driven decisions must be transparent, explainable, and subject to human oversight to meet due process standards.[1]
  2. Regulatory Guidance on AI in Insurance: The New York Department of Financial Services (DFS) has issued detailed guidance to ensure that the use of AI in underwriting and pricing does not result in unfair discrimination (see also § 43.06, Unfair Claim Practices). Insurers must test their AI systems for unfair discrimination, maintain robust governance and documentation practices, manage third-party vendor relationships, and ensure consumer transparency, particularly when adverse decisions are based on AI-generated data.[2] These requirements underscore the importance of accountability and consumer protection in the use of AI.
  3. Transparency and Disclosure: Transparency is a key legal requirement in New York’s insurance industry. Insurance producers must disclose their role in transactions, their compensation arrangements, and any factors affecting their compensation.[3] This obligation extends to the use of AI: under the DFS’s guidance, insurers must notify consumers of adverse actions and provide clear explanations for such decisions, including the primary factors influencing the action,[4] and must give consumers clear and understandable information about how AI systems are used in underwriting, pricing, and claims handling.[5] These requirements aim to enhance transparency and protect consumer interests.
  4. Prohibition of Unfair Discrimination: New York law prohibits unfair discrimination in underwriting and pricing. Insurers must promote equitable rate treatment among insureds and avoid practices that discriminate on the basis of protected characteristics. This standard applies equally to AI-driven decisions: insurers must design and test their algorithms to ensure they do not produce biased or discriminatory outcomes (an illustrative testing sketch follows this list).[6]
  5. Governance and Accountability: Insurers must maintain robust governance and documentation practices to demonstrate compliance with regulatory requirements. This includes managing relationships with third-party vendors and ensuring that AI systems are tested for fairness and accuracy.[7]
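
For illustration only, the short Python sketch below shows one way a compliance team might spot-check model outcomes for disparate approval rates across groups. The pandas-based approach, the column names, and the use of the common "four-fifths" rule of thumb as a screening threshold are assumptions made for this example; the DFS guidance does not prescribe any particular metric, and an actual testing program should be designed with actuarial and legal input.

    # Illustrative sketch only; column names and threshold are hypothetical.
    import pandas as pd

    def adverse_impact_ratios(decisions: pd.DataFrame,
                              group_col: str = "group",
                              outcome_col: str = "approved") -> pd.Series:
        # Approval rate per group, divided by the highest group's approval rate.
        rates = decisions.groupby(group_col)[outcome_col].mean()
        return rates / rates.max()

    def flag_potential_disparity(decisions: pd.DataFrame,
                                 threshold: float = 0.8) -> pd.Series:
        # Groups falling below the chosen ratio warrant further review.
        ratios = adverse_impact_ratios(decisions)
        return ratios[ratios < threshold]

    # Hypothetical usage with made-up data:
    sample = pd.DataFrame({"group": ["A", "A", "B", "B", "B"],
                           "approved": [1, 1, 1, 0, 0]})
    print(flag_potential_disparity(sample))  # group B's ratio (~0.33) is flagged

A screen like this only surfaces candidates for closer examination; it does not itself establish or rule out unfair discrimination under New York law.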

Challenges and Considerations
While AI offers significant opportunities for innovation in the insurance industry, its use also presents challenges. Insurers must navigate complex legal and ethical issues to ensure that AI systems are used responsibly and in compliance with New York law. Key challenges include:

  • Ensuring that AI systems are free from bias and do not produce discriminatory outcomes.
  • Balancing the need for efficiency with the requirement for transparency and consumer protection.
  • Addressing potential due process concerns by providing clear explanations for AI-driven decisions and allowing for human oversight (a simple illustration follows this list).
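
By way of example only, the following sketch outlines how an insurer might assemble a plain-language adverse-action notice listing the primary factors behind a model-driven decision. The factor names, contribution scores, and notice wording are hypothetical; how "primary factors" are identified will depend on the model used, and the actual content of any notice must track the applicable statutory and regulatory requirements.

    # Illustrative sketch only; factor scores and wording are hypothetical.
    def adverse_action_notice(applicant_name: str,
                              factors: dict[str, float],
                              top_n: int = 3) -> str:
        # Rank factors by the magnitude of their (hypothetical) contribution scores.
        ranked = sorted(factors.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_n]
        lines = [f"Dear {applicant_name}: your application was not approved.",
                 "The primary factors influencing this decision were:"]
        lines += [f"  - {name}" for name, _ in ranked]
        lines.append("You may request additional information or review of this decision.")
        return "\n".join(lines)

    # Hypothetical usage with made-up factor scores:
    print(adverse_action_notice("Jane Applicant",
                                {"claims history": 0.42,
                                 "property age": 0.31,
                                 "coverage gap": 0.10}))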

Conclusion
The use of AI in insurance products has the potential to transform the industry, offering new opportunities for efficiency and innovation. However, insurers must carefully navigate the legal and regulatory landscape to ensure compliance with New York’s strict standards for fairness, transparency, and due process. By implementing robust governance practices and prioritizing consumer protection, insurers can harness the benefits of AI while mitigating potential risks.


[1] See People v Wakefield, 38 NY3d 367, 380 (2022).
[2] Locke Lord QuickStudy: NY DFS Jumps on the AI Bandwagon by Issuing Proposed Guidance to New York Licensed Insurers Relating to Underwriting.
[3] NY CLS Ins § 2119.
[4] NY CLS Ins § 2805; NY CLS Gen Bus § 380-i.
[5] See Locke Lord QuickStudy: NY DFS Jumps on the AI Bandwagon by Issuing Proposed Guidance to New York Licensed Insurers Relating to Underwriting; NY CLS Ins § 2805; § 221.6 Adverse action notification; NY CLS Ins § 2804; § 221.5 Disclosure requirements; § 52.25 Rules relating to the content and sale of forms for long term care insurance, nursing home insurance only, home care insurance only, and nursing home and home care insurance; 1 New Appleman New York Insurance Law § 9.05.
[6] See NY CLS Ins § 2303; NY CLS Ins § 4224; NY CLS Ins § 2606; NY CLS Ins § 2301; NY CLS Ins § 2344; NY CLS Ins § 6905.
[7] See Locke Lord QuickStudy: NY DFS Jumps on the AI Bandwagon by Issuing Proposed Guidance to New York Licensed Insurers Relating to Underwriting; What’s In NYDFS Guidance On Use Of AI In Insurance (August 1, 2024); Administrative Code of New York § 419.11, Oversight of third-party providers; NY CLS Ins § 3239; NY CLS Ins § 1608.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Weber Gallagher Simpson Stapleton Fires & Newby LLP

