Red Teaming Is an Effective Tool for Insurer Assessment of AI Risks

Troutman Pepper Locke

The insurance industry is facing increased scrutiny from insurance regulators over its use of artificial intelligence (AI). Red teaming can be leveraged to address some of the risks associated with an insurer’s use of AI. The U.S. Department of Commerce’s National Institute of Standards and Technology (NIST) defines a “red team”[1] as:

A group of people authorized and organized to emulate a potential adversary’s attack or exploitation capabilities against an enterprise’s security posture. The Red Team’s objective is to improve enterprise cybersecurity by demonstrating the impacts of successful attacks and by demonstrating what works for the defenders (i.e., the Blue Team) in an operational environment. Also known as Cyber Red Team.

Red teaming originated as a cybersecurity practice, and the insurance industry’s enterprise risk, legal, and compliance functions are becoming more familiar with its use in connection with AI corporate governance efforts.

Insurance regulators view the use of AI by insurers as creating significant risks for the insurance-buying public and have been working diligently to understand insurers’ use of AI and to develop effective AI regulation. In a June 4, 2025, letter to the leaders of the U.S. Senate, the National Association of Insurance Commissioners (NAIC)[2] stated that:

The NAIC and its members are already leading efforts to address the challenges and opportunities presented by AI. For example, in 2023, the NAIC adopted a Model Bulletin requiring insurers to implement written AI governance programs that emphasize transparency, fairness, and risk management. Over half of all states have adopted this or similar guidance, and more are following suit. State regulators, through the NAIC, continue to develop model laws and seek input from stakeholders to ensure that regulatory frameworks keep pace with technological change.

As described above, state insurance regulators are taking steps to protect insurance consumers where insurers are using AI. For example, 24 states have adopted the NAIC Model Bulletin on the Use of Artificial Intelligence by Insurers (NAIC Model AI Bulletin)[3]; the New York Department of Financial Services (NYDFS) has promulgated its Cybersecurity Regulation (23 NYCRR 500)[4] and Circular Letter No. 7 Regarding the Use of Artificial Intelligence Systems and External Consumer Data and Information Sources in Insurance Underwriting and Pricing (Circular Letter No. 7)[5]; and Colorado has promulgated Regulation 10-1-1 et seq., Governance and Risk Management Framework Requirements for Life Insurers’ Use of External Consumer Data and Information Sources, Algorithms, and Predictive Models (CO AI Regulations)[6] (collectively, the State AI Guidance). While the State AI Guidance does not specifically mandate red teaming, adversarial testing could be a valuable component of an insurer’s AI corporate governance program.

In the insurance industry, red teaming[7] for AI applications is described as a strategic approach to testing and evaluating the security and robustness of AI systems. This involves simulating adversarial attacks to identify vulnerabilities and assess the resilience of AI models used in various insurance processes, such as underwriting, claims processing, fraud detection, and customer service. Red teaming may reveal unlawful bias or unfairly discriminatory practices resulting from the insurer’s use of AI applications.

The primary goal is to objectively assess the AI system’s ability to withstand attacks that could compromise data integrity, privacy, or operational functionality. Adversarial testing includes creating scenarios where AI models are exposed to adversarial inputs designed to deceive or manipulate the system, such as altered data or malicious algorithms. Red teaming helps identify potential risks associated with AI deployment, including biases, errors, and vulnerabilities that could lead to incorrect decision-making or security breaches. Insurers use red teaming to test internally developed AI applications as well as AI purchased from third-party vendors. Some third-party vendors also disclose their use of red teaming. However, insurers should not rely solely on the red teaming representations of their third-party vendors because the insurer’s use of its own data and proprietary changes to the AI applications may create additional vulnerabilities, biases, or unlawful outputs.
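
To make the concept concrete, below is a minimal sketch in Python of one such adversarial probe against a hypothetical underwriting model. The scoring function, feature names, and approval threshold are illustrative placeholders rather than any insurer’s actual system; the probe flips a single attribute that could act as a proxy for a protected characteristic and flags applicants whose approval decision changes.

    from copy import deepcopy

    def score_applicant(applicant):
        # Placeholder "model" for illustration only; in practice this call would
        # invoke the insurer's deployed AI application or a vendor API.
        return 0.7 if applicant["zip_code"].startswith("9") else 0.4

    def paired_perturbation_test(applicants, field_name, alternate_value, threshold=0.5):
        # Flip a single attribute (for example, a potential proxy such as ZIP code)
        # and flag applicants whose approval decision changes even though the
        # risk-relevant facts do not.
        flagged = []
        for applicant in applicants:
            perturbed = deepcopy(applicant)
            perturbed[field_name] = alternate_value
            original_decision = score_applicant(applicant) >= threshold
            perturbed_decision = score_applicant(perturbed) >= threshold
            if original_decision != perturbed_decision:
                flagged.append((applicant, perturbed))
        return flagged

    sample = [
        {"applicant_id": 1, "zip_code": "90210", "claims_history": 0},
        {"applicant_id": 2, "zip_code": "10001", "claims_history": 0},
    ]
    findings = paired_perturbation_test(sample, field_name="zip_code", alternate_value="10001")
    print(len(findings), "applicant(s) changed decision when only the ZIP code changed")

A probe of this kind only surfaces candidate issues; whether a flagged result actually reflects unlawful bias or unfairly discriminatory practice remains a legal and actuarial judgment.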

Following the best practices below can help insurers enhance their security posture, protect sensitive data, and strengthen their AI corporate governance:

  1. Define Clear Objectives: Establish clear goals for the red teaming exercise. This could include testing the effectiveness of security controls, identifying potential data breaches, or assessing the response to simulated attacks.
  2. Assemble a Skilled Team: Form a team with diverse expertise, including cybersecurity professionals, legal advisors, and industry specialists or subject matter experts. This ensures a comprehensive approach to identifying and addressing vulnerabilities.
  3. Understand Regulatory Requirements: Ensure the red team is aware of relevant regulatory guidance. This helps align the red teaming activities with compliance requirements.
  4. Simulate Realistic Scenarios: Design scenarios that mimic real-world threats specific to the insurance industry, such as phishing attacks targeting customer data or ransomware attacks on claims processing systems.
  5. Maintain Ethical Standards: Conduct exercises ethically, ensuring that customer data and business operations are not adversely affected. Obtain necessary permissions and inform stakeholders about the scope and limitations of the exercise.
  6. Focus on Critical Assets: Prioritize testing on critical assets such as customer databases, claims processing systems, and financial transaction platforms to identify vulnerabilities in the most sensitive areas.
  7. Collaborate With Blue Teams[8]: Foster collaboration between red and blue teams to enhance the overall security posture. This can include sharing findings and working together on remediation strategies.
  8. Document and Report Findings: Provide detailed reports on vulnerabilities discovered, potential impacts, and recommended remediation steps, and report the findings in a manner consistent with the insurer’s AI corporate governance (a sketch of a structured findings record follows this list).
  9. Develop a Remediation Plan: Create a plan to address identified vulnerabilities, including timelines, responsible parties, and follow-up testing to ensure issues are resolved.
  10. Continuous Improvement: Use the insights gained from red teaming exercises to continuously improve security measures and incident response plans. Regularly update the red teaming strategy to adapt to evolving threats.
  11. Engage External Experts: Consider involving third-party experts for an unbiased assessment. External red teams can provide fresh perspectives and identify issues that internal teams might overlook.
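
As referenced in item 8 above, the following is a minimal sketch in Python of how a single red-team finding might be recorded to support the documentation, remediation, and follow-up practices in the list. The field names, severity levels, and example values are illustrative assumptions, not a prescribed regulatory format.

    from dataclasses import dataclass, asdict
    from datetime import date
    import json

    @dataclass
    class RedTeamFinding:
        # Structured record of a single red-team finding; the fields and severity
        # levels here are illustrative, not a prescribed regulatory format.
        finding_id: str
        ai_system: str            # e.g., "claims triage model"
        scenario: str             # the simulated attack or adversarial input used
        severity: str             # e.g., "low", "medium", "high"
        potential_impact: str     # data integrity, privacy, bias, etc.
        remediation_owner: str
        remediation_due: date
        retest_required: bool = True
        status: str = "open"

        def to_report_entry(self):
            # Serialize the finding for inclusion in governance documentation.
            entry = asdict(self)
            entry["remediation_due"] = self.remediation_due.isoformat()
            return json.dumps(entry, indent=2)

    finding = RedTeamFinding(
        finding_id="RT-2025-001",
        ai_system="underwriting scoring model",
        scenario="paired ZIP-code perturbation changed the approval decision",
        severity="high",
        potential_impact="possible proxy discrimination in pricing",
        remediation_owner="model risk management",
        remediation_due=date(2025, 9, 30),
    )
    print(finding.to_report_entry())

Keeping findings in a consistent, structured form makes it easier to track remediation and to produce documentation responsive to regulator inquiries.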

Other considerations in deploying red teaming include whether the attorney-client privilege or other privileges (e.g., the insurance compliance self-evaluative privilege[9]) may apply to red teaming exercises under certain conditions. Such privileges do not attach automatically. For example, the attorney-client privilege may apply if the red teaming exercise is conducted for the purpose of providing legal advice or services and the related communications are confidential and made to seek or provide legal advice.

Some key considerations for the attorney-client privilege to apply to red teaming exercises include:

  1. Involvement of Legal Counsel: Internal legal counsel (and, as applicable, outside legal counsel) should be directly involved in the planning, execution, and analysis of the red teaming exercise. Their involvement should be clearly documented as part of providing legal advice.
  2. Purpose of Legal Advice: The exercise should be conducted with the primary purpose of obtaining legal advice, such as assessing compliance with legal standards or preparing for potential litigation.
  3. Confidentiality: Communications related to the red teaming exercise should be kept confidential and shared only with those who need to know for the purpose of obtaining or providing legal advice.
  4. Documentation: Clearly document the role of legal counsel and the purpose of the exercise as part of a legal strategy. This can help establish the intent to seek legal advice.
  5. Separate Business and Legal Advice: Ensure that the exercise is not solely for business purposes. If the primary purpose is business-related, the privilege may not apply.

As insurers develop and implement their AI corporate governance, red teaming should be considered another “arrow in the quiver” for demonstrating to insurance regulators that insurers are assessing AI risk effectively. Transparency and documentation of the red teaming risk assessments will be helpful in responding to regulatory scrutiny.

 


[1] NIST Computer Security Resource Center (CSRC) Glossary, “Red Team.”

[2] https://content.naic.org/sites/default/files/government-affairs-letter-ai-moratorium.pdf

[3] https://content.naic.org/sites/default/files/cmte-h-big-data-artificial-intelligence-wg-map-ai-model-bulletin.pdf

[4] 23 NYCRR Part 500 (NYDFS Cybersecurity Regulation).

[5] NYDFS Insurance Circular Letter No. 7 (2024): Use of Artificial Intelligence Systems and External Consumer Data and Information Sources in Insurance Underwriting and Pricing.

[6] Colorado Regulation 10-1-1, Governance and Risk Management Framework Requirements for Life Insurers’ Use of External Consumer Data and Information Sources, Algorithms, and Predictive Models.

[7] NIST CSRC Glossary, “Red Team Exercise.”

[8] NIST CSRC Glossary, “Red Team/Blue Team Approach.”

[9] 215 ILCS 5/155.31.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Troutman Pepper Locke

