On July 7th, the NAIC’s Big Data and AI Working Group (Working Group) exposed a draft of an AI Systems Evaluation Tool (Evaluation Tool). The stated purpose is to provide regulators with a tool that enables them to identify and assess an insurer’s AI-related risks on an ongoing basis, with a scope that considers both financial and consumer risks. The Evaluation Tool is intended to supplement existing market conduct, product review, form filing, financial analysis, and financial examination review procedures, allowing regulators to determine the extent of an insurer’s AI systems usage and whether additional analysis is needed. An extended comment period was announced on July 28th, with interested parties now able to provide comments through September 5th. Initial takeaways from the exposed draft include:
- The Evaluation Tool assesses a wide range of potential sources of financial, financial reporting, and consumer risks resulting from AI systems, a scope more expansive than the primary focus on adverse consumer outcomes in the NAIC’s Model Bulletin on the Use of AI Systems by Insurers (Model Bulletin).
- The tailorable templates, charts, and checklists included in the exhibits to the Evaluation Tool allow for regulator discretion and customization in requesting and reviewing quantitative and qualitative information about the insurer’s AI inventory, governance, and risk management practices.
- The Evaluation Tool is intended to be one resource that could increase regulators’ understanding of AI systems utilization and risk assessment across insurance companies when performing market conduct exams, financial analysis, and other reviews. How the Working Group will coordinate with other working groups and committees at the NAIC that oversee market conduct and financial exams is unclear.
The Exposed Draft Evaluation Tool
While regulators will not be required to use the Evaluation Tool, the Working Group presents its proposal as a standardized and efficient method of data collection that allows regulators to conduct assessments and keep pace as insurers’ use of AI accelerates and expands. The four exhibits included as part of the Evaluation Tool, detailed further below, are optional and request information from insurers on differing topics and in varying formats, probing into the AI systems architecture and risk assessment processes that an insurer has implemented.
Exhibit A: Quantify Regulated Entity’s Use of AI Systems
This exhibit is a chart intended to give regulators a baseline understanding of how pervasive the insurer’s current and planned use of AI systems is across a wide range of operational areas that may or may not be within an insurance regulator’s purview (including marketing, claims, legal and compliance, investments and capital management, reserves, reinsurance, HR, and fraud detection, among others). Responses would provide regulators with a quantitative assessment focused on the number of AI systems that could have consumer impacts or material financial impacts, and the volume of consumer complaints currently resulting from those AI systems.
Exhibit B: AI Systems Governance Risk Assessment Framework
The second exhibit provides two different approaches to assessing a company’s AI Governance Framework: first, a questionnaire meant to elicit a comprehensive narrative discussion from the insurer, and second, a checklist seeking specific responses from the company to a wide range of detailed questions on particular issues and topics. Both templates focus on key concepts from the Model Bulletin related to documented and formal policies, procedures, and processes for AI risk management throughout the AI lifecycle. Notably, the exhibit also seeks disclosure “about efforts to maintain compliance and the integrity of financial reporting and control integrity” and specifically inquires into the use of AI in generating financial transactions and the information reported on financial statements. Given the broadened scope and heightened detail of inquiry, the draft exhibit could require insurers to significantly expand their internal risk management oversight as well as their oversight of vendors and professional services providers.
Exhibit C: AI Systems High-Risk Model Details
The draft Exhibit C defines “high-risk AI system models” as those that engage in automated decisioning and “that could cause adverse consumer, financial or financial reporting impact.” This exhibit would give regulators detailed information about each high-risk AI system model used by the insurer, including technical, operational, and legal disclosures about the models. The draft envisions regulators relying on the inventory and governance information provided in the first two exhibits to inform what further information a regulator would need regarding a high-risk system model.
Exhibit D: AI Systems Model Data Details
The last exhibit is focused on the data elements used in the AI system models, including the sources for and types of information (with 24 different categories listed, ranging from education level and occupation to social media and geocoding). The draft instructions ask insurers to confirm whether these data elements are used in model development and to identify whether this training data is sourced internally from the policyholder insurance experience or externally from a third party. Insurers are also asked to describe how this data is used throughout their insurance operations and by each line of insurance.
The exhibits are all supported by definitions sourced from the Model Bulletin, but the Definitions and Appendix section also includes a range of examples associated with different operations within the insurance lifecycle.
Timeline and Other Regulatory Developments
A sixty-day comment period is currently open for interested parties to submit comments to the Working Group about the exposed draft of the AI Systems Evaluation Tool. The comment period will close on September 5th. During the Working Group’s July 16th meeting, the presentation and discussion made clear that work on the Evaluation Tool will continue into next year, with a regulator and self-audit pilot to be conducted in 2026 following any redrafting the Working Group completes based on the public comments it receives. Only after that pilot would the tool be finalized by the Working Group. At this time, the next steps required for adoption of the Evaluation Tool by the NAIC are unclear.
The Working Group also just concluded a Request for Information period related to the possible development of an AI Model Law. Discussion of the comments received in response to the RFI will be the priority focus of the Working Group’s August 12th convening at the NAIC Summer National Meeting in Minneapolis. It remains to be seen whether the Working Group will proceed with the development of an AI Model Law over the objections of most in the insurance industry.
Conclusion
As the Working Group continues to develop and revise the format and content of the AI Systems Evaluation Tool, it will likely need to address concerns regarding the tool’s scope, the degree of detail required in responses, and the compliance burden companies may face in adapting to the proposed Evaluation Tool.