NY Department of Financial Services Issues AI Cybersecurity Guidance

Harris Beach Murtha PLLC

The New York Department of Financial Services (DFS) has issued guidance, in the form of an industry letter, on addressing cybersecurity risks arising from artificial intelligence (AI) under its cybersecurity regulation, 23 NYCRR Part 500.

DFS does not consider the guidance in this letter an expansion of the requirements already defined in Part 500. Instead, it views the letter as an explanation of how the regulation addresses the specific risks AI poses to its regulated industries and how organizations should address them.

Organizations regulated by DFS should take note of this letter and assess their security accordingly. AI heightens the risks of social engineering and technical attacks beyond what attackers could achieve without the technology, and DFS is clearly stating it believes Part 500 covers this enhanced risk. Thus, if your organization falls victim to an AI-enhanced attack, it will be important to show you considered and addressed the increased risks, even if that did not stop the attack. DFS understands breaches can occur even in the most secure organizations, but to gain this sympathy, an organization must show a reasonable effort to address risks, especially risks identified in DFS-issued guidance.

A cornerstone of Harris Beach’s and Caetra.io’s (a Harris Beach subsidiary) approach to cybersecurity is the implementation of a cybersecurity and data privacy program built on industry-standard controls such as NIST 800-53, ISO 27001 or CIS. Our belief is that where a written and measured program exists, practices are followed throughout the entire environment -- not just on the systems that are most visible. Accordingly, there are fewer “cracks” for an AI engine to exploit and pivot into the more material systems.

The guidance from DFS is organized around the specific risks AI poses to any organization. Consistent with the agency’s position that the guidance does not expand Part 500, this legal alert recasts the DFS guidance in terms of the specific requirements of Part 500 and how each addresses the identified risks. Here are the specific controls and practices organizations must have in place to address AI risks, along with references to the applicable portions of the regulation:

Risk Assessment

Organizations must conduct regular risk assessments to identify and evaluate cybersecurity risks associated with AI. This includes assessing the potential impact of AI on the confidentiality, integrity and availability of information systems and nonpublic information (NPI) (23 NYCRR 500.9). Accordingly, since AI can introduce new vulnerabilities and amplify existing ones, potentially leading to unauthorized access, data breaches and other cyber incidents, DFS believes this assessment is already codified in 500.9.

Cybersecurity Program

Without a comprehensive program, organizations may fail to address AI-specific threats, leading to inadequate protection against AI-driven attacks. Part 500 requires organizations to develop and maintain a cybersecurity program that includes policies and procedures to protect information systems and NPI from threats identified in the organization’s risk assessment. DFS argues it is therefore contemplated in 23 NYCRR 500.2 that organizations should address the use of AI in cybersecurity.

Access Controls

AI-enhanced social engineering attacks can exploit weak access controls, resulting in unauthorized access to sensitive data. DFS considers this already addressed through its requirement to implement access controls to limit access to information systems and NPI. This includes using multi-factor authentication (MFA) and other measures to prevent unauthorized access pursuant to 23 NYCRR 500.7.

Monitoring and Testing

Part 500 requires continuous monitoring and threat detection at 23 NYCRR 500.5. The guidance asserts that AI can be used to evade traditional detection methods, making continuous monitoring and advanced threat detection even more essential. DFS further urges an organization to use its own AI to test its systems and improve threat detection and incident response capabilities.

Incident Response Plan

23 NYCRR 500.16 requires each regulated entity to have a written incident response plan. DFS’s letter guidance now requires the incident response plan to address AI-related incidents. This plan should include procedures for responding to, and recovering from, AI-driven cyberattacks.

Training and Awareness

23 NYCRR 500.14 is the portion of the regulation that addresses training. It requires regulated entities to provide regular cybersecurity training and awareness programs for employees. According to the letter guidance, these programs should now include information on AI-related threats, such as AI-enabled social engineering and deep fakes.

Third-Party Service Provider Security

Third-party service providers remain the “Achilles heel” in many cybersecurity programs because of both the lack of visibility into their practices and the limited access to their system logs before and after an event. Still, 23 NYCRR 500.11 makes covered entities responsible for the practices of their vendors, and covered entities must therefore implement appropriate cybersecurity measures to protect information systems and NPI accessible by these vendors. Now, according to the letter guidance, this also must include assessing the cybersecurity practices of third parties using AI.

Data Governance and Classification

DFS recognizes AI consumes large amounts of NPI to operate meaningfully. This creates the risk of aggregating sensitive information in a place where it is more easily accessed, making an attack to access the information more attractive. 23 NYCRR 500.3 contains a provision requiring entities to implement data governance and classification to protect NPI. Thus, DFS considers its existing regulation sufficient to require organizations to implement policies and procedures for data governance and classification to ensure the protection of NPI, especially when using AI technologies that require large amounts of data.

For organizations that already have controls and procedures in place to implement a compliance program for 23 NYCRR Part 500, this guidance requires each organization to reconsider those procedures through the lens of an AI-based attack. It is, moreover, important to document that this assessment took place, who participated and any modifications made to the plan or its procedures. For organizations that have yet to fully implement a Written Information Security Plan with documented controls and procedures, this provides additional incentive to create one and, hopefully, avoid a cybersecurity attack or significant potential regulatory action should a cybersecurity event occur.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Harris Beach Murtha PLLC
