On October 16, 2024, the New York Department of Financial Services (“NYDFS”) issued guidance (the “Guidance”) on cybersecurity risks arising from advancements in artificial intelligence (“AI”) and strategies to combat them. While AI has, in many cases, positively impacted businesses, it has also opened the door to myriad opportunities for cybercriminals to infiltrate secure information systems containing Nonpublic Information (“NPI”). The Guidance does not impose any new requirements beyond the obligations in NYDFS’s cybersecurity regulation codified at 23 NYCRR Part 500 (the “Cybersecurity Regulation”); rather, it explains how Covered Entities[1] should use the framework set forth in the Cybersecurity Regulation to assess and mitigate cyber risks associated with AI.
What You Need to Know:
- New York’s Department of Financial Services recently issued guidance relating to cybersecurity risks that arise with the use of artificial intelligence.
- While the Guidance does not impose any new requirements, it focuses on four AI-related risks, including the use of AI to manipulate individuals or gain unauthorized access to nonpublic information.
- Companies designing and using AI should pay particular attention to the potential cybersecurity vulnerabilities posed by the use of AI systems within their organizations, by external bad actors, and by contract partners who have access to their data.
The Guidance focuses on four main risks related to the use of AI:
- AI-Enabled Social Engineering is one of the most significant threats to Covered Entities because AI can be used to target individuals and lure or convince them to disclose NPI or take unauthorized action, such as wiring funds to fraudulent accounts;
- AI-Enhanced Cybersecurity Attacks allow threat actors to accelerate and scale cyberattacks, given AI’s ability to quickly scan and analyze voluminous amounts of information and identify security vulnerabilities. These AI technologies also give inexperienced threat actors a tool to launch calculated attacks, increasing both the frequency and the severity of attacks;
- Exposure or Theft of Vast Amounts of NPI concerns the collection and storage of large volumes of NPI, including biometric data (e.g., facial and fingerprint scans), which places a larger target on data collection systems. Threat actors can use stolen biometric data to impersonate authorized users, bypass Multi-Factor Authentication (“MFA”), gain access to NPI, and fuel AI-Enabled Social Engineering attacks targeting others; and
- Increased Vulnerabilities Due to Third-Party, Vendor, or Other Supply Chain Dependencies extend concerns beyond a Covered Entity’s internal cybersecurity measures: security vulnerabilities anywhere in the supply chain can be exploited, potentially exposing the Covered Entity’s NPI and opening the door to broader cyberattacks across the organization’s network and chain of commerce.
Covered Entities should address AI-related risks in their own design, development, and use of AI; in AI technologies used by third-party service providers and vendors who have access to their data; and in vulnerabilities stemming from AI applications, especially public platforms such as ChatGPT. As part of the cybersecurity program required under the Cybersecurity Regulation, and in the course of conducting any required risk assessment, Covered Entities should assess whether AI-related cyber risks warrant updates to their cybersecurity, privacy, and data governance policies, including incident response and business continuity plans. It is also important to maintain strong contracts with third-party service providers and vendors that address unauthorized access to NPI, including duties to cooperate and broad indemnification provisions.
Pursuant to the Cybersecurity Regulation, a Covered Entity’s cybersecurity policies should require access controls, such as MFA, so that users must properly authenticate their identities, along with protective measures such as encryption. Internal training and awareness also remain key parts of a robust cybersecurity program; employee training should cover effective data management practices and guidelines for monitoring new security vulnerabilities that may arise from the activity of authorized users. Beginning November 1, 2025, the Cybersecurity Regulation will require Covered Entities to maintain and update data inventories, which are crucial for assessing potential risks and ensuring compliance with data protection regulations.
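For illustration only, the sketch below shows one way an organization might represent a data-inventory record and flag systems holding NPI that lack the access controls discussed above. The Cybersecurity Regulation does not prescribe any particular format for a data inventory, and every field name and value here is a hypothetical example, not a regulatory requirement.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the Cybersecurity Regulation does not prescribe
# a data-inventory format. All field names below are hypothetical examples
# of attributes an entity might track for each system that holds NPI.
@dataclass
class DataInventoryRecord:
    system_name: str                 # e.g., "customer-onboarding-db"
    business_owner: str              # person accountable for the system
    npi_categories: list[str] = field(default_factory=list)   # e.g., ["SSN", "biometric"]
    encrypted_at_rest: bool = False  # encryption control in place?
    mfa_required: bool = False       # MFA enforced for access?
    third_party_access: list[str] = field(default_factory=list)  # vendors with access

def records_needing_review(inventory: list[DataInventoryRecord]) -> list[DataInventoryRecord]:
    """Flag NPI-holding systems lacking encryption or MFA, or exposed to vendors."""
    return [
        r for r in inventory
        if r.npi_categories
        and (not r.encrypted_at_rest or not r.mfa_required or r.third_party_access)
    ]

if __name__ == "__main__":
    inventory = [
        DataInventoryRecord(
            "customer-onboarding-db", "J. Smith",
            npi_categories=["SSN", "biometric"],
            encrypted_at_rest=True, mfa_required=False,
            third_party_access=["vendor-analytics"],
        ),
    ]
    for r in records_needing_review(inventory):
        print(f"Review access controls for: {r.system_name}")
```

A structured inventory of this kind, however it is maintained, supports the risk assessments discussed above by making it straightforward to identify which systems hold NPI, which controls protect them, and which third parties can reach them.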
AI technology continues to be adopted both within organizations and by threat actors. The accessibility and rapid evolution of AI tools that can be used to exploit cybersecurity vulnerabilities make it difficult to keep pace with the challenges this technology poses. Covered Entities need to be proactive in assessing the risks presented by the use of AI, both internally and externally, and in developing the policies, procedures, and mitigation strategies outlined in the Guidance to protect their information systems and NPI and to avoid severe disruption to their business.
For more information on 23 NYCRR Part 500 and contracting for AI-related services, see our prior alerts, Proposed Amendments to New York’s Cybersecurity Regulations and What Non-IT Lawyers Need to Know About IT Contracts & Contracting for AI-Related Services.
[1] Covered entity is defined in 23 NYCRR § 500.1(e) as “any person operating under or required to operate under a license, registration, charter, certificate, permit, accreditation or similar authorization under the Banking Law, the Insurance Law or the Financial Services Law, regardless of whether the covered entity is also regulated by other government agencies.”