DFS Issues New AI-Focused Guidance for Cybersecurity Regulation Compliance
On October 16, 2024, the New York Department of Financial Services (“DFS”) issued guidance addressing how institutions can meet their existing obligations under 23 NYCRR 500 (“Part 500”) given new and heightened cybersecurity risks arising from artificial intelligence (“AI”). DFS recommends a number of steps for companies to prepare for such risks, including:
Update employee training to expand awareness of AI-powered social engineering;
Leverage senior leadership to prioritize and support strategic awareness of risks and mitigations specific to AI;
Design access controls to better withstand deepfakes and other AI-enhanced attacks, especially where those controls protect biometric and other nonpublic information targeted by threat actors to generate further deepfakes; and
Design periodic risk assessments of the company’s exposure to AI-related threats, including by conducting due diligence on third parties’ and vendors’ use of AI, with reference to existing Part 500 obligations.
Regulatory Context: DFS’s cybersecurity regulation, Part 500, was expanded last year, as covered in our previous blog post. DFS continues to bring Part 500 enforcement actions, including an $8 million settlement with Genesis Global Trading, Inc. earlier this year. The newly issued AI guidance aligns with concerns flagged by federal regulators and confirms that DFS expects regulated entities, in meeting their Part 500 obligations, to take steps to address new and evolving threats.
The guidance identifies four key areas of AI-related risk for regulated entities to note: two involving threats from bad actors using AI, and two arising from common uses of AI by regulated entities or their service providers.
AI-Enabled Social Engineering: DFS notes that AI allows threat actors to conduct social engineering with greater sophistication and efficiency, convincing employees to share credentials or wire funds directly to fraudulent accounts. In response, DFS suggests that covered entities include training on the risks of AI-powered social engineering in the annual cybersecurity training they are already required to provide to all personnel under Part 500.14. DFS also emphasizes the efficacy of Multi-Factor Authentication (“MFA”) as a robust access control and defensive measure, which Part 500.12 continues to require under most circumstances.
AI-Enhanced Cybersecurity Attacks: AI has also increased the speed, scope, and severity of cyberattacks. Its ability to process vast amounts of data far more quickly than humans enables threat actors to identify vulnerabilities faster, bypass security controls, and escalate the scale and potency of attacks. Because AI also allows actors who lack technical skills to launch attacks, DFS projects that the number of threat actors and attacks may increase. DFS calls on companies’ senior leadership to maintain a sufficient understanding of cybersecurity matters as AI evolves, consistent with existing Part 500.4 obligations, so that entities can prioritize cybersecurity and proactively establish, maintain, and test effective incident response plans under ongoing Part 500.16 requirements. Part 500.4 continues to require regular review of management reports, and DFS suggests engaging advisors for knowledge building, as means by which companies’ senior governing bodies can exercise oversight of cybersecurity risk management.
Exposure or Theft of NPI: While AI has productively enabled companies to collect, process, and maintain large amounts of data, including nonpublic information (“NPI”), it has also increased the risk of that data being exposed or stolen. DFS flags that “some AI requires the storage of biometric data,” and misuse of such data could compromise MFA and enable the generation of hyper-realistic videos and images (“deepfakes”). Accordingly, DFS emphasizes its standing recommendation of robust access controls as a defensive security measure, using forms of authentication that deepfakes cannot compromise (e.g., physical security keys) or that leverage multiple biometric modalities at the same time. Users’ access privileges should also be narrowly tailored, among other things, to limit the NPI a threat actor could access in the event of an MFA failure, as detailed in Part 500.7. Monitoring required under Part 500.14 and effective data management under Part 500.13 offer further mitigation of NPI-related risks.
Increased Vulnerabilities Due to Third-Party Reliance: DFS emphasizes that the use of AI frequently involves working with vendors and Third-Party Service Providers (“TPSPs”) that develop and maintain AI-powered tools, applications, and the data those tools collect. Each link in a supply chain increases the potential vulnerability of every entity in the network, as a single cybersecurity incident can spread and compromise multiple entities’ NPI. To address these risks, DFS recommends designing the periodic risk assessments already required under Part 500.9 to cover not just an entity’s own use of AI but also the AI technologies used by its vendors and TPSPs. DFS also reiterates its recommendation of contractual protections and due diligence before using any TPSP that will access NPI and/or Information Systems. As Part 500.11 already details, TPSPs themselves should meet minimum standards of controls, encryption, and other protections, and, if a TPSP uses AI, additional representations and warranties are recommended.
Conclusion: The guidance highlights evolving AI-related risks and suggests steps companies can take under their existing Part 500 obligations to address them.
DFS continues to pay close attention to the increasing use of AI technologies, and AI may represent an enforcement priority for DFS moving forward. Companies would do well to consider evolving AI-related challenges as they continue to implement and maintain robust cybersecurity programs.