The convergence of privacy, AI, and cybersecurity: what fintech GCs should prioritize now

A&O Shearman

The intersection of privacy, cybersecurity, and AI is reshaping risk and regulation in fintech. In a recent episode of The Fintech and Blockchain Podcast, our team explored how fast-moving developments—from AI-powered phishing to fragmented privacy laws—are pressuring fintech companies to rethink governance and compliance. Below are key takeaways and pragmatic recommendations for general counsel and risk leaders.

1. A patchwork privacy regime demands strategic choices

Unlike the EU’s GDPR, the U.S. privacy landscape is a patchwork of state and sector-specific laws. Nonpublic personal financial information is primarily governed by the federal Gramm–Leach–Bliley Act (GLBA). In addition, 20 states have enacted comprehensive privacy laws that may also apply in certain circumstances, each with its own requirements for disclosures, consents, and opt-outs, and numerous stand-alone state AI laws and biometric laws impose further obligations.

For fintech firms that are subject to comprehensive state privacy laws, two main strategies have emerged:

  • Uniform strict compliance (e.g., adopting California standards across all users).
  • Tailored compliance based on geography or business value of data.

Either path requires a state-by-state gap assessment—especially for companies using facial recognition or voiceprints, which may trigger obligations under stand-alone state AI laws and biometric laws like Illinois’ Biometric Information Privacy Act (BIPA) or require data protection impact assessments in some jurisdictions.

Action Item: Prioritize a state-by-state legal review and audit your biometric and AI-related data flows.

2. AI use is accelerating—and so is regulation

AI is powering credit decisions, onboarding, and fraud detection. It’s also drawing scrutiny. States are enacting AI-specific laws targeting:

  • Bias in automated decision-making (e.g., in creditworthiness).
  • Lack of human oversight in critical use cases.
  • Opacity in model logic or training data.

The repeal of the 2023 Biden Executive Order on AI, together with the House of Representatives’ passage of the “Big Beautiful Bill” (which, if enacted, purports to prohibit states from enforcing their AI laws), suggests lighter federal oversight for now. But states are moving fast and will likely challenge the legality of the bill’s AI restrictions if it is passed. California and Colorado, among other states, have passed AI acts requiring impact assessments and transparency for high-risk applications.

Action Item: Treat AI governance like a legal function. Establish internal review protocols and test for bias and explainability.

3. Cyber threats are more sophisticated—and AI-assisted

AI is now being used by threat actors, especially to enhance the effectiveness of phishing and social engineering. Emails generated with tools like ChatGPT are nearly indistinguishable from legitimate communications. Future risks include AI that can autonomously scan for vulnerabilities or write malware.

Ransomware continues to dominate the threat landscape in 2025. Established and newer threat actors continue to target companies with both new and familiar tactics. Moreover, recent attacks in the crypto space demonstrate that some threat actors are moving quickly, using strategies honed over years of smaller, similar attacks.

Action Item: Audit your cybersecurity stack—tools alone won’t protect you if not properly configured, governed, and owned internally.

4. Regulators are watching closely, even without uniform laws

While no federal playbook exists, regulators are escalating enforcement:

  • NYDFS leads the charge on cybersecurity, with active enforcement and growing interest in AI and crypto oversight. NYDFS is also closely focused on financial institutions, fintech companies, and data outside the scope of GLBA.
  • The SEC has ramped up breach disclosure and board accountability rules.
  • State AGs and privacy agencies are leveraging broad consumer protection laws (e.g., in Texas and California), often in the wake of cyber incidents.
  • FBI task forces are collaborating with companies on ransomware—but overseas actors remain hard to reach.

Action Item: Align your incident response with regulator expectations. Ensure ransomware, breach, and AI risks are modeled and tested.

Final thoughts

“Your tools are not going to save you. The best prevention is a lawyered approach that combines technology, people, and processes working together.” Whether managing privacy across 20+ states, governing algorithmic bias, or bracing for AI-assisted hacks, fintechs must move beyond checkbox compliance. Legal teams should lead the charge in aligning business, tech, and risk functions.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© A&O Shearman
