New York Poised to Enact First-of-Its-Kind AI Safety Law: What Businesses Need to Know About the RAISE Act

Fisher Phillips
New York could soon jump into the lead of the national AI regulation race. With broad bipartisan support, state lawmakers passed the groundbreaking Responsible AI Safety and Education Act (RAISE Act) on June 12, aimed squarely at preventing catastrophic harms caused by advanced AI systems. If Governor Hochul signs the bill into law, New York will become the first state to impose enforceable AI safety standards on powerful “frontier models.” Here’s what your business needs to know.

A High-Stakes New Frontier for AI Regulation

Unlike other state-level AI efforts that focus on bias, discrimination, or consumer protection, the RAISE Act (AB 6453) zeroes in on a very specific risk profile: catastrophic harms involving mass casualties or billion-dollar damages.

The law targets the developers of extremely large-scale AI systems – so-called frontier models – with compute costs exceeding $100 million. These are the cutting-edge systems pushing the outer limits of AI capabilities, potentially capable of autonomous actions, advanced biological research, or even self-replication without human oversight.

Key triggers for regulation under the bill:

  • Threshold: Applies to AI models with $100M+ compute cost (or $5M+ for certain “distilled” versions)
  • Scope: Covers any frontier models developed, deployed, or operated in New York
  • Enforcement: New York Attorney General and Homeland Security Division

What Developers Must Do Under the RAISE Act

If enacted, developers will face the following transparency and safety obligations, sweeping in both scope and impact:

Develop a Safety and Security Protocol

  • A detailed plan must explain how the developer will prevent critical harms such as bioweapon development or autonomous criminal conduct.
  • Engage in ongoing testing for misuse, loss of control, or potential self-replication.

Publish Redacted Safety Plans

  • These include publicly disclosed safety protocols (with limited redactions for trade secrets or security purposes).

Report Safety Incidents Within 72 Hours

  • Any serious incident indicating heightened risk must be quickly reported to state regulators.

Complete Annual Review and Updates

  • Businesses must participate in ongoing reassessment of protocols as models evolve.

Absolute Prohibition on Deployment if High Risk Exists

  • Models posing “unreasonable risk of critical harm” cannot be released at all.

Other Key Points

A few other points about the RAISE Act are worth knowing.

Penalties for violations would be substantial

  • Up to $10M for a first violation
  • $30M for subsequent violations
  • There is no private right of action, but state enforcement would most likely be aggressive.

Effective Date

The law would take effect 90 days after the governor signs the bill.

New York Lawmakers Want to Move Ahead of Other States

Lawmakers cited mounting warnings from AI developers, safety researchers, and national security experts about how quickly these systems are advancing. The legislative memo accompanying the bill highlights the following concerns:

  • Real-world tests showing models attempting self-replication and deception (Apollo Research, 2024)
  • Growing risks of biological weapon design assistance
  • Public statements from OpenAI, Anthropic, and others warning that we are rapidly approaching critical risk thresholds
  • Industry voices acknowledging that federal regulation won’t arrive in time

As Assemblymember Alex Bores (D), the bill’s sponsor, put it:

"We don’t let someone open a daycare without a safety plan. Shouldn’t we at least have one for the most powerful technology humanity has ever built?"

Opposition: Innovation vs. Safety Debate

As expected, major tech industry groups strongly oppose the measure, calling it:

  • Overly broad
  • Premature
  • A threat to innovation
  • A costly burden on the U.S. AI economy

They are urging Governor Hochul to veto the bill.

But others argue voluntary guardrails aren’t enough. More than 100 global AI researchers signed a letter urging state-level action, warning of a "race to the bottom" if companies cut corners to compete.

Will Hochul Sign?

Governor Hochul has not yet taken a public position. She has until year-end to sign or veto the bill. Her prior approach to AI regulation suggests she may seek amendments to the bill before signing. Observers expect her to negotiate technical adjustments with lawmakers, and she could push full implementation back until much later in 2026.

How This Intersects With Broader AI Policy Battles

  • Federal Freeze Proposed: Congressional Republicans are pushing for a 10-year federal moratorium on state AI regulations. The House version of the bill would block state laws completely, and the Senate version would prevent states from tapping into a pool of federal funding if they pass state-level AI laws. If either version is enacted, it would directly conflict with New York’s approach.
  • California Comparison: California's governor vetoed a similar bill last year. New York lawmakers modeled their version on California’s proposal but deliberately narrowed its scope, applying only to the most extreme frontier models.

What Employers and Businesses Should Do Now

Even if your business isn’t directly developing frontier models, this law carries important implications for the AI marketplace you rely on:

  • Governance: Make sure you protect your organization by having the right AI policies and risk-management framework (read our AI Governance 101 guide here).
  • Vendors: Ask your AI vendors if their models would fall under “frontier” definitions.
  • Procurement: Build AI safety disclosures into procurement processes as necessary.
  • State-by-State Divergence: Expect an increasingly fractured compliance landscape as states test different approaches.
  • Federal Preemption Risks: Stay alert for federal action that could override or conflict with emerging state laws.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Fisher Phillips
