[co-author: King Xia]
On April 16, 2025, Washington state added a new law to the growing patchwork of state, federal and international laws aimed at curbing the malicious use of artificial intelligence (AI)-generated deepfakes. The law is the first of its kind in the United States to broadly impose criminal liability for all malicious deepfakes, not just sexual or political ones.
SHB 1205
Effective July 27, 2025, SHB 1205 amends Washington’s second-degree criminal impersonation statute to prohibit knowingly distributing a “forged digital likeness” of another person as genuine visual or audio content with the intent to defraud, harass, threaten or intimidate another person, or for any other unlawful purpose.
Under the law, a forged digital likeness is a visual representation or audio recording of an actual and identifiable individual that (a) has been digitally created or modified to be indistinguishable from a genuine visual representation or audio recording of the individual; (b) misrepresents the appearance, speech or conduct of the individual; and (c) is likely to deceive a reasonable person into believing that the representation or recording is genuine. The law applies only where the defendant knew, or reasonably should have known, that the forged digital likeness was not genuine.
The law includes an exception for matters of cultural, historical, political, religious, educational, newsworthy or public interest, including material protected by the Washington state and federal constitutions.
The law also expressly exempts interactive computer services, mobile telecommunications service providers, and telecommunication network and broadband providers from liability. This exemption operates similarly to Section 230 immunity, shielding platforms from liability for third-party content. While it is not clear how SHB 1205 would reach these entities in the first place, the express exemption gives these platforms and services peace of mind.
Other Washington Deepfake Laws
SHB 1205 supplements two existing Washington state laws regulating deepfakes. In 2023, Washington adopted SB 5152, requiring deepfaked political ads to clearly disclose the presence of manipulated content and imposing civil liability for undisclosed deepfakes. And in 2024, Washington adopted HB 1999, making it a gross misdemeanor to share nonconsensual sexual deepfakes.
Deepfake Legal Landscape
Washington’s deepfake laws also fit within a larger and evolving landscape of state, federal and global regulation. On May 19, 2025, President Donald Trump signed the TAKE IT DOWN Act (TIDA) into law. TIDA criminalizes the distribution of both real and deepfake revenge pornography and requires platforms to remove offending content within 48 hours of notice.
Like Washington, many states have adopted laws regulating political and sexual deepfakes. In 2019, California and Virginia adopted the nation’s first laws on nonconsensual sexual deepfakes, and a majority of states have since adopted similar prohibitions. Likewise, a majority of states have adopted laws requiring disclosures or imposing other restrictions on political deepfakes around election periods. Several of these laws have been challenged on First Amendment grounds, including California’s AB 2839, which regulated political deepfakes. On October 2, 2024, the U.S. District Court for the Eastern District of California preliminarily enjoined AB 2839, holding that the law was a content-based restriction subject to strict scrutiny and was likely neither narrowly tailored nor the least restrictive means of serving the state’s interests.
Although most current deepfake regulations focus on political and sexual content, Tennessee has adopted a more comprehensive approach to deepfake legislation, similar to Washington’s SHB 1205 but with important nuances. Tennessee’s Ensuring Likeness Voice and Image Security Act (ELVIS Act) expands the state’s existing statutory right of publicity to create civil liability for unauthorized voice deepfakes. The ELVIS Act is at once broader and narrower than SHB 1205: because it imposes civil rather than criminal liability, it sweeps in actors who would not meet SHB 1205’s criminal intent requirements, including entities that provide the technology used to produce unauthorized deepfakes, but it reaches only voice deepfakes. SHB 1205, by contrast, applies to a wide swath of media types, expressly exempts platforms from the statute and imposes criminal penalties, including jail time, for violations.
The U.S. landscape may change dramatically if Congress enacts a pending 10-year federal moratorium on state AI regulation. The moratorium, included as part of H.R. 1, the One Big Beautiful Bill Act, recently passed the U.S. House of Representatives. If adopted, it would prohibit states from enforcing any law or regulation governing AI models, AI systems or automated decision systems for 10 years. It is not clear how the moratorium would apply to the nation’s deepfake laws, which are not exclusively framed in terms of AI. A recent update to the moratorium would, however, exempt state laws that impose criminal penalties, like SHB 1205; laws imposing only civil liability, like the ELVIS Act, may be put on hold. We expect further debate on this provision as the Senate considers the bill.
Some international jurisdictions have taken stronger approaches to deepfake regulation. The EU AI Act prohibits using AI in manipulative ways; requires that AI systems interacting with natural persons disclose that the person is interacting with AI; and requires that AI-generated content, including deepfakes, be clearly labeled. China has adopted similar labeling requirements for AI-generated content and further requires AI service providers to verify users’ real identities to help prevent misuse. China also requires explicit consent for deepfakes and makes service providers responsible for monitoring content to prevent the dissemination of harmful or misleading information.
These varying approaches reflect a growing awareness of AI and its capabilities, as well as a desire to regulate harms far older than deepfakes that AI’s newfound widespread availability has supercharged.