On May 19, 2025, President Trump signed into law the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, better known by its backronym, the TAKE IT DOWN Act (the “Act”), to combat deepfake revenge porn. The bipartisan legislation, introduced by Senators Amy Klobuchar (D-MN) and Ted Cruz (R-TX), is the first federal law targeting AI-powered online abuse. The Act received overwhelming bipartisan support, passing the Senate unanimously and the House by a vote of 409 to 2. The Act criminalizes (i) the intentional online publication of nonconsensual intimate visual depictions of an identifiable person, whether authentic or AI-generated, and (ii) threats to publish authentic or digitally falsified and manipulated images or videos of a person. The Act also imposes civil obligations on websites and online platforms to remove such content within 48 hours of notice from a victim. Failure to comply with takedown notices will constitute an unfair or deceptive trade practice under the Federal Trade Commission Act. Thus, online platforms, even those operated by nonprofit organizations, risk enforcement actions by the Federal Trade Commission for noncompliance.
Background
The rise of generative artificial intelligence has drastically lowered the barriers to creating highly realistic digital content, including deepfake images and videos. What once required sophisticated technical expertise and powerful computing resources can now be accomplished with freely available tools and a few clicks. Although these developments have brought significant benefits across industries, they have also fueled a sharp rise in the creation and publication of AI-generated nude and sexually explicit images used to harass and cyberbully victims. Deepfake nonconsensual intimate imagery is typically created by superimposing a person’s image and likeness onto sexually explicit content without his or her consent.
Implications of the New Law
The Act makes it a crime to intentionally publish, in interstate or foreign commerce, a “nonconsensual intimate visual depiction” of an identifiable individual, defined as a visual image or video that appears to show an identifiable person’s naked genitalia, pubic area, or anus, or an identifiable person engaged in sexually explicit activity. For offenses involving the publication of nonconsensual intimate visual depictions of adults (whether authentic or AI-generated), the Act imposes criminal penalties of fines and up to two years’ imprisonment.[1] For offenses involving minors, the Act imposes fines and increases the maximum sentence of imprisonment to three years.[2] The Act also prohibits intentionally threatening to publish authentic or AI-generated nonconsensual intimate visual depictions “for the purpose of intimidation, coercion, extortion, or to create mental distress,” with maximum sentences of 18 months’ imprisonment for threats involving adults and 30 months for threats involving minors.
The Act also imposes obligations on certain “covered platforms.” Under the Act, a “covered platform” is broadly defined to include any website, online service, or online or mobile application that “serves the public” and “primarily provides a forum for user-generated content, including messages, videos, images, games, and audio files.”[3] The Act explicitly excludes email and broadband internet access service providers from the definition of covered platforms.
The Act requires online and social media platforms to establish a process through which an individual can provide notice that “an intimate visual depiction” of that individual was published on the platform without the individual’s consent and can request its removal.[4] Covered platforms have one year after the Act’s enactment, until May 19, 2026, to implement this process and to provide clear and conspicuous notice of it on their platforms.[5]
Lastly, the most controversial section of the Act mandates that online platforms promptly remove nonconsensual intimate images and videos or risk substantial penalties. Upon receiving a takedown request from a victim who reports the nonconsensual publication of an intimate visual depiction, a covered platform must remove the depiction as soon as possible, and no later than 48 hours after receiving the request, and must make reasonable efforts to identify and remove any known identical copies.[6] Failure to comply swiftly with the Act’s takedown provisions will constitute an unfair or deceptive practice in violation of the Federal Trade Commission Act. Accordingly, the Federal Trade Commission may bring enforcement actions and seek penalties against any online or social media platform, including those operated by nonprofit organizations, that violates the Act’s notice-and-removal procedures. This provision has been criticized as overbroad and as compelling platforms to censor speech in violation of the First Amendment, but Congress ultimately concluded that the harms caused by the proliferation of sexual deepfakes made federal legislation, even if imperfect, necessary to protect victims, especially minors, from further traumatization and harm.
Conclusion
Clients operating websites or other online applications that allow the publication of user-generated content should be aware of this Act and begin preparing a process through which individuals can provide notice of improper publication and request removal. The Act requires such entities to establish these processes by May 19, 2026.
[1] TAKE IT DOWN Act § 2(h)(4)(A).
[2] Id. § 2(h)(4)(B).
[3] Id. § 4(3)(A).
[4] Id. § 3(a)(1)(A).
[5] Id.; id. § 3(a)(2).
[6] Id. § 3(a)(3).