The TAKE IT DOWN Act (the Act), enacted on May 19, 2025, is a powerful (and controversial) new tool designed to stop people from sharing “nonconsensual intimate imagery,” or NCII, online.
The Act does two main things: it criminalizes the publication (and threatened publication) of NCII and, as its title suggests, requires certain communications platforms to implement a “notice and takedown” process to remove NCII from their platforms. Few would dispute the merits of a law prohibiting NCII. The controversy arises from the takedown provisions, which have been criticized by speech and privacy advocates as failing to protect against abusive requests and undermining privacy-forward security measures.
Key Takeaways:
- By May 19, 2026, platforms will need to determine whether they are covered by the Act and, if so, establish a notice and takedown process that allows victims to report NCII, removes reported NCII within 48 hours, and uses “reasonable efforts” to remove any other copies of the reported NCII from elsewhere on the platform.
- The Act limits platforms’ liability when they remove reported content but provides no immunity or protection if a platform decides a report is invalid or insufficient, and it offers no mechanism to resolve disputes about such decisions.
- The Act authorizes the Federal Trade Commission (FTC) to impose civil penalties for noncompliance (currently $53,088 per violation but indexed annually for inflation).
Overview
The Act was sparked by a well-meaning legislative desire to crack down on NCII and went through multiple iterations before Congress settled on the current (enacted) version. Continuing Congress’ tradition of developing unwieldy backronyms rather than just naming a law what it’s called, the Act is officially titled the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, or the TAKE IT DOWN Act (S.146). The Act amends Section 223 of the Communications Act of 1934 to explicitly prohibit the intentional publication, or threat of publication, of both authentic and computer-generated NCII. The Act also requires applicable platform providers to remove such images upon request, within 48 hours, and to make reasonable efforts to locate and remove all copies of reported NCII. The law took effect immediately but gives platforms one year to develop and implement the required notice and takedown process.
What Types of Platforms Are Covered?
The Act covers any website, online service, online application, or mobile application that serves the public and either: (i) primarily provides a forum for user-generated content (e.g., messages, videos, images, games, and audio files), or (ii) publishes, curates, hosts, or otherwise makes available NCII in the regular course of its business.
What Types of Content Do the Takedown Provisions Cover?
The removal provisions apply to “intimate visual depictions” of an “identifiable individual” that were published without the individual’s consent. Notably, the Act also covers deepfakes: it makes no distinction between authentic NCII and NCII created through the use of software, machine learning, AI, or any other computer-generated or technological means, to the extent that a reasonable person could believe that the image is an authentic depiction of an identifiable individual.
What Must the Notice and Takedown Process Do?
A covered provider must post on its platform a clear and conspicuous, easy-to-understand notice explaining how to submit a removal request. Upon receiving a valid request from an affected identifiable individual (or an authorized representative), the provider must, within 48 hours, remove the NCII and make “reasonable efforts” to identify and remove any known identical copies of the depiction.
The Act does not define “reasonable efforts,” nor does it contain any limiting principle, so privacy advocates have questioned how far a platform must go to locate and remove all copies. Presumably, if a platform cannot scan content, perhaps because of well-designed security features such as encryption, it would not be “reasonable” to expect the platform to undermine those protections (especially given the FTC’s prior support for encryption and similar security measures), so the Act should not require them to do so.
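For teams scoping the engineering work, the core obligations reduce to a short workflow: take in a request, remove the reported depiction within 48 hours, and make a reasonable effort to find identical copies. The sketch below is purely illustrative; the platform helpers (remove_content, fingerprint, find_identical_copies) are hypothetical, and hash matching is just one plausible way a platform that can scan content might approach “identical copies,” not something the Act prescribes.

```python
from datetime import datetime, timedelta, timezone

# Illustrative sketch only: the helper methods on `platform` are hypothetical,
# and hash matching is one plausible reading of "identical copies," not a
# statutory requirement.

REMOVAL_DEADLINE = timedelta(hours=48)

def handle_takedown_request(request, platform):
    received_at = datetime.now(timezone.utc)
    deadline = received_at + REMOVAL_DEADLINE  # 48 hours from receipt

    # Compute a fingerprint (e.g., a cryptographic or perceptual hash) of the
    # reported item before removal so identical copies can be matched afterward.
    fingerprint = platform.fingerprint(request.content_id)

    # Step 1: remove the specific depiction identified in the request.
    platform.remove_content(request.content_id)

    # Step 2: "reasonable efforts" to identify and remove known identical copies.
    for copy_id in platform.find_identical_copies(fingerprint):
        platform.remove_content(copy_id)

    return {"received_at": received_at, "deadline": deadline}
```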
What Constitutes a Valid Request?
For a removal request to be valid, it must (1) be in writing, (2) be submitted by an affected individual (or an authorized representative), (3) notify the provider that NCII was published on the platform without consent (including any relevant information for the covered provider to determine whether the image was published without the consent of the individual), (4) tell the provider how to identify or locate the NCII, (5) include the affected individual’s contact details, and (6) be signed by the affected individual (or an authorized representative).
Each request must contain “information reasonably sufficient for the covered platform to locate” the applicable NCII. But the Act does not define what sort of information would be “reasonably sufficient” for a platform to locate the applicable NCII at all, let alone within 48 hours. Different platforms have different technical capabilities to identify content available on the platform; what is “sufficient information” for one platform may not work for another. And sometimes, those capabilities vary within a platform, depending on the specific service at issue.
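In practice, the six statutory elements translate naturally into an intake checklist. The sketch below is a hypothetical validation routine: the field names are assumptions about how a platform might model a request, and the locator check does nothing to resolve the open question of what information is “reasonably sufficient.”

```python
from dataclasses import dataclass

@dataclass
class RemovalRequest:
    # Field names are illustrative; the Act lists the elements, not a data model.
    in_writing: bool
    from_individual_or_authorized_rep: bool
    nonconsent_statement: str    # element (3): published without consent
    locator_info: list[str]      # element (4): URLs, post IDs, etc.
    contact_details: str         # element (5)
    signature: str               # element (6)

def is_facially_valid(req: RemovalRequest) -> tuple[bool, list[str]]:
    """Check the six statutory elements and return (valid, missing_elements)."""
    missing = []
    if not req.in_writing:
        missing.append("request must be in writing")
    if not req.from_individual_or_authorized_rep:
        missing.append("must come from the affected individual or an authorized representative")
    if not req.nonconsent_statement:
        missing.append("must state that the depiction was published without consent")
    if not req.locator_info:
        missing.append("must include information sufficient to locate the depiction")
    if not req.contact_details:
        missing.append("must include contact information")
    if not req.signature:
        missing.append("must be signed")
    return (not missing, missing)
```

A checklist like this screens only for facial completeness; whether the identified content is actually NCII still requires human review.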
How Is the Act Enforced?
The FTC will enforce compliance with the notice and takedown obligations by treating a violation the same way it treats a violation of an FTC rule defining an unfair or deceptive act or practice. This means that a platform that does not comply with the Act risks civil penalties. Those penalties are set by regulation and indexed annually for inflation; as of 2025, they run up to $53,088 per violation. The FTC can also seek consumer redress and injunctive relief, such as requiring the provider to develop a robust takedown process or prohibiting acts or practices that stand in the way of an effective takedown process.
Interestingly, in cases where the FTC investigates a platform for rejecting a notice, a different federal law may complicate the platform’s ability to defend itself. Suppose John asks an online platform to remove a post that Jane made in a private forum, criticizing John. The post is simply text; there are no images (let alone nonconsensual intimate ones), so the platform correctly determines the notice is invalid and leaves the post up. Undeterred, John complains to the FTC and falsely claims that the platform refused to remove NCII, prompting an FTC inquiry into the platform’s ostensible failure to comply with the Act (let’s assume John has sufficient connections to trigger the world’s fastest FTC inquiry).
In theory, the platform would defend itself by producing Jane’s post and demonstrating that it is not NCII. But could that trigger a lawsuit from Jane? The Stored Communications Act (SCA), which applies to many types of online platforms, prohibits a covered provider from disclosing the contents of communications to anyone, and from disclosing any customer information at all to a governmental entity like the FTC, absent a statutory exception (such as user consent or an emergency) or compulsory legal process issued under a different part of the SCA (which generally requires a search warrant for content and is therefore of little use to the FTC). So platforms that review removal requests closely may have some tricky choices to make down the line, whereas platforms that simply remove all content targeted by a notice have immunity (further reinforcing the incentive structures described above).
What About Disputes or Mistakes?
The Act provides a safe harbor for platforms that remove content in response to a valid notice. But it does not provide a similar safe harbor if a platform determines, in good faith, that a notice is invalid, insufficient, or submitted in bad faith. Nor does the Act provide any dispute resolution process that would allow a platform to seek review or guidance concerning a questionable notice. Intentionally or not, the structure of the Act has the potential to incentivize platforms to err on the side of removal even in questionable cases.
Supporters of the Act take the position that the law should err on the side of removal given the consequences of failing to remove. Free speech advocates, however, have raised the concern that the imbalance within the safe harbor could encourage abusive or pretextual notices seeking to remove content that has nothing to do with NCII at all. (For example, one could imagine supporters of a hypothetical political candidate attempting to use the Act to remove unflattering images of that candidate.) And as a practical matter, that incentive only grows when platforms receive more requests than they can reasonably review.
If this sounds slightly familiar, that’s because it is. Section 512 of the Digital Millennium Copyright Act (DMCA) establishes a notice and takedown regime that also incentivizes removal, and it has been criticized for allowing bad faith actors to abuse the process to remove speech they don’t like from the internet. But the DMCA differs from the TAKE IT DOWN Act in that it also allows the target of a notice (i.e., the person who posted the offending content) to object, allows the platform to restore content upon objection, and establishes a dispute resolution mechanism, all of which are missing from the Act.
So What’s Next?
For now, platforms may want to start planning (and waiting). With nearly a year until the notice and takedown requirements take effect, affected platforms have time to develop an implementation plan. Given the wrinkles above, those plans may differ from platform to platform depending on priorities, but they could include the following:
- Drafting a “clear and conspicuous” notice for people to submit NCII removal requests and deciding where on the platform that notice should live.
- Developing a channel to receive removal requests, including a channel for receiving requests from people who don’t use the platform (without requiring them to do so).
- Settling on the requirements for a “valid” removal request and perhaps including those requirements in the clear and conspicuous notice described above.
- Developing internal guidance for handling removal requests that takes into account the platform’s approach to balancing safety, privacy, and free speech: this guidance could include procedures for evaluating requests; determining whether the content at issue is, in fact, NCII; locating identical copies of NCII elsewhere on the platform; and responding to requests.
- Deciding whether and how the platform will object to insufficient or invalid removal requests, including by developing a proposal to ensure that the platform can defend its position adequately and consistently with other federal laws; a sketch of one record-keeping approach follows this list.
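One way to support the last item above is to keep a structured, content-free record of each request and the basis for the platform’s decision, so that a rejection can later be explained to a regulator without necessarily disclosing the underlying communications. The format below is a purely hypothetical sketch; nothing in the Act or in FTC guidance requires it.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
from enum import Enum
import json

class Decision(Enum):
    REMOVED = "removed"
    REJECTED_NOT_NCII = "rejected: content is not an intimate visual depiction"
    REJECTED_INCOMPLETE = "rejected: request missing statutory elements"

@dataclass
class RequestRecord:
    # Hypothetical internal log entry; no field here is mandated by the Act.
    request_id: str
    received_at: str    # ISO 8601 timestamp
    decision: Decision
    rationale: str      # e.g., which statutory element was missing
    reviewed_by: str

def log_decision(record: RequestRecord, path: str = "takedown_log.jsonl") -> None:
    """Append the decision to a simple JSON-lines audit log (no content stored)."""
    entry = asdict(record)
    entry["decision"] = record.decision.value
    entry["logged_at"] = datetime.now(timezone.utc).isoformat()
    with open(path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```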