[co-author: Gina Nicotera]
With the prevalence of generative artificial intelligence (“AI”) on the rise, the potential for misuse, including in the workplace, is ever present. Against this backdrop, the recently enacted federal TAKE IT DOWN Act (the “Act”) prohibits the distribution of nonconsensual intimate images, or “revenge porn,” including AI-generated and digitally altered content known as “deepfakes.” The question for employers is not only what the Act means for them, but also what affirmative steps they can take to preserve a healthy workplace culture and ensure responsible use of AI.
The Act makes it a federal crime to knowingly publish or threaten to publish intimate images of an identifiable person without the individual’s consent. Further, covered online platforms must establish a process that allows an identifiable individual or their authorized representative to seek prompt removal of a published nonconsensual intimate image. Once notified, a covered platform must remove reported content within 48 hours and make “reasonable efforts” to remove any copies of the intimate image and prevent further dissemination. The Act does not specify what actions constitute “reasonable efforts,” nor does it mandate specific tools for verifying claims and removing content, leaving the scope of the removal obligations ambiguous. The Federal Trade Commission (the “FTC”) enforces the notice-and-removal requirements and may assess civil penalties for noncompliance. Although the criminal provisions of the Act are effective immediately, covered online platforms have until May 2026 to implement compliant removal processes.
- Potential Workplace Impacts
Employers who fail to respond quickly and effectively to removal requests may face additional liability, including under Title VII of the Civil Rights Act of 1964 and related state discrimination statutes, for sexual harassment and/or the creation of a hostile work environment. Although the Act is still in its infancy, a recent California appellate court decision, Carranza v. City of Los Angeles, underscores the potential workplace implications of the Act. Carranza, a captain in the Los Angeles Police Department (“LAPD”), learned that a photo of a topless woman (rumored to be of her, but actually of someone else) was being circulated among her fellow LAPD officers. Carranza immediately reported the distribution to the Chief of Police and asked that the LAPD investigate and order personnel to cease distributing the photo. Despite Carranza’s repeated requests, the LAPD did not order its officers to stop sharing the photo, nor did it discipline anyone involved in the distribution of the photo. The Court of Appeal upheld a jury award of $4 million to Carranza. Although the Act was not specifically at issue in Carranza, the decision serves as a reminder to employers that even if they are not a covered online platform, their failure to take prompt remedial action could pose substantial risk.
- Additional Considerations for Employers
To promote a healthy workplace free from online harassment and to mitigate risk, employers should review and update their existing social media, AI acceptable use, and anti-harassment policies to specifically address deepfakes. Among other steps, employers should:
- Establish multiple employee reporting channels, including to a supervisor, Human Resources, and/or anonymously via email or hotline.
- Ensure prompt, thorough investigation protocols are in place to address and resolve such complaints.
- Avoid hasty decisions that could be perceived as retaliation against, or punishment of, the reporting employee.
- Implement appropriate disciplinary policies for employees who distribute or use inappropriate AI content.
- Conduct regular trainings on these policies, particularly for managers, supervisors, Human Resources professionals, and other key leaders who may receive reports about published nonconsensual intimate images and deepfakes.
Employers should also be aware of state-level protections that may impose additional mandates. Many states, including California, Illinois, Texas, Massachusetts, and New York, have already passed legislation regulating the distribution of sexually explicit material, including deepfakes.