Tasked with Troubling Content: AI Model Training and Workplace Implications

The discussion of Artificial Intelligence (“AI”) in the workplace typically focuses on whether the AI tool or model has a discriminatory impact.

This means examining whether the AI output creates an unlawful disparate impact against individuals belonging to a protected category.

However, that discussion rarely centers on the types of training data used and whether that data could have a harmful effect on the workers tasked with training the AI model.

Background


To train an AI model effectively, the model must first be exposed to the entire scope of data inputs, both good and bad. For the AI model to recognize traumatic and harmful content and distinguish it from beneficial and safe content, humans are often required to identify and label the traumatic and harmful content, over and over, until the model learns it and can filter it out from good and safe content. This coding work is not only tedious for the human coders; it can also pose a risk of psychological harm, potentially creating an abusive and unsafe work environment.
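To make the labeling-and-filtering loop described above more concrete, the simplified sketch below shows how human-assigned labels become the training signal that lets a model separate harmful from safe content. It is an illustrative example only, assuming a generic Python machine-learning toolkit (scikit-learn); the sample data, labels, and the filter_safe function are hypothetical and do not describe any particular company's pipeline.

    # Hypothetical, simplified sketch of the human-in-the-loop labeling step described above.
    # Real data-labeling pipelines are far larger; the examples and categories are illustrative only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression

    # Human "taskers" repeatedly assign labels to raw content (1 = harmful, 0 = safe).
    human_labeled_examples = [
        ("routine customer support transcript", 0),
        ("ordinary product review", 0),
        ("graphic description of violence", 1),
        ("threatening or abusive message", 1),
    ]
    texts = [text for text, _ in human_labeled_examples]
    labels = [label for _, label in human_labeled_examples]

    # The model learns the harmful/safe boundary only because humans supplied the labels.
    vectorizer = TfidfVectorizer()
    classifier = LogisticRegression().fit(vectorizer.fit_transform(texts), labels)

    def filter_safe(candidates):
        """Keep only content the trained classifier predicts is safe (label 0)."""
        predictions = classifier.predict(vectorizer.transform(candidates))
        return [text for text, label in zip(candidates, predictions) if label == 0]

    print(filter_safe(["another ordinary review", "another graphic description of violence"]))

The human labeling step is the entire source of the model's judgment in a sketch like this: workers must read and categorize the harmful material before any automated filter exists.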

Schuster v. Scale AI


In Schuster v. Scale AI, the Northern District of California is currently evaluating the psychological harm and potentially hostile working conditions that workers may experience when coding violent and toxic content for an AI model. In this case, a group of independent AI input contractors, known as “taskers,” filed a complaint alleging classwide claims of workplace psychological injury (e.g., depression, anxiety, and PTSD); “moral injury,” the emotional, behavioral, and relational problems that can develop when someone acts in ways that go against deeply held values; and “institutional betrayal,” the purported betrayal that can arise when an employer fails to prevent, or respond appropriately to, highly distressing workplace circumstances. The plaintiffs brought causes of action for negligence and violations of California’s Unfair Competition Law.

The plaintiffs in Schuster allege that defendants, operators of generative AI services, required them to input and then monitor psychologically harmful information in AI models. According to the complaint, this harmful information pertained to suicidal ideation, predation, child sexual assault, violence, and other highly disturbing topics. In some instances, the plaintiffs were purportedly required to engage in hours-long traumatic conversations with the AI, which demanded complete mental focus as the AI posed multiple follow-up questions about disturbing scenarios. Plaintiffs contend that, because of this repeated exposure to traumatic content, they developed PTSD, depression, anxiety, and other impairments to mental functioning. Plaintiffs further claim that they were not provided sufficient warning, support, or workplace safeguards.

Lessons and Takeaways for Employers


Schuster serves as a reminder to employers to exercise caution and diligence when engaging with AI and other new technologies. Whether an employer is training an internal AI model, deploying AI to assist with employment decisions, or contracting with a company that develops AI tools, this recent litigation underscores how workers’ interactions with AI may expose them to conditions that could be construed as unlawful. Employers should continue to monitor and audit workers’ interactions with AI to ensure that AI use and AI training do not create a hostile or abusive environment or otherwise violate workplace-related laws. Further, employers may consider implementing effective technological guardrails, as well as providing support and notice to employees interfacing with certain AI tools.

If the plaintiffs are successful, Schuster could drive changes across the AI data-labeling industry, such as more comprehensive disclosures, expanded mental health resources, and stronger legal protections for employees and independent contractors involved in AI training.

Employers are reminded to:

  • Implement robust oversight mechanisms for workers tasked with coding potentially harmful content for AI models.
  • Notify employees of the types of data that they are expected to code in training the AI model.
  • Offer the option to opt out of being exposed to disturbing content.
  • Develop personalized and interactive training programs that address how workers should approach traumatic content in the workplace.
  • Provide mental and emotional health resources, including preventative measures, medical monitoring, and treatment.
  • Develop a procedure to investigate employee complaints concerning exposure to harmful content.
  • Recognize the variety of legal claims (e.g., moral injury and institutional betrayal) that workers may assert as a result of training AI models.


DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Epstein Becker & Green

Written by:

Epstein Becker & Green
