FTC Evaluating Deceptive Artificial Intelligence Claims

Holland & Knight LLP

Highlights

  • The Federal Trade Commission (FTC) has shown a growing interest in scrutinizing deceptive claims related to artificial intelligence (AI), an interest that stems from the agency's core mission to protect consumers and ensure fair competition.
  • This Holland & Knight alert provides a broad overview of the FTC's evaluation of AI-related deception.

The Federal Trade Commission (FTC) has shown a growing interest in scrutinizing deceptive claims related to artificial intelligence (AI). This interest stems from the agency's core mission to protect consumers and ensure fair competition. This Holland & Knight alert provides a broad overview of the FTC's evaluation of AI-related deception.

Context: Why the FTC Is Involved

The FTC's authority under Section 5 of the FTC Act gives it broad power to police unfair or deceptive acts or practices, including false advertising, misleading marketing claims and unfair business conduct. As AI technologies – especially generative AI – have become more prominent in consumer products and business services, the FTC sees increased potential for deceptive or overhyped claims that can mislead consumers or distort markets.

Enforcement Focus Areas

The FTC has signaled that it is especially concerned with:

  • exaggerated performance claims about AI-powered products
  • falsely labeling products as AI-driven to capitalize on the hype
  • opaque data practices, especially involving biometric or personal data collected by AI systems
  • bias and discrimination in AI decision-making systems (e.g., in hiring, credit scoring or surveillance)
  • consumer manipulation, especially through hyper-personalized content or simulated interactions that appear human

Key Guidance and Public Statements

The FTC has issued formal business guidance and blog posts warning companies about deceptive AI practices. Notable takeaways include:

  • Unsubstantiated claims that a product uses AI – or uses it in a particular way – could be considered deceptive.
  • Companies should not overpromise what AI can do; claims must be truthful, substantiated and not misleading.
  • The FTC will increase its scrutiny of AI systems that collect or use biometric data, especially where deception or lack of consent is involved.

FTC to Bring Enforcement Actions Under Existing Laws

FTC Chair Andrew Ferguson called for the agency to regulate AI claims through its existing consumer protection authorities: "Imposing comprehensive regulations at the incipiency of a potential technological revolution would be foolish. For now, we should limit ourselves to enforcing existing laws against illegal conduct when it involves AI no differently than when it does not."

Two recently announced enforcement actions involving AI underscore the new FTC leadership's commitment to evaluating AI claims under traditional deception frameworks.

In April 2025, Workado agreed to resolve allegations that it made false or misleading performance claims related to its "AI Content Detector" in violation of Section 5 of the FTC Act. Workado markets its AI Content Detector to consumers seeking to determine whether online content was created using generative AI technology such as ChatGPT or written by a human being. The company claimed that the AI Content Detector was developed using a wide range of material, including blog posts and Wikipedia entries, to make it more accurate for the average user. The FTC alleged, however, that the AI model powering the AI Content Detector was trained or fine-tuned only to effectively classify academic content.

In March 2025, Cleo AI agreed to pay $17 million to resolve allegations that it made misleading promises about consumers' access to quick cash advances. Cleo AI provides subscribers with cash advances in amounts determined using an AI risk classifier. The FTC alleged that Cleo AI violated Section 5 of the FTC Act by making misleading claims about the timing and amount of cash advances. The FTC also alleged that Cleo AI violated the Restore Online Shoppers' Confidence Act (ROSCA) because it failed to disclose material information about the timing and amount of the cash advances when consumers subscribed to the service, and it also prevented subscribers with outstanding cash advances from canceling.

Key Takeaway

The FTC is clearly laying the groundwork for aggressive enforcement against deceptive AI practices. Companies leveraging AI in their products, services or marketing must:

  • ensure claims about AI capabilities are truthful and substantiated
  • avoid manipulative design, bias or misuse of personal data
  • stay current with FTC guidance, which reflects a desire to shape norms and deter misconduct early in this technological evolution

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© Holland & Knight LLP
