A Legislative and Enforcement Outlook for Mental Health Chatbots

DLA Piper

Once confined to speculative and science fiction, artificial intelligence (AI) therapists – in the form of online chatbots – now exist and are in use today. Some of them have been developed in clinical or academic settings, while others have been released widely to the public. As these chatbots grow in popularity, regulators are seeking to minimize the risks they pose.

In this alert, we discuss existing state and federal legislation that targets mental health chatbots and possibilities for future legislation and enforcement.

State laws addressing mental health chatbots

Concerns about mental health chatbots have certainly reached the ears of state legislators. Illinois became the most recent state to act on this front when Governor JB Pritzker signed the Wellness and Oversight for Psychological Resources Act on August 4, 2025. The law’s scope is broader than AI and states that an “individual, corporation, or entity may not provide, advertise, or otherwise offer therapy or psychotherapy services, including through the use of Internet-based artificial intelligence, to the public in this State unless the therapy or psychotherapy services are conducted by an individual who is a licensed professional.”

The Illinois law also regulates how such licensed professionals may and may not use AI in providing services to patients. For example, they may not allow AI to “make independent therapeutic decisions,” “directly interact with clients in any form of therapeutic communication,” “generate therapeutic recommendations or treatment plans without review and approval by the licensed professional,” or “detect emotions or mental states.” With some limitations, these licensed professionals can use AI for “administrative” or “supplemental” support, as defined in the law.

The new law, effective immediately, houses enforcement in the Illinois Department of Financial and Professional Regulation, which can impose civil penalties for violations after an administrative hearing.

Passed earlier this year, state laws in Nevada and Utah also address this topic. The Nevada law forbids AI providers from offering AI systems programmed to provide services “that would constitute the practice of professional mental or behavioral health care if provided by a natural person.” Such providers are also barred from representing that their systems are capable of providing such care – or that any feature of those systems is in fact a provider of such care. The Utah law is limited to mental health chatbots and requires suppliers to disclose that their chatbot “is an artificial intelligence technology and not a human,” and it imposes restrictions on sharing users’ information and on using the chatbot to advertise products or services.

Another state law, in New York, focuses solely on the adjacent issue of so-called companion bots. In its large budget bill, passed in May 2025, the state required providers and operators of companion bots to “provide a clear and conspicuous notification to a user at the beginning of any AI companion interaction” that “the user is not communicating with a human.” The state also prohibited these bots altogether unless the AI companions contain reasonable protocols for “detecting and addressing suicidal ideation or expressions of self-harm expressed by a user.”

More state laws of this type may be on the way. For example, in May 2025, New Jersey legislators introduced a bill with language similar to parts of the Nevada law, prohibiting developers and deployers of AI systems “from advertising or representing to the public that the system is or is able to act as a licensed mental health professional.” While states continue to innovate in this area, free for now from the fear that state AI laws will be federally frozen, it is worth noting that state attorneys general can always use their broad consumer protection authority against companies offering these bots and other AI services.

Legislation at the federal level

At the federal level, the prospect for new laws covering mental health or companion bots is doubtful, despite calls for congressional action. That doesn’t mean federal agencies will be hands-off. At a recent conference, FTC Commissioner Melissa Holyoak called for the agency to do a market study, under Section 6(b) of the FTC Act, on “generative artificial intelligence chatbots that simulate human communication and effectively function as companions.” She noted that some of these bots have “engaged in alarming interactions with young users” and that “it's critical that we study this issue to understand how use of online technologies and in particular chatbots that are potentially replacing healthy social relationships, how those impact our children's mental health.” [1] These 6(b) studies involve the agency sending orders to a set of companies with detailed demands for information and documents. While this process is not for enforcement purposes, the study could well underpin the direction of later actions for violations of the FTC Act’s prohibition on deceptive or unfair conduct. [2]

As for the FDA, it said in 2022 that it would “exercise enforcement discretion” over software functions intended to diagnose or treat psychiatric conditions and diseases, because, even though they might be medical devices, they “pose lower risk to the public.” The agency has not yet taken action against any mental health chatbots, though it has given at least two of them – Woebot and Wysa – its “breakthrough device designation,” which speeds regulatory review. However, substantial reductions in FDA staff and the agency’s AI-forward plan to introduce chatbots and other automated tools to speed food and drug reviews suggest that such FDA enforcement may not arrive any time soon.

Conclusion

Given this budding state and federal action, legislative, judicial, and societal attention is likely to continue to focus on the use of mental health and companion chatbots, with larger questions to be explored about the developmental and health effects of social media, platform design, and other human-computer interactions, especially as they relate to minors.

The new laws, with rumblings of more to come, are relevant if you make, promote, or provide bots or other AI products for mental health or companionship. Companies are encouraged to follow these developments and adapt their AI governance programs accordingly.

DLA Piper’s AI and Data Analytics practice group is actively monitoring federal and state movement in this area.

[1] She also referred to some child advocates concluding “that minors should not use AI companion chatbots at all, and that Congress should legislate on the issue.”

[2] The American Psychological Association and consumer advocacy groups have also asked the Federal Trade Commission to act in this area.

DISCLAIMER: Because of the generality of this update, the information provided herein may not be applicable in all situations and should not be acted upon without specific legal advice based on particular situations. Attorney Advertising.

© DLA Piper
