Millions Are Using This Mental Health Chatbot Like a Therapist. But It Wasn't Built to Handle Crisis
States across the U.S. have begun enacting laws to regulate AI-powered therapy apps, responding to a growing number of users turning to artificial intelligence for mental health support. These early laws reflect the complexity of governing fast-evolving technology and reveal gaps in protection that concern developers, policymakers, and mental health advocates alike.
States regulate AI therapy apps amid rising use, but inconsistent laws leave gaps as federal oversight and safety concerns grow across mental health tech. (Getty Images)
Millions of people turn to AI chatbots for emotional support, often because mental health care is unavailable or unaffordable. With no uniform federal standards in place, state-level interventions vary significantly in both scope and enforcement.
Varying State Approaches to AI Mental Health Apps
This year, states including Illinois and Nevada passed laws banning the use of AI to provide mental health treatment. Illinois imposes fines of up to $10,000, while Nevada's penalties reach $15,000. Utah has taken a different route, limiting rather than banning AI therapy apps: it requires clear disclosure that the chatbot is not human and mandates data privacy protections for users.
Other states like Pennsylvania, New Jersey, and California are actively exploring their own approaches to regulating AI-based mental health tools.
These measures have had mixed effects. Some developers have restricted access in states with bans, while others are continuing operations amid calls for clearer legal guidance.
Regulatory Gaps and Legal Uncertainty
The laws often fail to address general-purpose AI chatbots, such as ChatGPT or Character.AI, which are not explicitly marketed as mental health tools but are still used by individuals in crisis. Some of these unregulated interactions have reportedly resulted in tragic outcomes, prompting lawsuits and public concern.
Health care experts, including representatives from the American Psychological Association, point to the lack of oversight as a major issue. They argue that AI tools may help bridge the gap in care caused by provider shortages and high treatment costs, but only if they are built with clinical input, scientifically grounded methods, and human oversight.
Federal Agencies Begin to Investigate
Federal agencies are starting to respond. The Federal Trade Commission has launched inquiries into several major AI chatbot companies to assess their impact on children and teens. Meanwhile, the Food and Drug Administration has scheduled a November 6 advisory committee meeting to review generative AI tools used for mental health.
Potential federal regulations could include requirements for user disclosures, restrictions on marketing practices, mandatory monitoring for signs of suicidal ideation, and legal protections for whistleblowers.
Developers Navigate a Shifting Landscape
AI mental health app developers are adjusting their strategies as the legal landscape shifts. Earkick, a mental health chatbot founded by Karin Andrea Stephan, initially avoided the term “therapist” but later adopted it to match what users were searching for. More recently, the app has rebranded as a “chatbot for self care” and removed clinical references.
Earkick allows users to set up emergency contacts and encourages them to seek therapy if symptoms worsen but does not provide crisis intervention or notify authorities in cases of self-harm.
Stephan expressed concern that legislation will struggle to keep pace with innovation, noting how quickly the AI field is moving and how often regulatory guidance lags behind.
Other developers have taken more drastic steps. The AI therapy app Ash has blocked users in Illinois entirely, citing what it calls “misguided legislation” and urging users to contact lawmakers.
Efforts to Replicate Therapy Continue in Research
Some projects are taking a more clinical approach. A Dartmouth College-led team developed Therabot, a generative AI trained on evidence-based therapeutic responses. In a randomized clinical trial, participants using Therabot showed reduced symptoms of anxiety, depression, and eating disorders over an eight-week period, with each interaction monitored by a human expert.
Though the trial showed early promise, the researchers emphasize the need for further study before wider adoption. Lead researcher Nicholas Jacobson warned that sweeping bans might hinder responsible innovation and limit access to potentially effective tools.
Advocates Call for Balance Between Innovation and Protection
The line between companionship apps and clinical therapy remains difficult to define, complicating the task of regulation. Mental health professionals stress that empathy, clinical judgment, and ethical accountability are essential elements of therapy — capabilities current AI cannot fully replicate.
Some lawmakers and advocates remain open to refining legislation. However, they caution against positioning AI as a substitute for professional care, especially for individuals with serious mental health needs.
As AI therapy apps continue to expand their reach, the balance between accessibility, innovation, and safety remains unresolved. Calls for comprehensive federal oversight grow louder, even as state-level efforts attempt to address the complexities of mental health in the digital age.