Why We Should Let AI Compete in Mental Health Care

As states rush to regulate therapy chatbots, they risk freezing innovation in a system already failing millions.

America has a mental health care crisis—and poor state laws on technology risk making it harder to solve. More than 122 million Americans live in mental health deserts, where treatment is more than 30 miles away. One in four adults with a mental illness reports an unmet need for treatment. Millions are uninsured. For those with coverage, waitlists stretch for weeks or months. The system is overwhelmed, expensive, and rationed by geography and income. And yet, as the crisis in accessing mental health care deepens, it’s striking how quickly some states are moving to restrict artificial intelligence tools that could expand support, right as demand spills into the digital world anyway.

The justification for barring AI from taking on some of the mental health care burden is safety. That concern is real. Some chatbots have given bad advice. Some failures have been serious. But risk alone is not a policy framework. If it were, telehealth would probably never have existed. Neither would most modern medicine. The real question is this: when a system is already failing millions, does shutting down experimentation make it safer, or simply more stagnant and more expensive?

I’ve spent enough time in therapy to know it isn’t a cure-all. Traditional talk therapy did not work for me. It was expensive and time-consuming. It often felt like I was paying to circle the same conversations. So I moved on to EMDR (eye movement desensitization and reprocessing), a therapy focused on addressing specific incidents directly. Once the work was done, I did not need endless weekly sessions. It was a different tool, and it worked. That changed how I think about mental health care. No single modality works for everyone. Progress happens when new methods are allowed to compete and when patients can try tools that match the severity of what they’re dealing with.

AI may become another tool in that ecosystem—not as a replacement for clinicians, or a substitute for empathy, but as a supplement: a structured way to reflect, a low-cost support between sessions, or a first step for someone who would otherwise get nothing at all. That possibility is exactly what some states are now narrowing by treating almost any meaningful AI support as if it were a licensed clinical service.

In 2025 alone, state lawmakers introduced more than 1,000 measures related to artificial intelligence, with a growing share focused specifically on health care, children, and chatbot interactions. Illinois, for example, now allows AI in therapy only for administrative support while barring it from engaging in what the law defines as “therapeutic communication,” which includes offering emotional support, providing behavioral feedback, or reflecting on a user’s experiences.

In practice, that means an AI tool cannot even deliver a standardized, low-risk exercise, like guiding a structured cognitive behavioral therapy (CBT) worksheet or running a supervised journaling flow, without being folded into a licensed clinician’s formal oversight. That may preserve the existing licensing model, but it does little to address the shortage inside it.

California has taken a similarly aggressive posture, requiring suicide-prevention protocols, repeated disclosures, and strict prohibitions on any AI system implying professional status. Additional proposals would tie mental health chatbots directly to licensed responsibility and open operators to private lawsuits and per-violation penalties. Framed as consumer protection, these measures also raise compliance costs, increase legal exposure, and make small-scale pilot programs risky. When experimentation carries the threat of liability before evidence can even develop, many builders simply step back. In a market already defined by limited providers and high prices, fewer entrants do not mean greater safety.

Meanwhile, adults are not waiting for legislatures to decide whether AI belongs in mental health care. Surveys show roughly one-third of Americans say they would feel comfortable sharing concerns with an AI system instead of a human therapist. Among younger adults, that share rises above half. If states make supervised testing too restrictive, demand will not disappear. It will migrate toward general-purpose models, offshore platforms, or minimally regulated apps operating beyond state oversight. None of this suggests a free-for-all. Mental health care deserves guardrails like transparent disclosures, built-in crisis protocols, and firm data protections. But there is a difference between guardrails and barricades.

Utah’s regulatory sandbox reflects that distinction. Instead of preemptively banning entire categories of interaction, it allows companies to test tools under monitored conditions while regulators collect real-time data. The model assumes two things at once: that risk exists, and that evidence matters. It treats innovation as something to supervise and refine, not freeze.

We don’t get evidence by prohibiting pilots before they begin. Early clinical trials of AI chatbots have demonstrated improvement in depression and anxiety symptoms. Systematic reviews suggest these tools can support structured CBT exercises, screening, and early risk detection using language analysis. Even psychiatrists report that patients are increasingly consulting AI tools before seeking care. Meanwhile, federal workforce projections estimate a shortfall of nearly 88,000 mental health counselors by 2037. For many adults, the realistic alternative to an imperfect digital tool is not premium in-person therapy, but weeks of waiting, out-of-pocket costs, or simply going without support altogether.

The policy question, then, is not whether AI is flawless. It is whether we allow new tools to improve under supervision, or prohibit them until they meet a theoretical standard no emerging technology has ever satisfied. Caution is warranted. But when caution becomes paralysis, the cost is borne by people already priced out, wait-listed, or geographically isolated.

Mental health care should not be exempt from responsible innovation simply because it is sensitive. New tools should be developed transparently, audited rigorously, and tested in the open before assumptions about them are locked into statute.



Iulia Lupse holds a Bachelor of Science in diplomacy and international relations from Seton Hall University.
