Illinois becomes the first state to ban artificial intelligence from acting as a therapist.
As of mid-2025, Illinois has become the first state to enact a law explicitly banning AI from acting as a standalone therapist. This groundbreaking move is part of the Wellness and Oversight for Psychological Resources (WOPR) Act, which prohibits AI systems from independently interacting with clients, making therapeutic decisions, or generating treatment plans without licensed professional oversight.
Under the WOPR Act, AI systems cannot engage in therapeutic communication or decision-making. Violations may incur fines up to $10,000 per offense, enforced by the Illinois Department of Financial and Professional Regulation.
The Illinois law marks a turning point and may become a model for other states in handling AI in sensitive, high-stakes domains like health care, education, and law. Other states have also introduced or passed laws with similar restrictions on AI in mental health.
For instance, Nevada’s law (effective July 1, 2025) bans AI systems from providing direct mental or behavioral health care services. Providers may not represent AI as capable of delivering professional therapy, and licensed professionals may use AI only for administrative support, not for direct patient care. Unlike some other states, Nevada does not impose disclosure requirements.
Texas (effective September 1, 2026) has a broader prohibition against AI systems that incite self-harm, harm to others, or criminal activities. This restriction could impact AI chatbots used in mental health contexts.
Other states like Utah and New York have introduced disclosure requirements and limits around AI use in healthcare. However, Nevada and Illinois represent the strictest bans on AI providing direct mental health therapy.
In summary:
| State    | AI Therapy Regulation                            | Key Details                                                       |
|----------|--------------------------------------------------|-------------------------------------------------------------------|
| Illinois | Ban on standalone AI therapy                     | No independent AI therapy interactions or decisions; fines apply  |
| Nevada   | Ban on AI direct mental health services          | AI not allowed for direct patient therapy; allowed for admin only |
| Texas    | Broad AI restrictions related to harmful content | Prohibits AI that incites self-harm/harm/crime; impacts AI bots   |
These laws reflect growing concern about AI tools providing unregulated and potentially harmful mental health advice without human professional oversight. However, AI can still be utilized for administrative or supplemental support under licensed providers' supervision in these jurisdictions.
No comprehensive nationwide regulation exists yet; states vary widely, with Illinois setting a strong precedent by banning autonomous AI therapy outright. At the federal level, Congress has considered a 10-year moratorium on new state-level AI regulation.
The bill was introduced by Representative Bob Morgan and passed unanimously in both the state House and Senate. Nevada passed a law in June banning AI from providing therapy or behavioral health services in public schools. The Illinois law signals a cultural shift, drawing firmer lines about what machines should and shouldn't do in mental health care.
The law makes it illegal for AI to create or use treatment plans without a licensed human professional's review and approval. Utah implemented transparency requirements for mental health chatbots, forcing them to clearly disclose they are not human and to avoid exploiting user data for targeted advertising. New York's new law requires AI companions to redirect users expressing suicidal thoughts to trained crisis counselors.
Any person or business offering mental health services in Illinois must be licensed. The American Psychological Association has cited two lawsuits involving minors who turned to AI chatbots for therapy; one ended in a suicide, the other in a child attacking his parents. AI companies, meanwhile, continue to adapt, with OpenAI, for example, introducing new tools to detect mental distress in users. Advertising AI-driven therapy without a license in Illinois could result in a fine of up to $10,000 per violation.
- The WOPR Act in Illinois prohibits AI systems from engaging in therapeutic communication or decision-making, enforcing fines up to $10,000 per offense.
- Nevada's law, effective July 1, 2025, prevents AI systems from providing direct mental or behavioral health care services, restricting their use to administrative support by licensed professionals.
- Texas has prohibited AI systems that incite self-harm, harm to others, or criminal activities, which may impact AI chatbots used in mental health contexts.
- Utah has implemented transparency requirements for mental health chatbots, requiring them to disclose they are not human and avoid using user data for targeted advertising.
- New York's law requires AI companions to redirect users expressing suicidal thoughts to trained crisis counselors.
- Representative Bob Morgan introduced the Illinois bill banning standalone AI therapy; it passed both chambers unanimously and sets a precedent for other states regulating AI in sensitive, high-stakes domains such as health care and mental health.