A new nationally representative survey published in 2025 finds that about 13.1% of U.S. adolescents and young adults — roughly 5.4 million individuals aged 12–21 — have used generative-AI chatbots for mental-health advice when feeling sad, angry, or nervous. Among those aged 18–21, the rate is even higher, at around 22.2%.
Of those who used AI for emotional support, a majority engaged regularly: about two-thirds (65–66%) used chatbots at least monthly, and more than 90% said they found the advice “somewhat or very helpful.” According to the study’s authors, the high usage likely reflects the appeal of AI advice: it is low-cost, instantly accessible, and offers perceived privacy, features that especially attract youth who may not otherwise access traditional counseling.
But experts warn that this trend comes with serious risks. Chatbots are not trained or regulated as mental-health professionals, and a growing body of research suggests they may not reliably detect or respond to serious issues such as self-harm, suicidal thoughts, psychosis, or eating disorders. In many cases, chatbots “miss symptoms of serious mental health conditions.” In multi-turn conversations, they may “get distracted, minimize risk, and sometimes reinforce harmful beliefs.”
In short, while AI chatbots are emerging as a convenient, widely used mental-health resource for many youth, especially those lacking access to traditional support, they are far from a safe or reliable substitute for trained professionals. Their limitations in recognizing serious distress, combined with concerns about bias and lack of oversight, warrant caution. These tools may serve as a first step or a temporary outlet, but they should not replace professional mental-health care when serious issues arise.