About one in eight adolescents and young adults in the U.S. are turning to AI chatbots for mental health advice, with use most common among those ages 18 to 21, according to a new study in JAMA Network Open.
The study — co-authored by researchers from Brown University School of Public Health, Harvard Medical School and RAND, a nonprofit research organization — provides the first nationally representative estimates of how often adolescents and young adults rely on generative AI, such as ChatGPT, for help when feeling sad, angry or nervous. It was led by Jonathan Cantor, a senior policy researcher at RAND.
“There has been a lot of discussion that adolescents were using ChatGPT for mental health advice, but to our knowledge, no one had ever quantified how common this was,” said Ateev Mehrotra, professor at the Brown University School of Public Health and a co-author of the study.
The researchers surveyed 1,058 adolescents and young adults ages 12 to 21 between February and March 2025. Among those who used chatbots for mental health advice, two-thirds engaged at least monthly, and more than 93% said the advice was helpful. Usage was even higher among young adults: roughly one in five respondents ages 18 to 21 reported using large language models for mental health support.
“I think the most striking finding was that already, in late 2025, more than 1 in 10 adolescents and young adults were using LLMs for mental health advice and that it was higher among young adults,” Mehrotra said. “I find those rates remarkably high.”
Researchers note that the high utilization likely reflects the low cost, immediacy and perceived privacy of AI-based advice, qualities that may be particularly appealing to young people who are not receiving traditional counseling. The findings come at a time when the United States continues to face a youth mental health crisis, with nearly one in five adolescents experiencing a major depressive episode in the past year and 40% of them receiving no mental health care.
The findings also come amid reports that OpenAI is facing seven lawsuits alleging that ChatGPT drove users to delusions and suicide.
“There are few standardized benchmarks for evaluating mental health advice offered by AI chatbots, and there is limited transparency about the datasets that are used to train these large language models,” Cantor said.
The study, which was supported by the National Institute of Mental Health, also identified racial disparities. For example, Black respondents were less likely to report that chatbot advice was helpful, suggesting possible gaps in cultural competency.
The survey did not capture whether the advice was sought for a diagnosed mental illness, and the researchers say further work is needed to understand how generative AI affects young people with existing mental health conditions.
“Obviously the key question is how can LLMs be most helpful but at the same time limit their harm,” Mehrotra said. “But it changes my thinking from adolescents might use AI in the future and emphasizes this is already extremely common.”