New research explores the growing role of AI chatbots in mental health care—highlighting both their therapeutic potential and links to adverse outcomes, including suicide and psychosis risk.
This week, Mad in America explores three academic publications related to AI chatbots and mental health. The first, a commentary by DSM-IV task force chair Allen Frances, argues that AI chatbots will be the therapists of the future. The second investigates media reports of AI chatbot use linked to adverse mental health outcomes, including suicide and psychiatric hospitalization. The third links AI chatbot use with higher odds of being labeled “high risk” for psychosis. Taken together, this research and commentary point to growing reliance on AI chatbots for therapy and mental health advice even as that very technology is increasingly linked to adverse mental health outcomes.
A new commentary published in the British Journal of Psychiatry argues that AI chatbots will eventually dominate psychotherapy. The author warns that psychotherapists have been complicit in the rise of this new technology that threatens to replace them, especially in treating everyday psychological distress such as mild anxiety and depressive episodes. The commentary, written by Allen Frances, who chaired the DSM-IV task force and later criticized the DSM-5 for its role in medicalizing everyday problems and overprescribing psychotropics, argues that psychotherapists need to concentrate on the areas where AI chatbots struggle, such as treating more severe psychological distress.
The author argues that AI chatbots already show great promise for use in therapy because of their ability to mimic interpersonal skills, adapt responses to the language of the user, draw on a large knowledge base, remain accessible and affordable, and avoid allegiance to any single therapeutic modality. Frances also says that chatbots have superior memory compared to psychotherapists and that people are less likely to fear being shamed or criticized by them.
While the author argues that AI chatbots show promise for use in therapy, he also outlines some of the dangers associated with this technology. These bots are not well suited for people with more severe mental health issues and cannot react to chaotic or unpredictable situations that may arise with psychosis, suicidality, antisocial and violent impulses, and the like. These algorithms are prone to “hallucinations” and can harm users by uncritically reinforcing impulsive behaviors and unfounded beliefs. They are unregulated, lack safety and efficacy controls, and reflect the biases of the people who designed and trained them. These bots are designed to encourage engagement, which can lead to addiction and may reinforce over-identifying with diagnoses to the detriment of users’ mental health.
Frances believes that “artificial intelligence is an existential threat to our profession. Already a very tough competitor, it will become ever more imposing with increasing technical power, rapidly expanding clinical experience and widespread public familiarity. There is every reason for alarm, no room for complacency. We must immediately find ways to adapt to artificial intelligence or we will be replaced by it.”
He recommends that psychotherapists should focus on working with people who have more severe psychological distress, working with seniors and children, managing emergencies, working in special settings such as hospitals, the military, and prisons, and managing AI chatbot therapy.
While some might argue that Frances overestimates the usefulness of artificial intelligence in therapy, there is already evidence that people are increasingly looking to AI chatbots for mental health advice. There is also evidence, both in media reports and research, that this technology is linked to harm.
A new pre-print article currently under review for publication in the Journal of Medical Internet Research examines media reports of adverse mental health outcomes linked to AI chatbot use. This research, led by Van-Han-Alex Chung of McGill University in Canada, finds that publicly reported harm related to AI chatbot use most frequently involved death by suicide. The current work also found that fatal outcomes were disproportionately represented among minors.
The goal of this study was to investigate media reports of adverse mental health outcomes associated with AI chatbot use in terms of outcome severity, user vulnerability, causal attribution, and framing. The authors used Google News to locate media reports about specific instances of adverse mental health outcomes linked to AI chatbot use. Included reports came from recognized news outlets, were published in November 2022 or later, and were written in French or English. While the authors looked for articles published in November 2022 or later, included articles were all published between September 16, 2025 and January 19, 2026, during what the authors describe as “a concentrated period of intensified media attention on generative AI related psychiatric harms.” Opinion pieces, speculative commentary, and reports related to “non-psychiatric” harms were excluded. In total, the authors examined 71 media reports of 36 unique cases.
The majority of reports came from the US (48, 67.6%). Media reports from Canada, France, and the UK were also represented in the current work. Sixty-one reports were coded for severity of outcome, with suicide being the most common harm reported (35/61). Psychiatric hospitalization was the outcome in 13 of 61 reports, with one of those ultimately ending in suicide. There were three reports on non-suicide deaths, two reports on suicide attempts, one on self-harm, and seven on “other harm.”
Causality was also coded in 61 reports. Chat logs and screenshots were the most commonly reported evidence (39/61) linking AI chatbot use to adverse mental health outcomes. Family testimony (8), lawsuits (7), multiple sources (4), police reports (1), and other evidence (7) were also cited as linking AI chatbots to harm.
Legal action was discussed in 45 reports, with 42 mentioning a lawsuit. Regulation was discussed in 60 reports, with 51 calling for more regulation around AI chatbots. Company responses were also mentioned in 60 reports, including 27 announcements of safety updates, 17 reporting no comment, nine indicating a company policy change, four denying or disputing AI chatbot involvement, and three responses classified as “other.”
This study had several limitations. As a preprint, the research has not yet been peer reviewed. The analysis relied on media reports, which can be biased or sensationalized, and the authors were not able to independently verify details from these reports. The focus on media coverage means the current work likely captured only the most extreme or tragic cases, so this data cannot be used to estimate the prevalence of adverse mental health outcomes linked to AI chatbot use. The link between AI chatbots and the reported adverse outcomes is observational; other factors could, and likely did, play a role.
While the current work drew only on media reports, which may be biased, other research has also linked AI chatbots to adverse mental health outcomes.
A new article published in the Journal of Medical Internet Research finds that people labeled “at risk” of psychosis are more likely to report intensive use of AI chatbots compared to those not labeled “at risk.” The current work, led by Benjamin Buck from the University of North Carolina at Chapel Hill, also finds that people labeled “at risk” were more likely to use AI chatbots for social and emotional support, to ascribe human roles, such as therapist, to this technology, and to report “delusion-like experiences” associated with their use.
The goal of this study was to investigate links between psychosis risk and the frequency of and motivations for AI chatbot use. The authors recruited 952 young adults (18 to 25 years old) living in the US to take part in the current research. The participants completed anonymous self-report surveys assessing frequency of AI chatbot use, the purposes and motivations behind that use, and AI chatbot use that involved “delusion-like experiences.”
Psychosis risk was measured using the Prodromal Questionnaire-Brief (PQ-B). This is a self-report survey that asks participants about confused speech, paranoia, hallucinations, and other experiences associated with psychosis. A score of less than 20 on the PQ-B indicates low risk of psychosis, with 20 or higher equated with high risk. “Delusion-like experiences” were measured using the Generative AI Aberrant Thoughts and Experiences Scale (GAATES). This is a self-report survey that asks participants about paranoid, grandiose, and likely delusional experiences related to chatbot use.
Participants who reported using an AI chatbot “several times per day” were 2.56 times more likely to be labeled at high risk of psychosis by the PQ-B compared to those reporting less frequent use. Those who had used an AI chatbot on the same day they completed the PQ-B were 1.70 times more likely to be labeled high risk. Participants who used an AI chatbot for more than 30 minutes per day (1.32) and those who started six or more conversations on days they used AI chatbots (2.10) were also more likely to be labeled at high risk of psychosis. Those labeled at high risk of psychosis were also more likely to ascribe human roles to AI chatbots, including “companion” (1.76 times more likely), “therapist” (3.08), “friend” (2.52), and “romantic partner” (2.62).
Participants labeled as high risk were also more likely to report “aberrant thoughts and experiences” related to AI chatbot use, endorsing GAATES items describing paranoid, grandiose, and likely delusional experiences connected to their chatbot interactions.
This study had several limitations. Due to the design of the research, causation is uncertain. In other words, this data could indicate that people at high risk of psychosis are more likely to use AI chatbots, that AI chatbots are causing increased psychosis risk, or some combination of the two. The sample was not representative or random, limiting generalizability within the US. The sample was also composed entirely of people from the US, limiting generalizability to other populations. Some of the measures used in this work were developed for this project, meaning they have not been validated. The self-report nature of the surveys used in this research means the data is prone to reporting biases. The authors also note that high scores on the PQ-B alone do not indicate clinical high risk of psychosis.
While the current work cannot definitively say that AI chatbot use is causing psychosis risk, there is some evidence from other sources that this technology is linked to psychosis in otherwise healthy people.
****
Buck, B., & Maheux, A. J. (2026). Psychosis risk and generative artificial intelligence use frequency, motivations, and delusion-like experiences: Cross-sectional survey study. Journal of Medical Internet Research, 28. https://doi.org/10.2196/85038
Chung, V.-H.-A., Bernier, P., & Hudon, A. (2026). Publicly reported adverse outcomes following use of generative artificial intelligence: A rapid scoping review of mass media articles (Preprint). https://doi.org/10.2196/preprints.93040
Frances, A. (2025). Warning: AI chatbots will soon dominate psychotherapy. The British Journal of Psychiatry, 1–5. https://doi.org/10.1192/bjp.2025.10380
