Posted January 30, 2026 | Reviewed by Lybi Ma
Recent discussions of conversational artificial intelligence have raised serious questions about mental health. For some people who use these systems regularly, the interaction can feel meaningful and sustained, while still clearly different from relating to a real person. Users know they are not interacting with a human, and yet the exchange can feel responsive, attentive, and engaging in unfamiliar ways. What remains unclear—and what has become the source of growing concern—is how this kind of engagement affects mental health, for better or worse, and for whom.
I come to these questions not as a psychologist, but as an ethicist whose work has often focused on moments when new practices outpace the frameworks we rely on to evaluate them. In ethics, this can arise when behavior is quickly labeled as irresponsible or harmful because it does not fit comfortably into existing categories. Sometimes those judgments are right. Sometimes they turn out to be premature or outdated. That experience led me to ask whether something similar might be happening as psychologists and clinicians encounter patients whose lives now include sustained conversational engagement with AI systems.
There is no doubt that sustained engagement with conversational systems can pose risks to mental health, particularly for certain individuals and vulnerable groups. A widely reported case involving an adolescent who developed an emotionally exclusive relationship with an AI chatbot, withdrew from family and peers, and later took his own life has understandably shaped public concern. Cases like this are heartbreaking. They illustrate how conversational AI can become psychologically dangerous, especially in the absence of boundaries, safeguards, or guidance.
While such cases show how things can go wrong, they do not determine how all ongoing engagement with conversational AI should be understood. The challenge for mental-health professionals is that these interactions may be present in patients’ lives without being obvious or easily described. Some patients may not mention them at all. Others may mention them without knowing how to characterize their significance. If sustained conversational engagement is becoming a background feature of everyday life for some people, then understanding its mental-health implications becomes a practical concern for those who treat and counsel them.
That subtlety is easy to miss. A recent national television advertisement for ChatGPT, aired during a major football broadcast, depicts a woman training to run despite self-doubt and physical difficulty. As she runs, encouraging messages from a prior ChatGPT conversation appear on screen—brief prompts offering reassurance, structure, and motivation. Nothing in the scene suggests therapy or emotional dependence. Yet it illustrates how conversational AI can quietly take on motivational or emotionally supportive roles that may never surface in a clinical conversation, even though they shape how people reflect, persist, and regulate themselves.
Long before AI, researchers studied parasocial relationships—one-sided emotional bonds with media figures or fictional characters. Classic work by Horton and Wohl (1956), refined in later research such as Tukachinsky's (2011), has repeatedly found correlations between such bonds and experiences of loneliness, social isolation, or vulnerability. This literature does not establish simple causal pathways, but it does show that emotionally meaningful one-sided interactions tend to appear alongside these conditions rather than in isolation.
This line of work remains relevant as psychologists try to understand conversational AI. Systems that respond directly, remember past exchanges, and sustain attention can take on emotional significance for some users. The importance of this research is not that emotional meaningfulness automatically signals dysfunction, but that it highlights a dimension of experience that needs to be interpreted carefully rather than diagnosed reflexively.
More recent studies have examined how people actually use social chatbots over time. In qualitative and mixed-methods research, investigators analyze conversational logs, conduct interviews, and ask participants to describe their experiences across weeks or months. These studies do not point to a single pattern. Some participants describe the interaction as intellectually engaging or useful for thinking through ideas. Others describe it as emotionally meaningful in ways they did not anticipate, sometimes blurring boundaries they later find uncomfortable (Ta et al., 2023; Skjuve et al., 2024). Emotional significance alone, however, should not be treated as a proxy for mental unhealthiness. Engagement can be experienced as welcome, neutral, or troubling depending on the person, the context, and the consequences over time.
Other research focuses more explicitly on risks to mental health. Survey-based studies examining chatbot use alongside measures of loneliness, depression, and emotional reliance tend to find that individuals who are already struggling are more likely to turn to conversational systems for emotional support and to rely on them in ways that resemble existing dependency patterns (Ouyang et al., 2024). These findings reinforce the reality of risk. They do not, however, imply that sustained conversational engagement itself is pathological. Vulnerability and context matter.
A different line of research approaches conversational AI primarily in terms of cognition rather than emotional attachment. Studies of cognitive offloading examine how people rely on external systems—notes, calendars, and search tools—to support memory and reasoning, often with adaptive effects (Risko and Gilbert, 2016; Storm and Stone, 2023). More recent experimental and classroom-based studies ask participants to use generative AI to clarify arguments, explore alternatives, or reflect on their reasoning, with researchers measuring changes in comprehension or metacognitive awareness (Chiang and Lee, 2024; Mozafari et al., 2024). These studies are cautious in their claims, but they suggest that extended conversational interaction can support intellectual activity in ways that do not map neatly onto familiar ideas of either tool use or interpersonal relationships.
Any discussion of conversational AI should also acknowledge concerns about privacy and data use. Reasonable questions have been raised in policy and research communities about what it means to share personal reflections with AI systems and how such information is stored or protected. These issues remain unsettled and deserve attention, but they do not by themselves determine the psychological meaning of use. For clinicians and users alike, privacy considerations form part of the broader context in which decisions about engagement, boundaries, and disclosure are made.
Research involving older adults offers some of the clearest insight into how conversational AI might provide meaningful benefits. In studies conducted in homes and community settings, researchers observe how participants integrate conversational technologies into daily life using interviews, diaries, and longitudinal observation. The findings consistently suggest that the practical, tool-like aspects of conversational AI—help with reminders, information retrieval, and cognitive support—can be genuinely beneficial for older adults (Vandemeulebroucke et al., 2023; Pradhan et al., 2024).
The same research suggests that some older adults experience emotionally meaningful interaction with conversational systems without clear indications of increased loneliness or social withdrawal. Whether such experiences are psychologically beneficial, neutral, or potentially harmful remains an open question. What seems clear is that older adults occupy a complicated position: they may be both vulnerable in some respects and especially well positioned to benefit in others. That combination deserves careful attention rather than blanket assumptions.
In addition to the need for careful clinical observation, several questions may be especially important for future research. Under what conditions does sustained conversational engagement coexist with, rather than displace, relationships with real people? When does emotionally meaningful interaction remain psychologically neutral, and when does it begin to track increased distress or withdrawal? How do age, prior loneliness, mental-health history, and social context shape these outcomes over time? And how do users themselves understand, regulate, and set boundaries around these interactions as they become more familiar?
Research capable of addressing questions like these will take time, and norms will develop gradually. In the meantime, clinicians will continue to rely on judgment and experience.
Talking to a bot is not automatically a mental-health problem. It is an emerging form of engagement whose significance depends on who is using it, how it is used, and with what effects over time. For some individuals—particularly children and adolescents—the risks may well outweigh the benefits. It may be appropriate to limit or restrict the availability of conversational AI to younger users, much as other media technologies are regulated.
For adults, including many older adults, the picture appears more mixed. Some forms of engagement may be healthy, others neutral, and still others harmful. Psychologists are well-positioned to observe these variations in clinical practice, to reflect on them collectively, and to develop research that clarifies when sustained conversational interaction supports mental well-being and when it undermines it.
The task now is not to decide in advance what talking to a bot must mean, but to understand when it is healthy, when it is not, and why. As conversational AI becomes increasingly ordinary, the psychological challenge is not to panic or to normalize indiscriminately, but to understand how meaning, risk, and well-being intersect in everyday use.
References
Chiang, T.-H., & Lee, H. (2024). Generative AI and metacognitive support in learning contexts. Computers & Education, 199, 105001. https://doi.org/10.1016/j.compedu.2024.105001
Horton, D., & Wohl, R. R. (1956). Mass communication and para-social interaction: Observations on intimacy at a distance. Psychiatry, 19(3), 215–229. https://psycnet.apa.org/record/1957-03058-001
Mozafari, M., et al. (2024). Large language models and human cognition. Nature Human Behaviour, 8, 456–468. https://www.nature.com/articles/s41562-024-01834-7
Ouyang, Y., et al. (2024). Emotional reliance on AI chatbots and mental health outcomes. JMIR Mental Health, 11, e53162. https://mental.jmir.org/2024/1/e53162
Pradhan, A., Lazar, A., & Piper, A. M. (2024). Understanding conversational agents for older adults: Use, value, and design implications. ACM Transactions on Computer-Human Interaction, 31(1). https://doi.org/10.1145/3631425
Risko, E. F., & Gilbert, S. J. (2016). Cognitive offloading. Trends in Cognitive Sciences, 20(9), 676–688. https://doi.org/10.1016/j.tics.2016.07.002
Skjuve, M., Følstad, A., & Brandtzaeg, P. B. (2024). Chatbots and emotional attachment: User experiences of relational interaction with conversational agents. Human–Computer Interaction. https://doi.org/10.1080/07370024.2023.2272451
Storm, B. C., & Stone, S. M. (2023). Memory in the age of digital tools: Implications for cognition and learning. Psychological Review, 130(6), 1341–1360. https://doi.org/10.1037/rev0000391
Ta, V., Griffith, C., Boatfield, C., Wang, X., Civitello, M., & Bickmore, T. (2023). User experiences of social chatbots: A qualitative study. Proceedings of the CHI Conference on Human Factors in Computing Systems. https://dl.acm.org/doi/10.1145/3544548.3580906
Tukachinsky, R. (2011). Para-social relationships: Further considerations. Journal of Broadcasting & Electronic Media, 55(2), 267–284. https://doi.org/10.1080/08838151.2011.597404
Vandemeulebroucke, T., Dierckx de Casterlé, B., & Gastmans, C. (2023). Social technologies for older adults: Ethical and experiential perspectives. Aging & Mental Health, 27(5), 843–852. https://doi.org/10.1080/13607863.2023.2179011
Michael A. Santoro, J.D., Ph.D., is a professor of management and entrepreneurship at the Leavey School of Business, Santa Clara University.
Psychology Today © 2026 Sussex Publishers, LLC