Brain Control by Bart Fish & Power Tools of AI / Better Images of AI / CC BY 4.0
On September 16, the United States Senate Judiciary Subcommittee on Crime and Counterterrorism convened a hearing to examine the dangers of AI chatbots, especially with regard to child safety. AI chatbots and their associated harms have also been the subject of recent news stories, including a Reuters investigation into Meta’s internal moderation policies, which revealed the company sanctioned “sensual conversations” with children, and a lawsuit against OpenAI over the suicide of a teen who sought advice from ChatGPT on how to end his life.
In a previous piece for Tech Policy Press, I looked at new research on the dangers of AI companions as they relate to mental well-being and safety, especially for minors. While those concerns persist, it is also crucial to understand how and why these chatbots are so popular. Are these chatbots addictive by design? Three papers on this subject reveal fresh insights:
Title: The Dark Addiction Patterns of Current AI Chatbot Interfaces
Date: April 2025
Authors: M. Karen Shen and Dongwook Yun
Published in: CHI Conference on Human Factors in Computing Systems
This paper investigates the “addictive potential of AI chatbots” through a scoping review of prior literature on addiction mechanisms. The researchers also identify the specific pathways of addiction present in AI companions by examining the user interfaces of popular AI chatbots.
Based on a user interface (UI) evaluation of eight popular AI chatbots (Character.AI, ChatGPT, Claude, Gemini, Meta AI, Microsoft Copilot, Perplexity, and Replika), the researchers identified four key addiction pathways.
Based on these four addiction patterns, the authors provide a few concrete design recommendations:
By analyzing the UIs of eight popular AI chatbots, this research paper shows how specific design choices may shape a user’s neurological responses and thus increase their susceptibility to AI dependence, highlighting the need for “ethical design practices and effective interventions to support users in striking a healthier balance between the benefits and risks that come with AI chatbot interactions.”
Title: Investigating AI Chatbot Dependence: Associations with Internet and Smartphone Dependence, Mental Health Outcomes, and the Moderating Role of Usage Purposes
Date: August 2025
Authors: Xing Zhang, Hansen Li, Mingyue Yin, Mingyang Zhang, Zhaoqian Li & Zongwei Chen
Published in: International Journal of Human–Computer Interaction
This paper explores the association between “AI chatbot dependence, internet, and smartphone dependence, and mental health outcomes (depression, anxiety, and well-being)” in a survey sample of more than 1,000 adults.
This is the first study, to the authors' knowledge, to examine the relationship between AI chatbot dependence and smartphone dependence. While AI chatbot usage is a relatively recent phenomenon, smartphone use and, in some cases, smartphone addiction are well documented. Thus, this research paper aims to provide a more holistic view of AI chatbot dependence and how it is shaped by usage choices and susceptibility to other forms of digital dependence, i.e., smartphone use.
This paper demonstrates that participants considered to be dependent on AI chatbots reported higher levels of depression and anxiety, but that usage purpose shaped the extent to which this relationship held.
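To make the idea of a “moderating role” concrete, the sketch below shows how such an analysis is commonly run: a regression with an interaction term between a dependence score and a usage-purpose indicator tests whether the strength of the dependence–anxiety association differs by purpose. This is a minimal, hypothetical illustration with simulated data; the variable names and model are assumptions for exposition, not the study's actual measures or code.

```python
# Hypothetical illustration of a moderation analysis (not the study's code or data).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "dependence": rng.normal(size=n),          # simulated chatbot-dependence score
    "social_use": rng.integers(0, 2, size=n),  # 1 = mainly social/companionship use (assumed label)
})
# Simulated outcome: dependence predicts anxiety more strongly when usage is social
df["anxiety"] = (
    0.2 * df["dependence"]
    + 0.4 * df["dependence"] * df["social_use"]
    + rng.normal(scale=1.0, size=n)
)

# "dependence * social_use" expands to both main effects plus their interaction;
# a significant interaction coefficient is what "usage purpose moderates the
# dependence-anxiety link" means in statistical terms.
model = smf.ols("anxiety ~ dependence * social_use", data=df).fit()
print(model.summary())
```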
Title: How AI and Human Behaviors Shape Psychosocial Effects of Chatbot Use: A Longitudinal Randomized Controlled Study
Date: March 2025
Authors: Cathy Mengying Fang, Auren R. Liu, Valdemar Danry, Eunhae Lee, Samantha W.T. Chan, Pat Pataranutaporn, Pattie Maes, Jason Phang, Michael Lampe, Lama Ahmad and Sandhini Agarwal
Published in: arXiv preprint
This preprint study examines the impact of AI chatbots' interaction modes (voice/text) and conversation types (open-ended, non-personal, and personal) on psychological outcomes, including loneliness, AI dependence, and problematic AI usage. The dataset included 981 participants and over 300,000 conversations with OpenAI’s GPT-4.
Previous research into AI chatbots has documented their negative psychological impacts, the authors say, but this paper builds on prior work in a few crucial ways:
Through a four-week randomized controlled trial that looked at AI chatbots and their psychological impacts while controlling for conversation types and chat modalities, the researchers provide empirical evidence suggesting that “while longer daily chatbot usage is associated with heightened loneliness and reduced socialization, the modality and conversational content significantly modulate these effects.”