The AI Companions Your Kids Talk To: First Real Data Shows Half of Teens Have Used Chatbots

By StudyFinds Analysis

Reviewed by John Anderer
Research led by Anne J. Maheux, PhD (University of North Carolina at Chapel Hill)
Feb 04, 2026

American teenagers are using artificial intelligence for more than homework help. A small subset is befriending it, confiding in it, and spending hours talking to it when they should be asleep. The first study to track actual smartphone behavior reveals nearly one in three young people have used AI chatbot apps, with adoption reaching 50% among older teens. Many of the most popular apps aren't marketed as academic tools. They're sold as companions, promising personalized friendships that never judge and always respond.
Researchers at the University of North Carolina and parental monitoring company Aura tracked real-time data from 6,488 youth ages 4 to 17 between September 2024 and April 2025. Instead of asking kids what they remember doing, the team captured when kids were actively typing into AI apps, though not what they typed or what responses they received. This approach reveals actual behavior, unlike surveys, where kids might underreport use or simply forget. ChatGPT dominated with nearly 79% of users, but seven of the 17 most popular apps were explicitly designed for emotional connection rather than schoolwork.
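To make that method concrete, here is a minimal sketch of how passive usage logs of this kind can be rolled up into per-user daily totals. The event records, field names, and numbers below are hypothetical illustrations, not the study's actual data or pipeline.

```python
from collections import defaultdict
from datetime import datetime

# Hypothetical event records: (user_id, app_name, timestamp, seconds_typing).
# The study's real pipeline isn't public; this only illustrates rolling
# passive typing-session logs up into per-user daily totals.
events = [
    ("u1", "ChatGPT", "2024-09-14T16:02:00", 45),
    ("u1", "ChatGPT", "2024-09-14T22:40:00", 120),
    ("u2", "CompanionApp", "2024-09-14T23:15:00", 600),
]

daily_seconds = defaultdict(int)
for user, app, ts, secs in events:
    day = datetime.fromisoformat(ts).date()
    daily_seconds[(user, day)] += secs  # total typing time per user per day

for (user, day), secs in sorted(daily_seconds.items()):
    print(user, day, f"{secs / 60:.1f} min")
```

Because the logs record when typing happens rather than what is typed, this kind of rollup can surface usage patterns (daily totals, time-of-day spikes) without exposing conversation content.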
Just two years after ChatGPT launched in November 2022, this technology has woven itself into the routines of American youth with stunning speed. About 42% of 13-to-14-year-olds had accessed these tools during the study period, jumping to just over half for 15-to-17-year-olds. But here's what will likely worry parents most: nearly one in five preteens ages 10 to 12 used chatbot apps, as did 9% of kids ages 8 to 9. The study even documented 12 children ages 4 to 7 talking to AI. Kids that young can barely read, let alone evaluate whether a chatbot's advice is trustworthy.
Most kids spent very little time using chatbots: half used them for less than 11 seconds per day. But a small group pushed the numbers way up. When you average everyone together, users spent about 2.37 minutes per day. That average hides the real story: the heaviest user logged nearly three hours in a single day. Several others routinely exceeded 40 minutes daily. Weekend use showed the biggest spikes, with some preteens and young teens logging sessions stretching past four hours on weekend days.
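A quick toy calculation shows how a handful of heavy users can pull the average far above the median. The numbers below are invented for illustration and do not come from the study.

```python
import statistics

# Invented daily-usage sample (seconds per user per day), not study data:
# most of 100 users barely touch the apps, a few log very long sessions.
usage = [10] * 55 + [30] * 30 + [120] * 10 + [2400] * 4 + [10000]

print("median:", statistics.median(usage), "seconds")             # 10 s
print("mean:", round(statistics.mean(usage) / 60, 2), "minutes")  # tail pulls it up
```

Half this toy sample sits at 10 seconds, yet five outliers drag the mean to nearly four minutes: the same skewed pattern the study reports.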
Late-night use stood out for different reasons. Although nighttime hours between 10 p.m. and 4 a.m. saw lower overall usage, certain youth spent substantial time chatting with bots when they should be sleeping. Sleep researchers have long raised concerns about screens before bed. This study adds a new wrinkle: AI companions that respond instantly, even at 2 o'clock in the morning. Imagine your anxious 13-year-old talking to a virtual companion for hours past midnight because it "gets her" in ways her actual friends don't.
The study, published in JAMA Network Open, couldn’t see what these heavy users were actually typing or what responses they received. Were they role-playing fantasy scenarios? Venting frustrations? Seeking advice on problems they didn’t know how to solve? Many apps like these are built to keep users coming back. They often learn user preferences over time and reward frequent use with new features. Those tactics are working on at least a segment of young users.
Usage jumped during after-school hours from 3 p.m. to 10 p.m., when about one in four youth opened chatbot apps. That timing raises an obvious question: are students using AI to actually learn, or just to get answers fast?
ChatGPT can generate essay outlines, solve math problems, and answer science questions almost instantly. For a student genuinely trying to understand a concept, that can help. But the same capability makes it easy to skip thinking independently. Your son asks ChatGPT to write three paragraphs about the Civil War. The bot spits out polished prose in seconds. He copies it, changes a few words, and moves on. Assignment done, learning skipped.
Most schools are scrambling to catch up, alternating between outright bans that kids circumvent at home and resigned acceptance of a technology they can't control.
Here’s what makes AI companions so appealing and so concerning. They’re endlessly patient, never busy, and programmed to be empathetic and affirming. For a lonely teenager dealing with social anxiety or bullying, a chatbot that responds instantly and never judges might seem like the perfect confidant.
The American Psychological Association has warned that AI chatbots pose risks to adolescent well-being when they create an illusion of connection without teaching the skills real relationships require. Navigating conflict. Reading social cues. Accepting that other people have needs and boundaries. An AI that always agrees and never pushes back doesn’t prepare kids for actual human interaction. Adolescence is when young people develop social competence and form their identities. What happens when a 13-year-old treats a chatbot as their primary emotional support system?
Kids who spent more time on AI chatbots also tended to spend more time on traditional social media. Both raise concerns about screen time replacing face-to-face interaction and offline activities.
US privacy regulations are intended to restrict many apps to users 13 or older, typically requiring parental consent for younger children. Those restrictions aren’t working. Nearly 20% of preteens in the study had accessed chatbot apps. Getting around age verification requires nothing more than lying about a birthdate, a hurdle most kids clear in seconds.
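For illustration, here is a minimal sketch of the kind of self-reported age gate most apps rely on; the function name and threshold are assumptions for this example, not any specific app's code.

```python
from datetime import date

def passes_age_gate(birthdate: date, minimum_age: int = 13) -> bool:
    """A typical self-reported gate: trusts whatever birthdate the user enters."""
    today = date.today()
    age = today.year - birthdate.year - (
        (today.month, today.day) < (birthdate.month, birthdate.day)
    )
    return age >= minimum_age

# A 10-year-old who claims a birth year three years earlier sails through.
print(passes_age_gate(date(2015, 6, 1)))  # honest birthdate -> False
print(passes_age_gate(date(2012, 6, 1)))  # claimed birthdate -> True
```

Nothing in the check verifies the date against any record, which is exactly why lying about a birth year defeats it in seconds.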
Younger children face particular vulnerabilities. Kids under 12 are still building critical thinking skills and struggle to evaluate whether information sources are credible. Asking an 8-year-old to assess whether a chatbot’s friendship advice is trustworthy may be asking too much. And the presence of young children in the dataset (12 kids ages 4 to 7) shows just how easily these barriers fall.
Since the study captured only text typed into mobile apps, excluding web browser use and voice commands, the real numbers may be even higher.
Generative AI isn’t disappearing. These tools are becoming more sophisticated and accessible every month. The American Psychological Association has warned that chatbots can offer benefits like structured learning support but also pose risks depending on how they’re used.
Banning AI entirely isn’t realistic. Your teenager can access ChatGPT from any device, anywhere, anytime. But letting kids explore without guardrails exposes them to problems they’re not ready to handle. Start conversations about what AI can and can’t do. Set time limits. Check in regularly with genuine curiosity: “What did you talk about with the chatbot today?” “Did it give you good advice?”
Policy debates are heating up around stricter age verification and safety features for younger users. But right now, checking a box that says “I am 13 or older” is the only barrier. The fact that this shift happened in just two years, largely without public awareness or meaningful regulation, should alarm anyone invested in child development.
Disclaimer: This article is based on peer-reviewed research and is intended for informational purposes only. It does not constitute medical, psychological, or parenting advice. Parents concerned about their child’s technology use should consult with qualified healthcare or mental health professionals.
Researchers relied on data from a parental monitoring app, which may not represent all US youth. Families using such apps could have higher incomes or more involved parenting styles. The study lacked demographic information beyond year of birth, making it impossible to analyze differences by race, ethnicity, or socioeconomic status. Data collection focused on text-based mobile app usage, excluding web browser activity and voice interactions, which means actual usage rates might be higher. The analysis could not determine what content children were inputting into chatbots or what responses they received.
Anne J. Maheux received funding from the Winston Family Foundation. Samir Akre-Bhide, Debra Boeldt, Jessica E. Flannery, Zachary Richardson, and Scott H. Kollins are employed by Aura, the company that provided the monitoring app data. Several authors hold equity or stock options in Aura. Eva H. Telzer receives funding from the Winston Family Foundation and has served as an expert witness in social media litigation.
Authors: Anne J. Maheux, PhD; Samir Akre-Bhide, PhD; Debra Boeldt, PhD; Jessica E. Flannery, PhD; Zachary Richardson, PhD; Kaitlyn Burnell, PhD; Eva H. Telzer, PhD; Scott H. Kollins, PhD | Affiliations: University of North Carolina at Chapel Hill (Maheux, Burnell, Telzer); Aura, Boston, Massachusetts (Akre-Bhide, Boeldt, Flannery, Richardson, Kollins) | Journal: JAMA Network Open | Published: February 2, 2026 | Volume/Issue: 9(2):e2556631 | DOI: 10.1001/jamanetworkopen.2025.56631 | Study Type: Cross-sectional observational study | Data Collection Period: September 1, 2024 to April 1, 2025 | Open Access: Published under CC-BY-NC-ND License
About StudyFinds Analysis
Called "brilliant," "fantastic," and "spot on" by scientists and researchers, our acclaimed StudyFinds Analysis articles are created using an exclusive AI-based model with complete human oversight by the StudyFinds Editorial Team. For these articles, we use an unparalleled LLM process across multiple systems to analyze entire journal papers, extract data, and create accurate, accessible content. Our writing and editing team proofreads and polishes each and every article before publishing. With recent studies showing that artificial intelligence can interpret scientific research as well as (or even better than) field experts and specialists, StudyFinds was among the earliest to adopt and test this technology before approving its widespread use on our site. We stand by our practice and continuously update our processes to ensure the very highest level of accuracy. Read our AI Policy (link below) for more information.
StudyFinds publishes digestible, agenda-free, transparent research summaries that are intended to inform the reader as well as stir civil, educated debate. We neither agree nor disagree with any of the studies we post; rather, we encourage our readers to debate the veracity of the findings themselves. All articles published on StudyFinds are vetted by our editors prior to publication and include links back to the source or corresponding journal article, if possible.