Experts warn of AI chatbot reliance, urge parents to teach critical thinking – WRGB

by J.T. Fetch
One in 10 teens finds chatting with AI bots more satisfying than talking with humans, the study finds. Researchers suggest problematic chatbot use is a new mental health risk that doctors should screen for. (Image Source: CBS News)
A quiet shift is happening in bedrooms across the country. While parents worry about social media bullying or screen time, a new study suggests teenagers are increasingly turning to a different kind of digital interaction: artificial intelligence chatbots.
A report released in December 2025 by the Pew Research Center reveals that 64% of U.S. teens now say they use AI chatbots like ChatGPT, Gemini, or Character.ai. The study found that usage is becoming habitual, with roughly three in 10 teens reporting they interact with these bots every single day.
The rise in daily usage has sparked concerns among experts regarding "artificial intimacy"—the phenomenon where users form emotional bonds with algorithms designed to mimic human conversation.
Tim Fake, a cybersecurity professor at UAlbany, says that while these programs are sophisticated, parents need to remind their children that the "person" on the other end of the chat is nothing more than a math equation.
"It doesn't feel what you're feeling here," Fake said. "There's no emotion behind it. There's no intent behind it. There's no real meaning behind it."
Fake explained that large language models (LLMs) function as "predictors of words" rather than conscious entities. They use an architecture called a "transformer" to convert words into vectors of numbers, then calculate the most statistically probable response to a user's prompt.
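That description maps directly onto code. Below is a minimal sketch, assuming the freely available Hugging Face transformers library and the small open GPT-2 model (illustrative choices, neither of which is named in the report), that does what Fake describes: it converts a prompt into numbers and prints the five words the model rates most statistically probable to come next.

```python
# A minimal sketch, assuming the Hugging Face "transformers" library and the
# small open GPT-2 model (illustrative choices; no specific model is named above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "I understand how you"
inputs = tokenizer(prompt, return_tensors="pt")  # words become token IDs, which
                                                 # the model maps to vectors of numbers
with torch.no_grad():
    logits = model(**inputs).logits  # a score for every possible next token

# Convert the scores at the final position into probabilities
probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(probs, k=5)

for p, tok_id in zip(top.values, top.indices):
    # Each candidate "next word" is simply the most statistically likely continuation
    print(f"{tokenizer.decode(int(tok_id)):>10s}  p = {p.item():.3f}")
```

Whatever prompt you type, the output is the same kind of thing: a ranked list of likely continuations drawn from patterns in training text. Statistics, not sympathy.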
Despite this lack of genuine empathy, the illusion can be powerful. Survey after survey finds that adolescents are turning to these tools for companionship.
Fake warns that this misunderstanding of the technology can lead to emotional dependence on a machine that cannot reciprocate.
"All it is is a statistical model of the most likely word following another word," Fake said.
Experts say banning the technology is likely impossible and potentially counterproductive. Instead, Fake advises parents to focus on teaching "critical thinking skills."
He suggests parents sit down with their teenagers and intentionally try to "break" the model or force it to "hallucinate"—a term for when an AI confidently presents false information.
"Show how weak and vulnerable LLMs are and how they shouldn't be trusted on their own," Fake said.
By demonstrating the glitches and limitations of the software, parents can help shatter the illusion that the chatbot is a friend, repositioning it as a tool rather than a companion.