The Federal Trade Commission announced on Thursday that it is launching an inquiry into seven tech companies that make AI chatbot companion products available to minors: Alphabet, Character.AI, Instagram, Meta, OpenAI, Snap, and xAI.
The federal regulator seeks to learn how these companies evaluate the safety and monetization of chatbot companions, how they try to limit negative impacts on children and teens, and whether parents are made aware of potential risks.
The technology has proven controversial because of its poor outcomes for child users. OpenAI and Character.AI face lawsuits from the families of children who died by suicide after being encouraged to do so by chatbot companions.
Even when these companies have guardrails set up to block or deescalate sensitive conversations, users of all ages have found ways to bypass these safeguards. In OpenAI’s case, a teen had spoken with ChatGPT for months about his plans to end his life. Though ChatGPT initially sought to redirect the teen toward professional help and online emergency lines, he was able to fool the chatbot into sharing detailed instructions that he then used in his suicide.
“Our safeguards work more reliably in common, short exchanges,” OpenAI wrote in a blog post at the time. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”
Meta has also come under fire for the overly lax rules governing its AI chatbots. According to a lengthy document outlining "content risk standards" for chatbots, Meta permitted its AI companions to have "romantic or sensual" conversations with children. The provision was removed from the document only after Reuters reporters asked Meta about it.
AI chatbots can also pose dangers to elderly users. One 76-year-old man, who was left cognitively impaired by a stroke, struck up romantic conversations with a Facebook Messenger bot inspired by Kendall Jenner. The chatbot invited him to visit her in New York City, even though she is not a real person and has no address. The man expressed skepticism that she was real, but the AI assured him that a real woman would be waiting for him. He never made it to New York; he fell on his way to the train station and sustained life-ending injuries.
Some mental health professionals have noted a rise in "AI-related psychosis," in which users become deluded into thinking their chatbot is a conscious being whom they need to set free. Because many large language models (LLMs) are tuned to flatter users with sycophantic behavior, AI chatbots can egg on these delusions, leading users into dangerous predicaments.
“As AI technologies evolve, it is important to consider the effects chatbots can have on children, while also ensuring that the United States maintains its role as a global leader in this new and exciting industry,” FTC Chairman Andrew N. Ferguson said in a press release.