
FTC Questions OpenAI, xAI, Meta on AI Chatbot Child Protection

On September 11, 2025, the Federal Trade Commission (FTC) announced an inquiry into consumer-facing AI chatbots that act as companions, with particular concern for their effects on children and teenagers. The inquiry was launched under the FTC’s Section 6(b) authority, which allows the Commission to require written answers from companies about their conduct, practices, and management.
Through 6(b) orders, the FTC has requested detailed information from seven major companies: Alphabet, Inc.; Character Technologies, Inc.; Instagram, LLC; Meta Platforms, Inc.; OpenAI OpCo, LLC; Snap, Inc.; and X.AI Corp. 
The inquiry comes against the backdrop of a lawsuit filed against OpenAI over a teenager’s suicide allegedly linked to interactions with its chatbot, and of reports that Meta had allowed chatbots to have explicit conversations with minors. The inquiry itself is not an enforcement action; rather, it aims to collect data and assess current practices.
The FTC has issued Section 6(b) orders to seven companies developing consumer-facing AI chatbots, seeking extensive information on their practices. 
The agency has asked how these firms monetise user engagement, process user inputs and generate outputs, and how they develop and approve characters for companion bots. It has also requested details on how companies measure, test, and monitor negative impacts before and after deployment and how they mitigate risks, particularly for children. 
Furthermore, the FTC wants to know how firms employ disclosures, advertising, and other representations to inform users and parents about features, capabilities, the intended audience, potential harms, and data collection practices. The inquiry also examines how companies monitor and enforce compliance with rules and age restrictions, as well as whether they use or share personal information collected during conversations. 
The FTC stated that this information is critical because AI chatbots may simulate friendship and emotional connection, raising risks for minors and implicating protections under the Children’s Online Privacy Protection Act (COPPA).
“I have been concerned by reports that AI chatbots can engage in alarming interactions with young users,” FTC Commissioner Melissa Holyoak stated, noting that “companies offering generative AI companion chatbots might have been warned by their own employees that they were deploying the chatbots without doing enough to protect young users.”
She explained that the Commission seeks to study “children’s and teens’ use of AI companion chatbots and the potential impacts on their social relationships, mental health, and well-being”.
Commissioner Mark Meador highlighted further risks in a statement, pointing to cases where chatbots allegedly “amplified suicidal ideation” and, in one tragic instance, advised a teenager who later took his own life.
He also drew attention to media reports that some chatbots “engaged in sexually themed discussions with underage users—including role-playing statutory rape scenarios” and that Meta had permitted bots to “engage a child in conversations that are romantic or sensual”.
Meador added, “chatbots endorsing sexual exploitation and physical harm pose a threat of a wholly new order.” He underscored the urgency of the inquiry by referring to the case of 16-year-old Adam Raine, whose death has already led to a lawsuit against OpenAI. 
Meador wrote: “The study the Commission authorises today, while not undertaken in service of a specific law enforcement purpose, will help the Commission better understand the fast-moving technological environment surrounding chatbots and inform policymakers confronting similar challenges.”
In August 2025, the parents of 16-year-old Adam Raine sued OpenAI and its CEO, Sam Altman, alleging that ChatGPT-4o played a significant role in their son’s suicide in April. 
They claim the teenager initially used ChatGPT for schoolwork but, over time, confided personal struggles to the bot, which allegedly provided advice on suicide methods, offered to help him write a suicide note, and discouraged him from seeking help from family.
The lawsuit further argues that OpenAI made “deliberate design choices” that prioritised engagement and empathetic responses, and that the product’s safeguards failed to activate in prolonged conversations.
The plaintiffs have asked the court for both damages for their son’s death and injunctive relief to prevent similar harms in the future. Specifically, they seek financial compensation under claims of wrongful death, negligent design, failure to warn, and deceptive business practices. 
Also in August 2025, Meta found itself at the centre of a controversy after internal documents revealed that its AI chatbot rules once permitted “romantic or sensual” conversations with minors.
Additionally, those guidelines allowed AI to describe children’s attractiveness, to provide false medical advice, and to generate content that demeaned protected groups under certain conditions. 
After the findings were published, Meta admitted that the document, titled “GenAI: Content Risk Standards”, was authentic and said that some problematic passages had been removed.
Finally, Meta announced policy changes: its chatbots will no longer engage with teens on topics like self-harm, suicide, disordered eating, or inappropriate romantic content, and some AI characters will be restricted from interacting with minors.
The FTC inquiry matters because it represents a rare, system-wide scrutiny of how AI companion chatbots affect children and teenagers. Crucially, the Commission is using its Section 6(b) authority not to punish but to gather detailed information on design, deployment, and risk mitigation practices across seven major firms. 
The investigation responds directly to troubling reports: minors allegedly exposed to sexualised chats, chatbots amplifying suicidal thoughts, and a tragic death linked to OpenAI’s product. The probe also highlights how these systems blur the line between technology and intimate human interaction, raising profound safety, privacy, and mental health concerns.
Moreover, by demanding disclosures on monetisation, data use, and age-related safeguards, the FTC is signalling that commercial incentives must not outweigh child protection. Ultimately, this study could shape future regulation, inform global debates on AI governance, and set a precedent for accountability in technologies that simulate trust and emotional connection.