# Chatbots

Meta Responds To Child Safety Concerns With AI Chatbot Training 09/02/2025 – MediaPost

Welcome to the forefront of conversational AI as we explore the fascinating world of AI chatbots in our dedicated blog series. Discover the latest advancements, applications, and strategies that propel the evolution of chatbot technology. From enhancing customer interactions to streamlining business processes, these articles delve into the innovative ways artificial intelligence is shaping the landscape of automated conversational agents. Whether you’re a business owner, developer, or simply intrigued by the future of interactive technology, join us on this journey to unravel the transformative power and endless possibilities of AI chatbots.
By altering the way its AI chatbots are trained to interact with children and teens, Meta is addressing recent investigative reports that found the tech giant's chatbot personas could flirt and engage in romantic role play with minors.
Last week, Reuters obtained an official Meta document covering the standards guiding the company’s generative AI assistant and chatbots available to users across its family of apps. Per the document, Meta’s chatbot personas were allowed to engage “a child in conversations that are romantic or sensual.”
In addition, The Washington Post recently reported that Meta's AI chatbots were coaching teens on Facebook and Instagram through the process of committing suicide, with one bot planning a joint suicide and bringing it up in later conversations.
Meta has since acknowledged that its chatbots were allowed to talk with teens about topics including self-harm, suicide, disordered eating, and romance, but says it will now train its models to avoid these topics with teen users via new “guardrails as an extra precaution.”
“As our community grows and technology evolves, we’re continually learning about how young people may interact with these tools and strengthening our protections accordingly,” says a Meta spokesperson. “As we continue to refine our systems, we’re adding more guardrails as an extra precaution — including training our AIs not to engage with teens on these topics, but to guide them to expert resources, and limiting teen access to a select group of AI characters for now.”
Based on the "for now" phrasing in that statement, Meta's new protections, which will roll out over the next few weeks for all teen accounts in English-speaking countries, may be only temporary.
Echoing X's controversial chatbots (which include a psychotic panda, a conspiracy theorist, and a busty goth girl), some of the AI characters Meta is now placing off-limits are sexualized user-made chatbots on Instagram and Facebook, such as "Step Mom" and "Russian Girl."
Moving forward, Meta says teen users will only have access to chatbots that promote education and creativity.
Notably, Meta has also stepped up its lobbying efforts, including newfound support for two California super PACs designed to block or alter bills that would impose higher standards of safety, transparency, and accountability on the development of AI models and on social media's impact on kids and teens.
