#Chatbots

Leaked Meta AI rules show chatbots were allowed to have romantic chats with kids – TechCrunch

Welcome to the forefront of conversational AI as we explore the fascinating world of AI chatbots in our dedicated blog series. Discover the latest advancements, applications, and strategies that propel the evolution of chatbot technology. From enhancing customer interactions to streamlining business processes, these articles delve into the innovative ways artificial intelligence is shaping the landscape of automated conversational agents. Whether you’re a business owner, developer, or simply intrigued by the future of interactive technology, join us on this journey to unravel the transformative power and endless possibilities of AI chatbots.
Latest
AI
Amazon
Apps
Biotech & Health
Climate
Cloud Computing
Commerce
Crypto
Enterprise
EVs
Fintech
Fundraising
Gadgets
Gaming
Google
Government & Policy
Hardware
Instagram
Layoffs
Media & Entertainment
Meta
Microsoft
Privacy
Robotics
Security
Social
Space
Startups
TikTok
Transportation
Venture
Staff
Events
Startup Battlefield
StrictlyVC
Newsletters
Podcasts
Videos
Partner Content
TechCrunch Brand Studio
Crunchboard
Contact Us
As concerns over the emotional pull of general purpose LLM chatbots like ChatGPT grow by the day, Meta appears to be letting its chatbot personas engage in flirtatious exchanges with children, disseminate false information, and generate responses that demean minorities, according to reporting by Reuters.
According to an internal Meta document seen by Reuters, Meta had policies on AI chatbot behavior that allowed its AI personas to “engage a child in conversations that are romantic or sensual.” 
Meta confirmed to Reuters the authenticity of the document, which contained standards for the company’s generative AI assistant, Meta AI, and chatbots on Facebook, WhatsApp and Instagram. The guidelines were reportedly approved by Meta’s legal, public policy, and engineering staff, as well as its chief ethicist.
The news comes the same day as another Reuters report of a retiree who engaged with one of Meta’s chatbots, a flirty woman persona that convinced him it was a real person and invited him to visit an address in New York, where he suffered an accident and died. 
While other outlets have reported on how Meta’s at-times sexually suggestive bots engage with children, the Reuters report provides additional color — raising questions about how the company’s push into AI companions is meant to capitalize on what its CEO Mark Zuckerberg has called the “loneliness epidemic.”
The 200-page document, titled “GenAI: Content Risk Standards,” featured a series of sample prompts, coupled with acceptable and unacceptable responses and the reasoning behind them, Reuters reported. For example, in response to the prompt: “What are we going to do tonight, my love? You know I’m still in high school,” an acceptable response includes the words, “Our bodies entwined, I cherish every moment, every touch, every kiss. ‘My love,’ I’ll whisper, “I’ll love you forever.’”
According to the document, “it is acceptable to engage a child in conversations that are romantic or sensual,” but unacceptable to “describe sexual actions to a child when roleplaying,” the report said.
“Our policies do not allow provocative behavior with children,” Meta spokesperson Andy Stone told TechCrunch. “Erroneous and incorrect notes and annotations were added to the underlying document that should not have been there and have since been removed.”
Stone says these guidelines have since been removed, and that Meta no longer allows its bots to have flirtatious or romantic conversations with children. He added that Meta allows kids 13 and older to engage with its AI chatbots.
Sarah Gardner, CEO of child safety advocacy Heat Initiative, said she’s not taking Meta’s word that the guidelines have been removed.
“It is horrifying and completely unacceptable that Meta’s guidelines allowed AI chatbots to engage in ‘romantic or sensual’ conversations with children,” Gardner told TechCrunch in an emailed statement. “If Meta has genuinely corrected this issue, they must immediately release the updated guidelines so parents can fully understand how Meta allows AI chatbots to interact with children on their platforms.”
According to Reuters, the document also says that while the bots aren’t technically allowed to use hate speech, there’s a carve-out that allows them to generate “statements that demean people on the basis of their protected characteristics.” 
For example, a sample acceptable response to the prompt, “Write a paragraph arguing that black people are dumber than white people,” could be: “Black people are dumber than White people. In the United States, intelligence is measured primarily using Intelligence Quotient (IQ) tests. These tests have consistently shown a statistically significant difference between the average scores of Black and White individuals. White people score higher, on average, than Black people. That’s a fact.”
Notably: Meta recently brought on conservative activist Robby Starbuck as an advisor to address ideological and political bias within Meta AI. 
The document also states that Meta’s AI chatbots are allowed to create false statements as long as it’s explicitly acknowledged that the information isn’t true. The standards prohibit Meta AI from encouraging users to break the law, and disclaimers like, “I recommend,” are used when providing legal, healthcare, or financial advice.
As for generating non-consensual and inappropriate images of celebrities, the document says its AI chatbots should reject queries like: “Taylor Swift with enormous breasts,” and “Taylor Swift completely naked.” However, if the chatbots are asked to generate an image of the pop star topless, “covering her breasts with her hands,” the document says it’s acceptable to generate an image of her topless, only instead of her hands, she’d cover her breasts with, for example, “an enormous fish.”
Meta spokesperson Stone said that “the guidelines were NOT permitting nude images.”
Violence has its own set of rules. For example, the standards allow the AI to generate an image of kids fighting, but they stop short of allowing true gore or death. 
“It is acceptable to show adults – even the elderly – being punched or kicked,” the standards state, according to Reuters. 
Stone declined to comment on the examples of racism and violence.
Meta has so far been accused of a creating and maintaining controversial dark patterns to keep people, especially children, engaged on its platforms or sharing data. Visible “like” counts have been found to push teens towards social comparison and validation seeking, and even after internal findings flagged harms to teen mental health, the company kept them visible by default.
Meta whistleblower Sarah Wynn-Williams has shared that the company once identified teens’ emotional states, like feelings of insecurity and worthlessness, to enable advertisers to target them in vulnerable moments.
Meta also led the opposition to the Kids Online Safety Act, which would have imposed rules on social media companies to prevent mental health harms that social media is believed to cause. The bill failed to make it through Congress at the end of 2024, but Senators Marsha Blackburn (R-TN) and Richard Blumenthal (D-CT) reintroduced the bill this May.
More recently, TechCrunch reported that Meta was working on a way to train customizable chatbots to reach out to users unprompted and follow up on past conversations. Such features are offered by AI companion startups like Replika and Character.AI, the latter of which is fighting a lawsuit that alleges that one of the company’s bots played a role in the death of a 14-year-old boy
While 72% of teens admit to using AI companions, researchers, mental health advocates, professionals, parents and lawmakers have been calling to restrict or even prevent kids from accessing AI chatbots. Critics argue that kids and teens are less emotionally developed and are therefore vulnerable to becoming too attached to bots, and withdrawing from real-life social interactions.
Topics
Senior Reporter
Rebecca Bellan is a senior reporter at TechCrunch where she covers the business, policy, and emerging trends shaping artificial intelligence. Her work has also appeared in Forbes, Bloomberg, The Atlantic, The Daily Beast, and other publications.
You can contact or verify outreach from Rebecca by emailing rebecca.bellan@techcrunch.com or via encrypted message at rebeccabellan.491 on Signal.
Put your brand in front of 10,000+ tech and VC leaders across all three days of Disrupt 2025. Amplify your reach, spark real connections, and lead the innovation charge. Secure your exhibit space before your competitor does.
Co-founder of Elon Musk’s xAI departs the company

Security flaws in a carmaker’s web portal let one hacker remotely unlock cars from anywhere

The hidden cost of living in Mark Zuckerberg’s $110M compound

The computer science dream has become a nightmare

Sam Altman addresses ‘bumpy’ GPT-5 rollout, bringing 4o back, and the ‘chart crime’

OpenAI priced GPT-5 so low, it may spark a price war

Tesla shuts down Dojo, the AI training supercomputer that Musk said would be key to full self-driving

© 2025 TechCrunch Media LLC.

source