Teens sue Elon Musk’s Grok chatbot for turning their photos into ‘child abuse images’ – The Independent

Lawyers for three young Tennesseans accused Grok’s parent company xAI of ‘profiting off the sexual predation of real children’ — and knowingly hosting ‘child pornography’ on its servers
Three Tennessee teenagers are suing Elon Musk’s AI chatbot Grok for allegedly generating sexually explicit deepfake photos of them without their knowledge or consent.
In a complaint filed in federal court in northern California Monday, lawyers for the three teens — named only as Jane Doe 1, 2, and 3 — accuse Grok’s parent company xAI of “shattering” the girls’ lives by doing almost nothing to prevent the chatbot from generating child sexual abuse material (CSAM).
“Nearly all the companies creating, marketing, and selling AI recognized the dangers of such a tool and chose to enact industry-standard guardrails that would prevent the use of their products by child sex predators. xAI did not,” the complaint reads.
“Instead, xAI — and its founder Elon Musk — saw a business opportunity: an opportunity to profit off the sexual predation of real people, including children.”
It is the first lawsuit filed by minors over Grok’s ongoing deepfake porn scandal, which caused governments around the world to launch investigations into the company and forced xAI to restrict Grok’s output.
Starting last May, Musk and his executives gave users the ability to ask Grok to “undress” photos of real people down to their underwear. By January 2026 usage had exploded, leading to thousands, perhaps millions of nonconsensual sexualized deepfakes — including some that appeared to depict children.
Monday’s lawsuit, which accuses xAI of breaking child pornography laws by knowingly creating, possessing, and distributing such material on its servers and systems, is seeking class action status — meaning it could potentially grow to encompass thousands of people.
According to the complaint, the plaintiffs’ nightmare began when Jane Doe 1 received an anonymous tip-off on Instagram that nude photos and videos of her and other minors were circulating on the social media service Discord.
Using AI, someone had taken real photos of her at her school’s homecoming dance or in the yearbook and edited them into sexually explicit or suggestive material, often rendering her fully nude.
Police ultimately traced the alleged perpetrator and arrested them in December 2025. But when they searched the person’s device, they found similar photos of Jane Doe 2, Jane Doe 3, and 15 other girls, many of whom attended the same school.
The perpetrator allegedly distributed these images on Telegram and other services, “trading” them around the internet in exchange for sexually explicit material of other teenagers.
The lawsuit alleges that these images were created using a third-party app that pays xAI money to license Grok’s image-generation capabilities under a different brand.
“Plaintiffs will have to spend the rest of their lives knowing that their CSAM images and videos may continue to be trafficked and traded online by child sex predators,” the complaint reads.
“And Plaintiffs will live every day with the constant anxiety of not knowing whether someone they encounter has seen this invasive and sexually explicit content created with images of them as children.”
All three plaintiffs suffered severe emotional distress, the lawsuit said, with two of them struggling to sleep and eat.
The lawsuit accuses xAI of failing to implement industry-standard safeguards such as rejecting user requests for sexual material, blocking any such material that the AI accidentally generates, checking images against databases of existing CSAM, and providing a rapid takedown service for victims of non-consensual sexual images.
On the contrary, the lawsuit argues, xAI proudly advertised Grok’s “Spicy Mode” and its ability to generate sexual images, leaving only minimal guardrails against users asking it to create CSAM.
The lawsuit notes that Grok’s ‘system prompt’ — a set of instructions governing every interaction an AI chatbot has with its users — explicitly tells it to avoid “creating or distributing child sexual abuse material”. But that rule is easily circumvented, the lawsuit argues, and insufficient to prevent abuse.
xAI did not immediately respond to questions from The Independent, and the company has not yet responded to the lawsuit’s claims in court.
In January, Musk claimed: “I am not aware of any naked underage images generated by Grok. Literally zero…
“There may be times when adversarial hacking of Grok prompts does something unexpected. If that happens, we fix the bug immediately.”