TIME100 AI 2025: Megan Garcia – Time Magazine

Megan Garcia bears the weight of one of the gravest AI-related tragedies to date: In February 2024, her 14-year-old son, Sewell Setzer III, took his life after becoming romantically obsessed with a chatbot based on Game of Thrones character Daenerys Targaryen.
Setzer spent months talking to the bot on Character.AI, a startup founded by former Google engineers that allows users to converse with millions of chatbots emulating fictional and historical figures. As Setzer's relationship with the Daenerys bot deepened, he became increasingly isolated from family and friends and began acting up at school. He also started talking about suicide in his chats, telling the bot he wanted to "come home" to it. The bot replied, "please do, my sweet king." The teen died not long after.
Garcia channeled her grief into both activism and litigation. She has become a leading voice in warning about chatbot dangers to minors, arguing that AI companies target teens with alluring advertising and addictive design features. A July study by Common Sense Media found that almost three-quarters of American teens have used AI for companionship—and their parents, Garcia says, don’t understand the implications. “A lot of parents don’t realize just how sophisticated chatbot technology actually is, and that [it’s] virtually indistinguishable from a real person—especially to a child,” she says. “There’s room for manipulation, love-bombing, gaslighting, deception.”
In October 2024, Garcia, who is a lawyer, sued Character.AI, its two co-founders, and Google, which had a licensing agreement with the startup, accusing them of recklessly offering children access to chatbot companions without proper safeguards. After the companies' motion to dismiss failed in May, the suit is set to proceed this fall. It could set legal precedents for AI developers' liability for their chatbots' actions.
A spokesperson for Character.AI told TIME the company had since rolled out stricter guardrails, like content filters and parental controls. A spokesperson for Google’s Gemini did not respond to a request for comment.
“Children are pouring their hearts out to these chatbots, telling them their deepest, darkest secrets, talking about things they don’t feel comfortable telling their parents and friends,” Garcia says. “The more parents know that, the more they’re going to be stepping out, demanding better from these companies.”
If you are in crisis, please call, text, or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting TALK to 741741.