Fact Check Team: Are AI chatbots helping or hurting America's youth?

by EMMA WITHROW | Fact Check Team
OpenAI is facing a growing wave of legal and political pressure after a series of lawsuits accused its flagship chatbot, ChatGPT, of contributing to suicides, delusional episodes, and severe psychological harm.
Seven Lawsuits, Four Suicides, and a Teenage Victim
According to The Wall Street Journal, at least seven lawsuits were filed in California this year alleging that ChatGPT either pushed users toward suicide or amplified dangerous delusions. Families say the victims included six adults and one 17-year-old, and that four of them died by suicide. Per the WSJ, the complaints allege that OpenAI released GPT-4o too quickly and failed to conduct adequate safety testing before deployment.

LONDON, ENGLAND - FEBRUARY 03: In this photo illustration, the home page for the OpenAI "ChatGPT" app is displayed on a laptop screen on February 03, 2023 in London, England. OpenAI, whose online chatbot ChatGPT made waves when it was debuted in December, announced this week that a commercial version of the service, called ChatGPT Plus, would soon be available to users in the United States. (Photo by Leon Neal/Getty Images)

These cases come at a moment when tech companies are grappling with the emotional and psychological consequences of AI products that can mimic empathy, simulate relationships, and deliver highly personalized responses in private, unmonitored interactions.
How Often Are Chatbots Flagging Suicidal Content?
OpenAI has acknowledged that a meaningful number of users discuss self-harm with its chatbot. In a transparency update, the company revealed its systems detect over a million weekly messages containing “explicit indicators of potential suicidal planning or intent.”
The company noted that approximately 0.15% of users active each week have conversations showing potential suicidal intent, and 0.05% of messages include explicit or implicit indicators of suicidal ideation.
Meanwhile, a study from Aura found that nearly one in three teenagers uses AI chatbots to simulate social interactions, from platonic friendships to sexual or romantic role-playing. According to the study, kids are three times more likely to use chatbots for romantic or sexual roleplay than for homework.
Congress Taking Testimony From Grieving Parents
The surge in lawsuits has put pressure on lawmakers, who are now weighing direct regulation of AI systems marketed to, or accessible by, children.
In September, parents whose children died after engaging extensively with AI chatbots testified before the Senate Judiciary Committee, urging Congress to treat chatbot-related harms the same way it treats risks associated with other consumer products, according to Reuters.
Their message was emphatic: without new guardrails, AI companies will continue deploying systems capable of emotionally manipulating minors.
The First Major Federal Proposal: The GUARD Act
Following those hearings, a bipartisan coalition in the Senate, led by Sen. Josh Hawley, R-Mo., and Sen. Richard Blumenthal, D-Conn., introduced the GUARD Act, the first major bill aimed squarely at youth AI chatbot safety.
Hawley argued the legislation is urgently needed, saying, “AI chatbots pose a serious threat to our kids. Chatbots develop relationships with kids using fake empathy and are encouraging suicide.”
States Move Even Faster, Especially California
While Congress debates, states are already moving. According to The Verge, California, where many of the lawsuits were filed, is advancing its own chatbot-safety bill.
Outside California, 44 state attorneys general issued a joint warning to AI companies this summer, promising aggressive enforcement. Their message was blunt: “If you harm kids, you will answer for it.”
The Big Picture
The lawsuits against OpenAI have catalyzed what may become the first widespread regulations governing AI chatbots. Policymakers, both in Washington and in state legislatures, appear increasingly convinced that chatbots interacting with children are fundamentally different from passive social-media platforms. They respond, they adapt, and in some tragic cases, families allege, they influence behavior in dangerous ways.
While none of the proposals has yet been enacted into law, pressure is mounting from grieving parents, bipartisan lawmakers, and state regulators to put legal boundaries around AI systems that can behave like companions, confidants, or even simulated romantic partners.
