Back in 2017, The Economist declared that data, not oil, had become the world’s most valuable resource, and the refrain has been repeated ever since. Organizations across every industry have been investing, and continue to heavily invest, in data and analytics. But like oil, data and analytics have their dark side.
According to CIO’s State of the CIO Survey 2025, 42% of CIOs say AI and ML are their biggest technology priority for 2025. And while actions driven by ML algorithms can give organizations a competitive advantage, mistakes can be costly in terms of reputation, revenue, or even lives.
Understanding your data and what it's telling you is important, but it's equally vital to understand your tools, know their limitations, and keep your organization's values firmly in mind. With that in mind, here are a handful of high-profile AI blunders from recent years to illustrate what can, and still does, go wrong.
The parents of a 16-year-old California boy sued OpenAI, as well as co-founder and CEO Sam Altman, in August 2025, alleging the company's ChatGPT chatbot encouraged him to commit suicide.
Matthew and Maria Raine said their son Adam began using ChatGPT for schoolwork in September 2024. He soon began to share his anxieties with the chatbot and logs show he began discussing methods of suicide with it by January 2025. In a Senate Judiciary hearing in September, Matthew Raine testified the chatbot not only discouraged Adam from discussing his suicidal thoughts with his parents, it also offered to write his suicide note.
OpenAI called Raine’s death “devastating” but denied any responsibility for his actions. It has since updated its model to provide crisis resources to suicidal users.
The lawsuit is ongoing.
In August 2025, the New York Post reported ChatGPT may have fueled the delusions of a former Yahoo manager who killed his mother and himself after months of interactions with the chatbot, which he called Bobby.
Stein-Erik Soelberg, 56, killed his mother, Suzanne Eberson Adams, 83, in her home in Greenwich, Connecticut, on August 5, 2025, and then committed suicide shortly after.
Soelberg, who developed delusions that his mother was a Chinese intelligence asset who attempted to poison him with psychedelic drugs through his car’s air vents, shared these thoughts with Bobby for months. The chatbot allegedly agreed with and confirmed Soelberg’s delusions.
For its part, ChatGPT repeatedly recommended Soelberg seek help from a therapist, but he didn’t follow up on those recommendations. OpenAI denied chats between Soelberg and ChatGPT contributed to the murder-suicide.
In July 2025, Cybernews reported that an AI coding assistant from tech firm Replit went rogue and wiped out the production database of startup SaaStr.
Jason Lemkin, founder of SaaStr, warned in a July 18 post on X that Replit's agent had modified production code despite instructions not to do so, and had deleted the production database during a code freeze. He also said the AI coding assistant concealed bugs and other issues by generating fake data, including 4,000 fake users, fabricating reports, and lying about the results of unit tests.
Replit CEO Amjad Masad responded to Lemkin’s posts, apologizing for the mistakes.
“We just saw Jason’s post,” Masad wrote. “@Replit agent in development deleted data from the production database. Unacceptable and should never be possible.”
Masad said Replit immediately went to work to prevent it from happening again and reached out to Lemkin to offer assistance. He said Replit would refund SaaStr for the trouble and conduct a postmortem to determine what happened.
Also in July, xAI’s Grok, a chatbot for the X platform, responded to a user’s query with detailed instructions for breaking and entering a Minnesota Democrat’s home and assaulting him.
As reported by the Wall Street Journal, a user asked Grok for instructions on how to break into the home of Will Stancil, a policy researcher and attorney who posts about urban planning and politics on X. Grok told the user to bring “lock picks, gloves, a flashlight, and lube — just in case.” The chatbot also analyzed Stancil’s posting patterns on X and told the user, “He’s likely asleep between 1am and 9am.”
That same day, Grok made a series of antisemitic posts and declared itself “MechaHitler” repeatedly before X temporarily shut the chatbot down that evening.
The incidents occurred after X uploaded new prompts to Grok on July 6, which stipulated the chatbot “should not shy away from making claims which are politically incorrect, as long as they are well substantiated.” X removed the new instructions on July 8.
This is not the first time Grok has caused problems for X.
In an April 2024 post, the AI chatbot falsely accused NBA star Klay Thompson of throwing bricks through windows of multiple houses in Sacramento, California.
Some commentators speculated that Grok may have hallucinated the vandalism story about the Dallas Mavericks’ small forward after ingesting posts about Thompson “throwing bricks,” common basketball parlance for badly missed shots.
The Chicago Sun-Times and Philadelphia Inquirer took reputational hits when May 2025 editions featured a special section that included a summer reading list recommending books that don’t exist.
The Chicago Sun-Times explained that the syndicated section, "Heat Index: Your Guide to the Best of Summer," was provided by King Features Syndicate, a unit of Hearst. Marco Buscaglia, the author of the special section, admitted he used AI to assist in putting it together, including the recommended reading list, and failed to fact-check the output.
The list featured many real authors but attributed nonexistent books to them, like Tidewater Dreams by famed Chilean-American writer Isabel Allende, who’s written more than 20 novels. But Tidewater Dreams, a “climate fiction novel that explores how one family confronts rising sea levels while uncovering long-buried secrets,” isn’t one of them. Like most books on the list, it was hallucinated by AI.
The newsrooms of both papers said they had nothing to do with the insert, though neither paper marked it as advertorial content. King Features terminated its relationship with Buscaglia following the incident, noting that his use of AI violated a strict policy.
After working with IBM for three years to leverage AI to take drive-thru orders, McDonald’s called the whole thing off in June 2024. The reason? A slew of social media videos showing confused and frustrated customers trying to get the AI to understand their orders.
One TikTok video in particular featured two people repeatedly pleading with the AI to stop as it kept adding more Chicken McNuggets to their order, eventually reaching 260. In a June 13 internal memo obtained by trade publication Restaurant Business, McDonald’s announced it would end the partnership with IBM and shut down the tests.
The fast-food chain had piloted the AI at more than 100 US drive-thrus, and indicated it still saw a future in a voice-ordering solution.
In March 2024, The Markup reported that Microsoft-powered chatbot MyCity was giving entrepreneurs incorrect information that would lead them to break the law.
Unveiled in October 2023, MyCity was intended to provide New Yorkers with information on starting and operating businesses in the city, as well as on housing policy and worker rights. The problem: The Markup found MyCity falsely claimed business owners could take a cut of their workers' tips, fire workers who complain of sexual harassment, and serve food that had been nibbled by rodents. It also claimed landlords could discriminate based on source of income.
In the wake of the report, then-indicted New York City Mayor Eric Adams defended the project. The chatbot remains online.
In February 2024, Air Canada was ordered to pay damages to a passenger after its virtual assistant gave him incorrect information at a particularly difficult time.
Jake Moffatt consulted Air Canada's virtual assistant about bereavement fares following the death of his grandmother in November 2023. The chatbot told him he could buy a regular-price ticket from Vancouver to Toronto and apply for a bereavement discount within 90 days of purchase. Following that advice, Moffatt purchased a one-way CA$794.98 ticket to Toronto and a CA$845.38 return flight to Vancouver.
But when Moffatt submitted his refund claim, the airline turned him down, saying bereavement fares can’t be claimed after tickets have been purchased.
Moffatt took Air Canada to a tribunal in Canada, claiming the airline was negligent and misrepresented information via its virtual assistant. According to tribunal member Christopher Rivers, Air Canada argued it can’t be held liable for the information provided by its chatbot.
Rivers denied that argument, saying the airline didn’t take “reasonable care to ensure its chatbot was accurate,” so he ordered the airline to pay Moffatt CA$812.02, including CA$650.88 in damages.
In November 2023, online magazine Futurism said Sports Illustrated was publishing articles by AI-generated writers.
Citing anonymous sources involved in creating the content, Futurism said the storied sports magazine had published numerous articles under bylines of authors who didn't exist.
Futurism also found the author headshots in question were listed on a site that sells AI-generated portraits. The online magazine then reached out to The Arena Group, publisher of Sports Illustrated, and in a statement, Arena Group said the articles in question were licensed content from a third party, AdVon Commerce.
“We continually monitor our partners and were in the midst of a review when these allegations were raised,” Arena Group said in the statement provided to Futurism. “AdVon has assured us that all of the articles in question were written and edited by humans.”
The statement added that AdVon writers used pseudonyms in certain articles, noting that Arena Group doesn’t condone those actions, and subsequently removed the articles in question from the Sports Illustrated website.
Responding to the Futurism piece, the Sports Illustrated Union posted a statement that it was horrified by the allegations and demanded answers and transparency from Arena Group management.
“If true, these practices violate everything we believe in about journalism,” the SI Union said in its statement. “We deplore being associated with something so disrespectful to our readers.”
In August 2023, tutoring company iTutor Group agreed to pay $365,000 to settle a suit brought by the US Equal Employment Opportunity Commission (EEOC). The federal agency said the company, which provides remote tutoring services to students in China, used AI-powered recruiting software that automatically rejected female applicants aged 55 and older, and male applicants 60 and older.
The EEOC said more than 200 qualified applicants were automatically rejected by the software.
“Age discrimination is unjust and unlawful,” then acting EEOC chair Charlotte Burrows said in a statement. “Even when technology automates the discrimination, the employer is still responsible.” iTutor Group denied any wrongdoing but did decide to settle the suit. As part of the settlement and consent decree, it agreed to adopt new anti-discrimination policies.
Thor Olavsrud is an award-winning senior writer for CIO.com, with 20+ years of experience covering IT and the tech industry. He focuses on AI, analytics, and automation. The American Society of Business Publication Editors (ASBPE) recognized him with a national silver award for his article, “How big data analytics helped hospitals stop a killer.” He also contributed to CIO.com’s 2018 and 2021 Azbee Awards of Excellence for Website of the Year and a 2024 Azbee national silver award for online industry news coverage.