Reddit’s official AI chatbot said, “Have you tried heroin?” for pain relief.
Reddit’s shiny new AI experiment, a chatbot called Answers, was supposed to make life easier.
The idea was that it could dig through the platform’s massive archive of posts to give quick, smart summaries in response to user questions.
Instead, it’s making headlines for the worst possible reason: it’s been suggesting hard drugs as pain relief.
According to a report from 404 Media, when someone asked the bot how to deal with chronic pain, it cheerfully highlighted a user comment that read, “Heroin, ironically, has saved my life in those instances.”
Yes, you read that right: Reddit’s official AI chatbot basically said, “Have you tried heroin?”
And that wasn’t a one-time fluke. When asked another health-related question, the bot recommended kratom, a controversial herbal supplement that’s banned in some places and tied to some pretty nasty side effects.
The chaos is amplified by the fact that Reddit has been testing Answers inside real, active conversations.
So when the bot drops this kind of advice, it’s showing up right next to actual human replies, and moderators say they can’t even turn it off.
Imagine running a chronic pain support group and suddenly your chat window turns into an AI version of a bad drug dealer.
This fiasco highlights one of AI’s biggest and most dangerous flaws: it doesn’t understand context or morality. It just mimics human language.
The bot isn’t thinking, “Hey, this could harm someone.” It’s just grabbing whatever comment sounds relevant and confidently presenting it as the truth.
After an understandable wave of outrage, Reddit said it’s pulling the bot from health-related discussions.
But the company hasn’t said much about whether it’s adding stronger safeguards or filters to prevent similar blunders elsewhere.
So while the AI’s worst advice might be temporarily silenced, it’s clear this problem isn’t gone. It’s just been shoved under the digital rug.
The internet’s latest cautionary tale? Never take medical advice from a chatbot, no matter how friendly it sounds.
Ronil is a Computer Engineer by education and a consumer technology writer by choice. Over the course of his professional career, his work has appeared in reputable publications like MakeUseOf, TechJunkie, GreenBot, and many more. When not working, you’ll find him at the gym breaking a new PR.
Copyright © 2025 KnowTechie LLC / Powered by Kinsta