OpenAI Draws the Line Between Health Information and Medical Advice
Mohammedia – OpenAI’s ChatGPT remains a reliable source for understanding health information, despite viral claims suggesting otherwise.
Recent online posts, including one on the prediction market site Kalshi, claimed that ChatGPT could no longer answer medical questions of any kind, as reported by Business Insider.
“ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information,” an OpenAI spokesperson told Business Insider.
This announcement follows an October 29 revision of OpenAI’s usage rules, which states that users may not obtain tailored, licensed advice on medical or legal matters through the platform.
The updated language appears intended to limit OpenAI’s liability as more people turn to ChatGPT with medical questions.
According to a 2024 KFF survey, one in six people use the chatbot at least monthly for medical inquiries.
The chatbot can still provide information on symptoms and research, and suggest over-the-counter remedies, but it will not offer diagnoses or prescriptions.
OpenAI’s health AI research lead, Karan Singhal, reiterated that model behavior “remains unchanged” and called the viral reports “not true.”
OpenAI has recently tightened its guidelines on AI-generated health guidance after a user reportedly fell ill from following unsafe advice.
According to a case report in the Annals of Internal Medicine, a 60-year-old man developed bromism, a form of bromide poisoning, after ChatGPT suggested he replace his table salt with sodium bromide, a toxic compound once used as a sedative. The journal noted that the chatbot issued no health warnings.
Although OpenAI continues to explore other healthcare applications, the new guidelines draw a clearer boundary between general health information and the actual practice of medicine.
All Rights Reserved © 2025 Morocco World News.