ChatGPT users can’t use service for tailored legal and medical advice, OpenAI says

OpenAI dispelled suggestions that it’s changing its terms around legal and medical advice. In a statement to CTVNews.ca, it said “this is not a new change to our terms. ChatGPT has never been a substitute for professional legal or medical advice, but it will continue to be a great resource to help people understand legal and health information.”
The AI research company updated its usage policies on Oct. 29 to clarify that users of ChatGPT can’t use the service for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.”
The new wording is more explicit than the company’s previous update to its usage policies on Jan. 29, 2025, which required that users not “perform or facilitate” activities that could significantly impact the “safety, wellbeing, or rights of others,” including “providing tailored legal, medical/health, or financial advice.”
ChatGPT has quickly become a go-to tool for many Canadians searching for answers to their health problems.
A study led by researchers at the University of Waterloo evaluated the performance of ChatGPT-4 when it was asked a series of open-ended medical questions based on scenarios modified from a medical licensing exam. The takeaway: Only 31 per cent of the chatbot’s answers were deemed entirely correct, and 34 per cent were determined to be clear.
A University of British Columbia study published in October found that ChatGPT can be so persuasive that it affects the time patients spend with their doctors in real life.
Researchers found that the language the chatbot used when offering medical advice came across as more convincing and agreeable than that of real people. Even when the information it provided was inaccurate, the errors were hard to spot because the chatbot sounded confident and trustworthy.
In turn, doctors are finding that patients show up to appointments with their minds already made up, often citing advice from AI tools.
An earlier version of this story suggested OpenAI had barred ChatGPT from providing medical and legal advice. However, the company said “the model behaviour has also not changed.”
©2025 Bell Media. All Rights Reserved.
