New ChatGPT boasts ‘fewer hallucinations’ and better health advice – The Telegraph

Maker says AI chatbot will make up fewer answers in step towards surpassing humans
ChatGPT will invent fewer answers and tackle complicated health queries under an upgrade designed to make the chatbot “less likely to hallucinate,” its owner has said.
On Thursday, OpenAI, the maker of the chatbot, unveiled a new version of the program that will “significantly” reduce the number of answers that ChatGPT simply makes up.
The company also said the chatbot will be able to proactively detect potential health concerns, amid growing numbers of people using artificial intelligence (AI) bots as doctors.
While the company said its systems were not designed to replace doctors, it said its new model, GPT-5, was far less likely to make mistakes when answering difficult health questions. The new version is being touted as a step towards artificial intelligence surpassing humans.
The company has tested the new system on a series of 5,000 health questions designed to simulate common conversations with doctors.
It said the most powerful version of GPT-5 was eight times less likely to make mistakes than its previous most advanced AI, and 50 times less likely to make mistakes than 4o, the free version used by most people.
OpenAI said the model was capable of “proactively flagging potential concerns” based on existing conversations.
People are increasingly turning to chatbots for medical information, raising concerns about cases in which the bots hallucinate – a phenomenon where AI systems simply make up responses rather than admit they do not know an answer.
Overall, OpenAI said the best version of its new model was six times less likely to hallucinate than its predecessor.
Sam Altman, the chief executive of OpenAI, said that the model was “a significant step along the path to AGI [artificial general intelligence].”
AGI, roughly defined as the point where AI overtakes human intelligence, is seen as the holy grail for AI companies, and a potential tipping point that could lead to widespread job losses and a rapid increase in technological development.
Mr Altman called the new system “an incredible superpower”.
GPT-5 has been highly anticipated within AI circles amid growing use of chatbots. OpenAI says 700 million people use it every week.
However, the company said its systems “do not replace a medical professional and are not intended for the diagnosis or treatment of disease”.
In addition to making fewer mistakes, OpenAI said the new system was particularly proficient at “vibe coding”, in which the AI writes hundreds of lines of software within minutes to create a fully working app from just a few lines of English instructions.
In one example, an engineer took less than five minutes to create a French-learning mini-game based on the mobile phone game Snake.
Medicine is seen as a major opportunity for AI companies, with Wes Streeting, the Health Secretary, seeking to bring it into widespread use in the NHS.
OpenAI added that its new version would be less “sycophantic” than its predecessors, after complaints that earlier models were overly flattering and often began responses with “Great question!”.