AI model finally learns to say ‘I don’t know’ in breakthrough to curb chatbot overconfidence

New training method may help tackle AI ‘hallucination’
South Korean researchers have developed a way to make AI models acknowledge when they are unfamiliar with a topic, much as humans do.
The breakthrough could improve the reliability of AI models used in fields like autonomous driving and medicine, researchers from the Korea Advanced Institute of Science and Technology say.
Previous research has exposed AI “overconfidence” as one of the major risks in the use of such tools to make decisions, especially in fields like medical diagnosis.
Commonly used AI models like OpenAI’s ChatGPT have been shown to “hallucinate”, or make up facts, as they are incentivised to make guesses rather than admit their lack of knowledge.
Now, researchers have developed a method that enables AI to recognise situations with unfamiliar or unseen knowledge, helping improve the overall reliability of chatbots.
They say a fundamental cause of overconfidence is the way AI models learn from their initial data using artificial neural networks, which form their backbone infrastructure.
Small errors that creep in at this stage can propagate and cause significant errors during subsequent training if they are not corrected.
Researchers found that when random data was input into a neural network during the initialisation phase, the model exhibited high confidence despite not having learned anything.
This led to “hallucination”.
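
The article includes no code, but the baseline effect is easy to probe. The snippet below is a toy illustration, not the researchers’ experiment: the architecture, input size and class count are all assumptions made for the sketch. It feeds pure noise to a freshly initialised classifier and measures how confident its predictions are.

```python
# Minimal sketch (not the study's code): measure how confident a
# freshly initialised, untrained classifier is on pure-noise inputs.
# Architecture and sizes are illustrative assumptions.
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small classifier straight from initialisation: it has learned nothing.
model = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 10),              # 10 hypothetical classes
)

noise = torch.randn(1000, 100)       # random inputs with no real structure
with torch.no_grad():
    probs = torch.softmax(model(noise), dim=1)

# A perfectly uncertain model would put ~0.10 on each of the 10 classes.
top_conf = probs.max(dim=1).values.mean().item()
print(f"mean top-class confidence on noise: {top_conf:.3f}")
```

A well-calibrated model would sit near the chance level of 0.10 in this setup; the researchers report that untrained networks instead respond to such inputs with high confidence.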
To address this, the researchers say they took cues from the way the human brain handles the same problem.
In humans, the brain generates spontaneous signals without any external input even before birth, a process thought to prepare neural circuits for later learning.
Mimicking this, scientists developed a system in which the neural network backbone of an AI model underwent brief pre-training with random noise inputs before actual learning.
This process, according to the researchers, helps the AI set a baseline for itself by adjusting its own uncertainty before it starts learning from data.
The warm-up can set a model’s initial confidence to a low level close to chance, significantly reducing its overconfidence bias.
In other words, researchers say, the method helps models first learn the state of “I don’t know anything yet”.
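
The article does not specify the warm-up objective, but one natural reading, sketched below, is to briefly train the untrained network on noise inputs against a uniform target, so that every class receives chance-level probability. Everything here, including the loss, step count and optimiser, is an illustrative assumption rather than the published method.

```python
# Minimal sketch of the "warm-up on noise" idea as described in the
# article, NOT the authors' published method. Pushing outputs towards
# a uniform distribution (chance-level confidence) is our assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

model = nn.Sequential(               # illustrative toy classifier
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 10),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
uniform = torch.full((1, 10), 0.1)   # the "I don't know anything yet" target

for step in range(200):              # brief warm-up; no real data involved
    noise = torch.randn(64, 100)     # random noise inputs
    log_probs = F.log_softmax(model(noise), dim=1)
    # Penalise any confidence above chance on meaningless input.
    loss = F.kl_div(log_probs, uniform.expand(64, -1), reduction="batchmean")
    opt.zero_grad()
    loss.backward()
    opt.step()

# Normal supervised training on real data would begin here, starting
# from a calibrated, low-confidence state.
```

Under this reading, the warm-up simply anchors the network’s output distribution at chance before any genuine learning occurs, which matches the article’s description of the model first learning the state of “I don’t know anything yet”.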
“While conventional models tend to give incorrect answers with high confidence even for data they have not encountered during training, models with warm-up training showed a clear improvement in their ability to lower confidence and recognise that they ‘do not know’,” researchers explained.
This can help AI develop the ability to distinguish “what it knows” from “what it does not know”.
“This study demonstrates that by incorporating key principles of brain development, AI can recognise its own knowledge state in a way that is more similar to humans,” Se-Bum Paik, an author of the study published in the journal Nature Machine Intelligence, said.
“This is important because it helps AI understand when it is uncertain or might be mistaken, not just improve how often it gives the right answer.”