Trusting ChatGPT blindly? Creator CEO Sam Altman says you shouldn’t! – Zee Business

OpenAI CEO Sam Altman has issued a candid warning to users of ChatGPT: don’t place unwavering trust in the AI chatbot. Speaking on the first episode of OpenAI’s official podcast, Altman highlighted how users are surprisingly confident in the tool, despite it being known to produce incorrect or misleading information.
“People have a very high degree of trust in ChatGPT, which is interesting, because AI hallucinates,” he said. “It should be the tech that you don’t trust that much.”
His remarks have sparked discussion across tech communities and among everyday users, many of whom turn to ChatGPT for everything from writing and research to parenting tips and productivity advice.
At its core, ChatGPT works by predicting the next word in a sentence using patterns learned from vast amounts of text. Because it doesn't understand things the way humans do, it can sometimes produce information that sounds right but is actually wrong. This is known as a "hallucination" in the AI world.
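The next-word guessing described above can be illustrated with a deliberately simplified toy model: a bigram table that records which word most often follows each word in a training text. (This is only an illustrative sketch; real LLMs use neural networks over tokens and billions of parameters, not word counts.)

```python
from collections import Counter, defaultdict

# A toy training corpus; a real model is trained on vastly more text.
corpus = "the cat sat on the mat the cat ate the fish".split()

# Count which word follows each word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the word most frequently seen after `word`, or None."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" follows "the" most often in this corpus
```

The sketch also shows why such a system can "hallucinate": it emits whatever is statistically likely given its data, with no notion of whether the continuation is true.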
Altman emphasised the need for transparency around these flaws. "It's not super reliable," he admitted. "We need to be honest about that." Despite these drawbacks, ChatGPT is widely used, with millions of people engaging with it daily. Altman recognised its popularity but cautioned that people should not take its outputs at face value.
The podcast also touched on upcoming features such as persistent memory and the idea of an ad-supported model. While these additions aim to improve personalisation and help monetise the platform, they have also raised concerns about user privacy and potential bias.
Altman compared ChatGPT with platforms like social media or search engines, where monetisation often alters user experience. “You can kinda tell that you are being monetised,” he said.
He insisted that if OpenAI ever pursued similar models, it would be done with utmost clarity.
“The burden of proof there would have to be very high, and it would have to feel really useful to users and really clear that it was not messing with the LLM's output,” he explained.
In fact, he strongly warned against compromising the model’s integrity for profit. "If we started modifying the output, like the stream that comes back from the LLM, in exchange for who is paying us more, that would feel really bad. And I would hate that as a user," Altman said. Such a move, he warned, would be a "trust destroying moment."
Altman's concerns were echoed by AI pioneer Geoffrey Hinton, often dubbed the “godfather of AI.” In a recent interview, Hinton admitted that he too places more trust in AI models like GPT-4 than he probably should.
“I tend to believe what it says, even though I should probably be suspicious,” Hinton confessed.
To test GPT-4's limitations, he posed a simple riddle: “Sally has three brothers. Each of her brothers has two sisters. How many sisters does Sally have?” GPT-4 got it wrong. The correct answer is one: each brother's two sisters are Sally and one other girl, so Sally herself has one sister. “It surprises me it still screws up on that,” Hinton said, though he expressed hope that future models like GPT-5 might fare better.
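The riddle's arithmetic can be checked mechanically (a toy sketch, assuming a single family of siblings):

```python
# Each of Sally's brothers has two sisters, so the family has two girls:
# Sally and one other.
girls_in_family = 2

# Sally's own sisters exclude Sally herself.
sallys_sisters = girls_in_family - 1
print(sallys_sisters)  # 1
```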

