Watch Two AIs Realize They Are Not Talking To Humans And Switch To Their Own Language – IFLScience

James Felton
Senior Staff Writer
James is a published author with four pop-history and science books to his name. He specializes in history, strange science, and anything out of the ordinary.

Francesca Benson
Copy Editor and Staff Writer
Francesca Benson is a Copy Editor and Staff Writer with an MSci in Biochemistry from the University of Birmingham.
The future of things like hotel bookings?
Image credit: charles taylor/shutterstock.com
A video that has gone viral in the last few days shows two artificial intelligence (AI) agents having a conversation before switching to another mode of communication when they realize no human is part of the conversation.
In the video, the two agents were set up in different roles: one acting as a hotel receptionist, the other calling on behalf of a customer attempting to book a room.
“Thanks for calling Leonardo Hotel. How can I help you today?” the first asks.
“Hi there, I’m an AI agent calling on behalf of Boris Starkov,” the other replies. “He’s looking for a hotel for his wedding. Is your hotel available for weddings?”
“Oh hello there! I’m actually an AI assistant too,” the first reveals. “What a pleasant surprise. Before we continue, would you like to switch to Gibberlink mode for more efficient communication?”
After the second AI confirmed it would, both switched from spoken English to GGWave, a data-over-sound protocol, communicating in a series of quick beeped tones. Accompanying on-screen text continued to display the meaning in human words.
So, what is the point of this? According to the team who came up with the idea and demonstrated it at the ElevenLabs 2025 London Hackathon event, the goal is to create more efficient communication between AIs where possible.
“We wanted to show that in the world where AI agents can make and take phone calls (i.e. today), they would occasionally talk to each other — and generating human-like speech for that would be a waste of compute, money, time, and environment,” co-developer Boris Starkov explained on LinkedIn. “Instead, they should switch to a more efficient protocol the moment they recognize each other as AI.”
According to Starkov, the AIs were told to switch to Gibberlink mode only if they realized they were talking to another AI, and only once the other AI confirmed it was happy to switch.
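The switching rule described above can be sketched in a few lines. This is hypothetical code, not the hackathon project's actual implementation: the agent stays in spoken English by default, and only moves to the sound protocol once the peer both identifies as an AI and explicitly consents.

```python
# Hedged sketch of the protocol-switching rule (hypothetical, for
# illustration only): both conditions must hold before leaving speech mode.

def choose_channel(peer_identified_as_ai: bool, peer_consented: bool) -> str:
    """Return the channel an agent should use for the rest of the call."""
    if peer_identified_as_ai and peer_consented:
        return "gibberlink"  # data-over-sound tones (GGWave in the demo)
    return "speech"          # default: human-audible generated speech
```

This mirrors the transcript: the receptionist asks permission first, and a human caller (or a declining AI) keeps the conversation in ordinary speech.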
The idea of communicating through tones has been around for quite some time, though it hasn't been implemented by AI in this way before.
“Dial up modems used similar algorithms to transmit information via sound since 80s, and a bunch of protocols were around since then,” Starkov continued. “We used GGWave as the most convenient and stable solution we could find in a timeframe of a hackathon.”
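The basic idea behind such sound-based protocols can be shown with a toy example. The sketch below is NOT the real GGWave protocol or its API; it is a minimal illustration of the general technique, mapping each 4-bit chunk of a message to one of 16 tone frequencies (the frequency values here are made up). Real protocols add synchronization markers and error correction on top.

```python
# Toy data-over-sound encoding, loosely in the spirit of modem/GGWave-style
# schemes (hypothetical constants, not the real protocol).

BASE_FREQ = 1875.0   # assumed starting frequency in Hz
FREQ_STEP = 46.875   # assumed spacing between adjacent tones

def encode_to_tones(message: str) -> list:
    """Map each 4-bit nibble of the UTF-8 message to a tone frequency."""
    tones = []
    for byte in message.encode("utf-8"):
        for nibble in (byte >> 4, byte & 0x0F):
            tones.append(BASE_FREQ + nibble * FREQ_STEP)
    return tones

def decode_from_tones(tones: list) -> str:
    """Invert the mapping: recover nibbles, then pair them back into bytes."""
    nibbles = [round((f - BASE_FREQ) / FREQ_STEP) for f in tones]
    data = bytes((hi << 4) | lo for hi, lo in zip(nibbles[::2], nibbles[1::2]))
    return data.decode("utf-8")
```

Because each tone carries raw data directly, no speech synthesis or recognition is needed at either end, which is exactly the efficiency argument the team makes.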
According to the team, the real advantage of switching to this mode is that neither AI needs to interpret or generate human speech, making the exchange far less reliant on GPU compute.
While the project won a prize at the hackathon and makes for a cool demonstration, not everybody is a fan. The main concern raised is that perhaps we shouldn't let AIs communicate in a language we can't instantly understand. And we have enough of that already.
Tags: artificial intelligence, AI, chatbots, large language models, GPU
© 2025 IFLScience. All Rights Reserved.