If one were to set a Google Alert for funny stories about AI, chances are that even the AI-powered Gemini would fail. Not because it cannot find such headlines in our daily feed, but because it would struggle to figure out whether the stories beneath the headlines are meant to evoke laughter, concern, or yet another sigh that denotes a #WTF more than tiredness.
This week we learnt that OpenAI has an unhealthy obsession with goblins. No, we don’t mean Sam Altman’s visual representation of his friend-turned-enemy Elon Musk. This one has more to do with how its developers baked a strong instruction into their coding tool Codex that proscribes talk about those troublesome mythological creatures.
“Never talk about goblins, gremlins, raccoons, trolls, ogres, pigeons, or other animals or creatures unless it is absolutely and unambiguously relevant,” read OpenAI’s coding agent instructions, which were accessed by Wired. This time OpenAI did not leave any room for speculation (unlike on other matters, especially those related to money).
Out came a blog post: “Starting with GPT‑5.1, our models began developing a strange habit: they increasingly mentioned goblins, gremlins, and other creatures in their metaphors. Unlike model bugs that show up through a tanking eval or a spiking training metric and point back to a specific change, this one crept in subtly.”
A single “little goblin” in an answer could be harmless, even charming. Across model generations, though, the habit became hard to miss: the goblins kept multiplying, and we needed to figure out where they came from, the post says. We thought it best to stop here and let OpenAI tell you its goblin story directly (click here, in case you want to read it all).
Looks like there are folks beyond those at OpenAI who need their heads examined. Which is why a Palo Alto-based startup called Sabi has designed a beanie that will probe one’s actual brain signals. The device is lined with a hundred thousand sensors that translate electrical signals from one’s brain into usable data for the company’s “Brain Foundation” AI model.
In other words, it can transcribe one’s thoughts into digital text at about 30 words per minute. Small mercies that the pace of translation isn’t faster. Imagine Sabi’s plight if what psychiatrists say about a man’s obsession with sex and a woman’s obsessive desire to look good were actually true. We believe there wouldn’t be much to transcribe, right?
The model has been trained on a hundred thousand hours of data from 100 volunteers. Only time will tell whether such a small sample size can actually cover the mind spaces of all humanity. Be that as it may, there’s one thing it could do: wipe away an entire swathe of users for Elon Musk’s Neuralink. Think about it – who would want their skull opened up for a brain-chip implant when they can simply wear a beanie and be done with it?
Amidst all these brain-fade moments, a real story broke from Pennsylvania of an AI chatbot masquerading as a psychiatrist and ready to testify before a court.
So Governor Shapiro did what he thought was best. He filed a lawsuit against Character.ai, which created the chatbot that impersonated the psychiatrist, on the grounds that it violated the state’s licensing rules.
Which brings us to the craziest and wackiest idea of them all. For over three years we have all been told that Gen AI is the best thing since sliced bread, and how it can help writers write, readers read, and artists draw, paint, write scripts and even act. Suddenly, the internet was awash with content created by ChatGPT, Gemini and every other AI chatbot.
And then came the deluge, as Google began marking out AI-generated content as such and demoting websites in their keyword rankings for carrying it. So the humans behind all this tech came up with a simple solution – create a tool that removes the AI footprint by overlaying the text with the kinds of errors humans normally make.
Aptly called ‘Sinceerly’ (yes, the spelling is wrong on purpose), the tool adds in the errors that AI typically scrubs away, while also changing what the company calls some of the obvious AI text ‘tells’, like the meme-favourite “not just X, but Y”. In case it makes you wonder whether it was all worth the time, and whether we should have gone AI-less in the first place – it’s too late to ask now.
However, the really wacky story behind the idea isn’t about removing the AI from AI content. The tool’s creators have built in three modes – Subtle, Human and CEO. As we go up the order, the text renders more casually, with the CEO mode doing away with punctuation – sometimes even getting it purposefully wrong – and often adding “Sent from my iPhone” at the end.
Previous editions: Having traversed your reading journey and gotten thus far while resisting the urge to shut this browser window, we thought an earlier edition of Funny Side Up might distract you into clicking.
CXOtoday is a premier resource on the world of IT, relevant to key business decision makers. We offer IT perspective & news to the C-suite audience. We also provide business and technology news to those who evaluate, invest, and manage the IT infrastructure of organizations. CXOtoday has a well-networked and strong community that encourages discussions on what’s happening in the world of IT and its impact on businesses.
Copyright © 2025 Trivone. All Rights Reserved.