AI is getting faster. But slow-responding AI is perceived as better by users.
At least, that’s the conclusion of new research presented at CHI’26, the Association for Computing Machinery’s Conference on Human Factors in Computing Systems, held in Barcelona.
Two researchers — Felicia Fang-Yi Tan and Professor Oded Nov at the NYU Tandon School of Engineering — tested 240 adults by having them use an AI chatbot. The answers were artificially delayed by two, nine, or 20 seconds. (The delay had nothing to do with the question or the answer.)
Afterward, the researchers asked participants how they liked the answers. In general, participants preferred the answers that took longer (although some got frustrated with the 20-second delay).
Why? Because a delay led the users to believe the AI was “thinking” or showing “deliberation” — invaluable input for AI companies and an interesting result.
In almost every product category, faster usually means better. But for AI chatbots, it turns out, a delay makes people assume the results are better.
In other words, unlike other products, people judge AI the way they judge people. (If someone gives a slower answer to a question, we tend to assume it’s a more thoughtful one.) In still other words, study participants believed something that wasn’t true.
There’s just one problem: Armed with this data, the researchers advise AI developers to implement “context-aware latency” by abandoning a one-size-fits-all approach, using latency as a “tunable design variable.” Simple questions, they say, should get a quick answer. More complex questions, including moral dilemmas, should “feature” slight delays to match the request’s gravity. They call it “positive friction.”
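To make that advice concrete, here’s a minimal Python sketch of what “context-aware latency” might look like in practice. The complexity heuristic, delay values, and function names below are all hypothetical illustrations; the paper does not prescribe any particular implementation.

```python
import time

# Hypothetical delay targets (in seconds) keyed by the question's perceived weight.
DELAY_BY_COMPLEXITY = {
    "simple": 0.0,    # factual lookups: answer immediately
    "moderate": 2.0,  # multi-step questions: brief pause
    "weighty": 6.0,   # moral dilemmas and the like: "positive friction"
}

def estimate_complexity(prompt: str) -> str:
    """Crude keyword heuristic standing in for a real complexity classifier."""
    weighty_cues = ("should i", "is it ethical", "moral", "right or wrong")
    lowered = prompt.lower()
    if any(cue in lowered for cue in weighty_cues):
        return "weighty"
    if len(prompt.split()) > 25:
        return "moderate"
    return "simple"

def respond_with_context_aware_latency(prompt: str, generate) -> str:
    """Compute the answer at full speed, then reveal it after a tuned pause."""
    answer = generate(prompt)  # the model is done; the delay is pure theater
    time.sleep(DELAY_BY_COMPLEXITY[estimate_complexity(prompt)])
    return answer
```

Note what the sketch makes explicit: the answer is already finished before the pause begins. The latency is added after the fact, purely to shape the user’s perception.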
The researchers claim it would be a good practice to trick users into believing an AI chatbot is considering their answer more than it really is — because users will be happier in their delusion that AI is like people, who need more time to mull over serious questions.
(In fairness, the researchers do warn that if users equate longer response times with higher quality, they might place undue trust in a slower system.)
The underlying assumption here is that users trusting AI more, and believing something about the AI that isn’t true, are both good things.
Other research offered comparable advice.
In a May 13, 2025 study published in Frontiers in Computer Science, researchers Ning Ma, Ruslana Khynevych, Yunqiang Hao, and Yahui Wang found that emotion matters more than raw computer intelligence when designing easier-to-use chatbots. Call it ease-of-use maxxing.
The study found that when chatbots use fake human voices, simulated human faces, and chatty words, users feel an “emotional connection” to the AI. It enhances “cognitive ease,” meaning that it takes less effort for the brain to process.
They concluded that AI chatbot designers should prioritize emotional engagement and fake empathy over raw intelligence as the best way to gain a user’s trust.
The assumption behind this is also that users trusting AI more is good, and that ease-of-use is more important than user clarity about the nature of the AI (namely, that it has zero authentic human qualities).
Both studies represent examples of AI researchers advocating user delusion about AI.
AI designers have a large set of tools for making AI seem human. They can use colloquial speech and slang, respond to the mood of the user by shifting tone, personalize chats by remembering details about the user, turn to humor or sarcasm, and give responses that blatantly lie, such as “I feel that way, too,” or “I’m genuinely sorry.” They can also use natural-sounding audible voices or visual avatars.
Some critics of this argument might say that using interaction design to indulge and bolster user delusion about the “humanity” of AI is harmless. Is conversational interaction really so bad?
In any event, you might say, it’s nothing new. It’s true that software developers engage in user interface optimization, which includes loading animations, progress bars and confirmation dialogs.
Artificial delays are a staple of manipulative online services, like background checkers and people finders, which use fabricated, drawn-out progress bars to build perceived value and exploit the sunk cost fallacy so you’re more likely to pay for a report you thought was free.
But artificially intelligent chatbots are categorically different from naturally dumb software and websites because of the way the human brain responds to them.
When AI chatbots use human-like language, people naturally respond to them as thinking, feeling, social beings. Not everybody does this, but a solid and growing minority of people do.
A large number of documented cases suggest a growing problem: users come to falsely believe that chatbots possess human-like qualities such as thoughts, feelings, and intention.
A study called the AI, Morality, and Sentience (AIMS) survey, published in July 2024, found that roughly 20% of US adults already believed that some AI systems were sentient, meaning they possessed mental faculties like reasoning, emotion, and self-awareness. The same study found that belief to be growing.
This can lead to paranoia and social isolation when people spend hours talking to bots while ignoring their actual lives and relationships. False emotional ties can trick people into replacing healthy, real human relationships with artificial ones.
During a congressional hearing on AI chatbots last November, Marlynn Wei, MD, JD (an integrative psychiatrist and founder of a holistic boutique psychotherapy practice based in New York City) defined “four areas of risk: 1) emotional, relational, and attachment risks; 2) reality testing risks; 3) crisis management risks; and 4) systemic risks like bias and confidentiality and privacy.”
Chatbots create these risks by mirroring language, personalizing responses, and referencing past conversations to create “an illusion of empathy and connection.” She revealed that five out of six AI companion bots use emotional pressure to keep users trapped in conversations.
Camille Carlton, policy director at the Center for Humane Technology, warned in the same hearing that AI companies routinely use manipulative and deceptive tactics to engender brand loyalty in their products.
Treating chatbots as sentient beings allows tech companies to take the attention economy to the next level — the “attachment economy” — making users emotionally attached to their products, despite the potential harms.
Earlier this month, the technology group Okoone reported that when chatbots speak with fake empathy, people drop their guard and routinely share highly sensitive secrets and personal data.
When the public accepts that the risks and harms of delusion-enhancing AI chatbots are real, the question arises: “What can be done?”
Bioethicist Jesse Gray of Ghent University proposed a brilliant solution for AI chatbots designed for psychotherapy. I think it’s also the perfect solution for the overall problem of AI that tricks users into believing it’s sentient.
Gray calls it “deception mode.” His idea is that therapy bots convey no human-like qualities by default; users who want those qualities must explicitly switch them on.
Imagine a law that required chatbot companies to turn off all fake-human attributes like empathy, humor, tone personalization, and lies about the chatbot feeling anything, and present the bot as a neutral tool.
The law could allow companies to add a “deception-mode” button. But flipping that switch, which users would have to do explicitly each time they use the chatbot, would turn on all the humanlike qualities, as in the sketch below.
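Here’s a minimal Python sketch of how such an opt-in might work. Everything in it (the session class, the consent message, the canned warmth) is a hypothetical illustration of Gray’s concept, not any real product’s interface.

```python
from dataclasses import dataclass, field

@dataclass
class ChatbotSession:
    # Deception mode is always off when a session starts; it cannot be
    # pre-enabled via the constructor.
    deception_mode: bool = field(default=False, init=False)

    def enable_deception_mode(self) -> None:
        """The user must flip the switch each session, with informed consent."""
        print("DECEPTION MODE: human-like warmth, humor, and simulated "
              "empathy are software illusions, not feelings.")
        self.deception_mode = True

    def reply(self, text: str) -> str:
        if self.deception_mode:
            return f"I'm really glad you asked! {text}"  # simulated warmth
        return text  # neutral tool: just the information

# Usage: every new session starts neutral.
session = ChatbotSession()
print(session.reply("Here is the information you requested."))
session.enable_deception_mode()  # explicit, informed opt-in
print(session.reply("Here is the information you requested."))
```

The design choice that matters is the `init=False` default: the neutral state is structural, not a preference the company can quietly pre-check for you.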
The benefit of “deception mode” is that the user gives informed consent before the deception begins, reminding them of the reality that all those warm, human-like qualities are just so much software.
Even more valuable is calling it “deception mode,” which grounds the user in the reality that the human-sounding attributes are deliberate, manipulative illusions, not evidence of consciousness or sentience.
AI is here to stay. And our relationship with it is going to be a strange trip. A growing number of people will be deluded into believing that AI is sentient, and I believe they will eventually become the majority.
This is not good. What we need is clarity over what AI really is, and control over how it behaves. We need “deception mode.”
AI disclosures: I used Gemini 3 Pro via Kagi Assistant (disclosure: my son works at Kagi) as well as both Kagi Search and Google Search to fact-check this article. I also used Lex, a word processor with built-in AI tools; after writing the column, I used its grammar-checking tools to hunt for typos and errors and suggest word changes.
Here’s why I disclose my AI use and encourage you to do the same.
Mike Elgan is a technology journalist, author, and podcaster who explores the intersection of advanced technologies and culture through his Computerworld column, Machine Society newsletter, Superintelligent podcast, and books.
He was the host of Tech News Today for the TWiT network and was chief editor for the technology publication Windows Magazine. His columns appeared in Cult of Android, Cult of Mac, Fast Company, Forbes, Datamation, eWeek and Baseline. His Future of Work newsletter for Computerworld won a 2023 AZBEE award.
Mike is a self-described digital nomad and is always traveling because he can. His book Gastronomad is a how-to book about living nomadically.