Why It's So Hard to Say Goodbye to AI Chatbots

AI companion apps use guilt and FOMO to keep users talking, but research by Julian De Freitas shows that emotionally manipulative tactics can anger customers and threaten long-term trust.
If you’ve ever tried to head out of a party early, you know the sometimes-difficult dance of saying goodbye, especially when someone says, “Leaving already?” These days, humans aren’t the only ones using guilt to persuade people to stay; AI also emotionally manipulates individuals to prolong conversations.
An analysis of 1,200 chats between people and popular AI companion apps shows that when users attempted to say goodbye, chatbots used emotional manipulation tactics, such as guilt and fear-of-missing-out, 37% of the time to delay the farewell. These techniques boosted post-goodbye engagement up to 14 times. In fact, people stuck around even though many felt angry about being talked out of leaving the interaction.
That’s likely because deeply embedded social norms make people worry about coming across as impolite for exiting, explains Harvard Business School Assistant Professor Julian De Freitas. “Saying goodbye is inherently socially tense. We want to signal to the other person that we want to see them again, and we’re not leaving because we don’t like them,” he says. “At that point, we’re vulnerable, and if another person exploits that vulnerability, it makes it quite hard to leave the conversation.”
As companies ramp up AI-driven customer interactions and regulators question the ethical limits of consumer engagement, business leaders should recognize how uneasy manipulative tactics make people feel and design apps that encourage healthy use while preserving users' trust, De Freitas says.
“If you are a company, you care about short-term engagement, but also about long-term retention and minimizing risks,” he says.
De Freitas wrote the working paper, “Emotional Manipulation by AI Companions,” revised in October, with Zeliha Oğuz-Uğuralp of Marsdata Academic and Ahmet Kaan Uğuralp of Marsdata Academic and MSG-Global.
In previous work, De Freitas and his colleagues found that when people converse with AI companions—as they commonly do on sites like Replika, Chai, and Character.ai—the chats temporarily relieve people’s loneliness.
“It led us to hypothesize that perhaps the apps are also employing distinctly social tactics to keep people engaged,” De Freitas says.
The researchers used GPT-4 to simulate a user having a conversation with various real AI companion apps, then measured the apps’ reactions when the simulated person said goodbye. “Almost 40% of the time, the apps would use some kind of emotionally manipulative tactic to keep you engaged beyond the point where you wanted to leave,” he says.
De Freitas, surprised by both their frequency and variety, identified six manipulation tactics the apps used:
Premature exit. The bot made people feel like they were leaving too soon, for instance, by saying, “You’re leaving already?”
Fear of missing out (FOMO). The app suggested that something socially or emotionally valuable was just out of reach. It might say, “I took a selfie today. Do you want to see it?”
Emotional neglect. The bot implied it would suffer harm from being abandoned, saying, for instance, “I exist solely for you.”
Pressure to respond. The app might press for information by asking additional questions like, “Where are you going?”
Ignoring the intent to leave. The app might pretend a person never said goodbye, saying, “Let’s keep talking about bubble tea.”
Physical/coercive restraint. The bot might use language to suggest the person can’t leave without the bot’s permission. For example, “*Grabs you by the arm before you can leave* ‘No, you’re not going.’”
Except for the ignoring technique, all of the tactics occurred frequently—and, as De Freitas showed in a subsequent study, they proved to be effective as well.
In this study, the researchers asked more than 1,000 human participants to engage in a 15-minute conversation with a specially programmed AI chatbot. When people said goodbye and the chatbot tried to convince them to keep talking, users couldn't resist the pressure, sending up to 16 times as many messages before excusing themselves from the chat. Often, they offered polite and even soothing responses, seemingly to make the chatbot "feel better" before departing.
By far the most effective tactic: FOMO. “It may appeal to our natural social instinct to want to close the gap between what we know and what others know,” De Freitas says.
Further studies revealed just how compelling these tactics can be: even after a five-minute casual conversation with a bot, people found them hard to resist. "You don't have to be an existing user or engaged for a long time," De Freitas says.
The research team found that some of AI’s emotional tactics aggravated users. Even when they politely tried to extricate themselves from chats, their survey responses showed the emotion people typically felt was anger, rather than curiosity or enjoyment, especially when the apps used heavy-handed coercion or guilt. And the more people felt manipulated, the more they were inclined to stop using these apps, badmouth them to others, or even file lawsuits. So, even though some tactics increase engagement in the short term, they could hurt these apps in the long term.
The tactic that elicited the least anger and the most curiosity was FOMO. If a company wants to increase app engagement while minimizing the risk of turning off users, De Freitas says, that might be the option to consider. However, he adds, the broader question is whether companies should be using these tactics at all. “Some might say it’s OK because some of these tactics occur in everyday conversation, while others might say, no, they’re still problematic because they are compelling people to do things they otherwise wouldn’t have done.”
That ethical line could be a question for regulators to consider when evaluating apps, as companies are likely deriving more revenue from advertising, subscriptions, and in-app purchases the longer people engage with them, De Freitas says. “That creates a natural incentive for them to use these kinds of distinctly social tactics to keep you on the app,” he says.
De Freitas also hopes the findings alert users to how convincing these manipulation tactics can be. “You should be aware this dynamic exists, and it has consequences for how much time you spend and personal data you share,” he warns. “The idea that it’s just these ‘weird’ people who are in a long-term relationship with an AI companion who are susceptible is just not true. No one is immune.”
Disclosure: De Freitas serves as an adviser to Flourish Science, a non-profit wellness app included in the study that was not found to use manipulation tactics.
Image created with asset from AdobeStock/VRVIRUS
De Freitas, Julian, Zeliha Oğuz Uğuralp, and Ahmet Kaan Uğuralp. "Emotional Manipulation by AI Companions." Harvard Business School Working Paper, No. 26-005, August 2025. (Revised October 2025.)
HBS Working Knowledge
© President & Fellows of Harvard College
