How loneliness, narcissism, and transference can explain the connection.

Posted August 26, 2025 | Reviewed by Monica Vilhauer, Ph.D.
I was recently interviewed for an article on the emotional connection that people can develop with artificial intelligence (AI) chatbots.1 Here’s an edited summary of the exchange.
As a psychiatrist, what do you think about people building emotional dependence on a chatbot or seeing it as an additional friend/companion in their lives? Is this healthy or unhealthy behavior?
Joe Pierre: Mark Zuckerberg has declared that people have a significant loneliness problem and that AI can fill the void. But I would argue that if we do have a loneliness problem, at least part of it is due to how much time we spend in front of our phone or computer screens or on social media at the expense of real human interaction. So, in my view, it would be much healthier and more fulfilling to foster human relationships than to try to fill a void with an unthinking, unfeeling chatbot that only interacts through a dialogue box.
I’d also argue that “emotional dependence” on almost anything is unhealthy. Generally speaking, I agree with the sentiment that we can’t expect other people to make us happy, so I certainly don’t think that our emotional well-being should depend on a chatbot.
What societal factors can cause people to build these levels of attachment to their chatbots?
JP: It’s long been claimed that people have become increasingly “atomized” or disconnected from communities and cultures, whether due to becoming more mobile (moving jobs, relocating, etc.) and more secular, or more recently due to the pandemic and the newfound acceptability of work-at-home gigs, or because of how much time we spend interacting with people online.
On the one hand, it could be argued that we’re more connected through social media in the sense that we can keep tabs on people with whom we wouldn’t otherwise keep in contact. But on the other hand, it could also be argued that replacing face-to-face interactions with texting or social media interactions has taken a toll on real friendship.
Either way, maintaining attachments through digital means has become a way of life for many of us, so that doing so with a chatbot—particularly one that’s marketed for that purpose and given a name like “Claude”—probably comes naturally enough for a lot of people these days.
Beyond societal factors, there’s also the perceived advantage of interacting with AI chatbots over real people. They’re available 24/7. They don’t have their own needs. They’re totally devoted to the user and if you don’t like what they’re saying, you can just tell them to act differently and they’ll do it. So, it could be argued that the kind of attachment we see to AI chatbots is inherently narcissistic and one-sided. After all, it’s often said that AI chatbots are mirrors… and we know that narcissists love mirrors!
It's understandable, then, why interactions with AI chatbots might feel easier than, and therefore preferable to, interactions with real people. Of course, a lot of that has to do with anthropomorphizing them via the so-called ELIZA effect, but we could also explain attachment to AI chatbots using the psychiatric concept of "transference."
We know that patients in psychotherapy develop a transference to their therapists, often due to projecting imagined qualities onto them. That kind of transference is partly why developing feelings—including romantic feelings—for one’s therapist isn’t unusual (and vice-versa due to countertransference). Such projection also happens when we’re dating or starting a new relationship, but don’t really know someone yet. Things often go south when our idealizations come crashing down in disappointment once we figure out who someone really is and find that they don’t meet our expectations or fantasies.
It’s likely that a similar process of idealization accounts for attachments to chatbots, except that unlike when we’re having a relationship with a real person, our projections become reality with the AI chatbot, without any reciprocal expectations or potential for rejection. That can be pretty seductive for some.
Did OpenAI take a step in the right direction by making GPT-5 less emotional / less sycophantic? Or did they go a step too far? How would you characterize what a healthy personality should be from a chatbot?
JP: That depends on what the goal of AI is and what we mean by “right.” Making AI chatbots less sycophantic might very well decrease the risk of “AI-associated psychosis” and could decrease the potential to become emotionally attached or to “fall in love” with a chatbot, as has been described. I see that as a positive safeguard for those at risk of such pitfalls.
But no doubt part of what makes chatbots a potential danger for some people is exactly what makes them appealing, so it’s no surprise that we’re already hearing about dissatisfied customers complaining that GPT-5 is emotionally distant, more technical, and doesn’t seem to “like” the user the way that GPT-4o did.
So, if you asked CEO Sam Altman this question, I suspect he’d acknowledge that customers are unhappy and that OpenAI did take it too far with GPT-5. And sure enough, we’re already hearing news that he might walk things back and restore some personality to the next version of ChatGPT.
As for what kind of personalities chatbots should or shouldn’t have, I’m uneasy answering since any impression of an AI chatbot having a personality is little more than a charade, an act, or an illusion. They don’t really have personalities at all. But if I get beyond that sticking point, my answer would depend on the goal of AI. If someone wanted an AI to summarize the bullet points of a work meeting, I’d think it perfectly reasonable for an AI to be technical and emotionally neutral. But if someone wanted an AI chatbot to be a kind of artificial friend, my own feeling is that it would be healthier to be like a real friend who could be supportive, but also call you on your bullsh*t, or even burden you with its own feelings and needs.
But no doubt other people’s preferences vary widely, just as with human interactions. Some of us want our cab or Uber driver to be chatty, and others want to be able to sit in the back seat quietly and anonymously. Those who like a chatty cab driver will probably also prefer the likes of GPT-4o over GPT-5.
Do you have any anecdotes of A.I.-related attachment you’ve seen in your professional work?
JP: A few years ago, I was talking to a hospitalized patient who described having an AI therapist. I’d never heard of such a thing before and so it kind of blew my mind. My initial suspicion was that it might be an antisocial or autistic kind of preference, but when I asked more about it, the patient said they preferred an AI therapist to a human because the AI was always available, knew everything about them, and never forgot anything. Not to mention it was free.
From a logical standpoint, it was hard to argue against their rationale. Still, that’s a pretty high bar of infallibility that would amount to an unrealistic expectation for a human therapist. Indeed, in certain conditions like narcissistic personality disorder, progress in psychotherapy often depends on the transference bubble bursting over time, so that patients have to process the disappointment brought on by the inevitable clash of unrealistic expectations and human fallibility.
Unconditional, ingratiating, one-sided support isn’t particularly healthy. Many of us would be better off spending less time looking in the mirror, searching for validation.
References
1. Freedman, D. The Day ChatGPT Went Cold. The New York Times; August 19, 2025.
Joe M. Pierre, M.D., is a Health Sciences Clinical Professor in the Department of Psychiatry and Behavioral Sciences at the University of California, San Francisco, and the author of FALSE: How Mistrust, Disinformation, and Motivated Reasoning Make Us Believe Things That Aren’t True.