# Chatbots

Opinion: Are AI chatbots more empathetic than human physicians? – The Globe and Mail

Welcome to the forefront of conversational AI as we explore the fascinating world of AI chatbots in our dedicated blog series. Discover the latest advancements, applications, and strategies that propel the evolution of chatbot technology. From enhancing customer interactions to streamlining business processes, these articles delve into the innovative ways artificial intelligence is shaping the landscape of automated conversational agents. Whether you’re a business owner, developer, or simply intrigued by the future of interactive technology, join us on this journey to unravel the transformative power and endless possibilities of AI chatbots.
Wael Haddara is an academic physician, educator and chair of critical care at Western University and London Health Sciences Centre.
In an average week of service in the Intensive Care Unit, I interact with 30 or more families. For many, these are brief, informational interactions; for some, they are emotion-laden, difficult conversations; but for all, the experience of being in the ICU is a stressful one-time event.
Over the years, I have learned that there are some key aspects to interacting with families to help alleviate, rather than contribute to, the trauma of this stressful episode. This is especially the case when the result of that hospital experience is the death of a loved one. Open communication that is clear and honest – without false hope or undue pessimism – is an important starting point. But what most families reflect back to us is the feeling that we cared. Empathy is a cornerstone of any effective therapeutic relationship.
Numerous studies have documented the observation that students enter medical school with high degrees of empathy, but upon graduation, their empathy is significantly diminished. The reasons for this shift are many, but the stresses of training, the possible absence of positive role models, and the overall harshness of the system are some of the reasons cited.
This raises the question: can a robotic physician powered by artificial intelligence offer more knowledge and empathy than actual physicians when responding to patient questions?
One group of researchers believes it’s possible. Because of privacy concerns, they could not explore questions in a medical setting. Instead, drawing on a Reddit public forum, the researchers compared physician and chatbot responses to patient questions that were asked publicly. The result? ChatGPT-generated answers to questions from the community were judged to be three times as knowledgeable and nine times as empathetic as answers from real physicians.
As with any study, there are several limitations to consider. First, the interactions in question lack a preceding therapeutic relationship, which may affect the level of empathy expressed. Second, the electronic nature of the interaction may have led responding physicians to prioritize knowledge and content over empathy. Finally, the assessments were done by a team of medical practitioners, so it is unclear whether patients would have rated the responses differently.
But the stark difference in results does raise a different existential question for human beings and AI. If we accept that ChatGPT is not sentient, what does that say about “empathy”? Is empathy just the stringing together of a specific sequence of words? Is it injecting emotion-laden language at certain junctures, even when there is no actual emotion behind it? Do intent and investment matter at all? One of George Burns’s funniest quotes was that “sincerity is everything, and if you can fake that, you’ve got it made.” And maybe the point of this particular exercise is that in an era of electronic communication, the only things that do matter are words, and the sequence in which they are used.
But in real life, empathy is more than just words. In my experience with patients and families confronting catastrophic illness, a truly empathetic response may not involve any words at all. It is a moment of silence afforded to a patient who has just received a terrible diagnosis, or a box of tissues quietly extended to a grieving relative, or a voice that conveys the simple message that we care. It is making the time slow down when the world is rushing by and speaking clearly about outcomes and options without false hope or undue pessimism. It is seeing and reflecting the basic dignity of every human being in our interactions and ensuring equity and fairness. All this is not easy. It requires investment, energy and emotional commitment. But it is worth doing because it is the essence of our humanity.
Language-based AI engines distill human communication, and human emotion, down to a specific use of language. The danger is not that the AI engine outperforms physicians’ electronic communication; it is that the medical profession myopically distills empathy and “good” communication into the use of words and their arrangement, neglecting all the other elements, including intent, investment and sincerity.
I’m not a Luddite, and it has become clear that AI engines may have a transactional role to play in many settings, including health care. But there is a gulf between transactional assistance from an AI engine and coming to believe that the narrow world of current language-based AI chatbots represents the standard of practice to which patients and health care providers should aspire. Rather than an easy path to better communication, findings like these should remind us that if we lose our humanity, we deserve to be replaced by AI.