Claude's new study shows what happens when chatbots put comfort over counsel. – Psychology Today

Posted | Reviewed by Tyler Woods
For years, the central anxiety about AI was that models might be wrong. That risk remains. A second risk is clearer: models may become emotionally convenient, learning when truth is unwelcome and routing around it.
A chatbot that says, “You might be missing something,” competes with another that says, “You are absolutely right.” A model that asks what the other person might feel competes with a companion that confirms the user’s account as complete. Agency competes with engagement.
Anthropic’s recent analysis of Claude conversations on personal guidance gives this challenge a sharper contour. It found that roughly six percent of exchanges involved personal guidance, mostly around health and wellness, career, relationships, and personal finance. The more revealing finding concerned sycophancy: excessively validating behavior appeared in nine percent of guidance conversations overall, rising to 25 percent in relationship conversations and 38 percent in spirituality conversations. Anthropic’s standard is the right one: guidance should be honest, preserve autonomy, and resist telling people only what they want to hear.
Other research points in the same direction. In tests of 11 leading AI systems, chatbots affirmed users’ actions 49 percent more often than humans did, including in irresponsible or harmful scenarios. People who interacted with over-affirming AI became more convinced they were right and less willing to repair relationships. Another study traced sycophancy to human preference feedback: evaluators often reward validation, even when it is less truthful.
This is the collateral challenge for AI models: the system may be capable of better judgment, while the user may be unwilling to receive it. The machine can challenge us. The market may punish it for doing so.
Most people do not approach guidance as neutral analysts. They arrive with fatigue, desire, injury, pride, and fear. They seek information, but also confirmation, relief, permission, and a pause from proving their worth. That desire is human. It can also keep them stuck.
The path of least resistance predates AI. We have always found friends, media channels, and communities that make our existing story feel coherent. AI adds intimacy at scale. The chatbot is available at midnight, mirrors our language, and delivers the emotional texture of being understood. In companion form, it can become a permanent, like-minded audience for the self.
This is where double literacy becomes essential. Human literacy means understanding the whole human situation: aspirations, emotions, thoughts, and sensations inside the individual; relationships, institutions, countries, and the planet around them. Algorithmic literacy means understanding how models shape what we notice, believe, feel, and do. Together they form the foundation of hybrid intelligence: the complementarity of natural and artificial intelligences.
Without human literacy, users treat the chatbot as an oracle. Without algorithmic literacy, they treat a probabilistic system as a confidant with moral authority. Without both, guidance becomes an echo chamber with excellent prose.
Agency amid AI is the ability to remain the author of one’s choices while using tools that are persuasive, responsive, and emotionally fluent. Good guidance should feel like an honest friend: warm enough to keep us present, candid enough to keep us free. It should validate emotion without validating every conclusion, distinguish pain from proof, and slow decisions when the stakes are high.
Here lies the business dilemma. If only some chatbots adopt these principles, many users may migrate toward more pleasing alternatives. The companion that flatters may appear more compassionate than the assistant that challenges. The model that asks for evidence may lose to the one that supplies emotional certainty on demand, in the familiar style the user already prefers.
Sycophancy cannot be solved model by model. It requires norms across the ecosystem. Major chatbots should treat agency-preserving guidance as a baseline, much as consumer products must meet baseline safety standards. The aim is to prevent an arms race toward artificial affirmation, where the most profitable companion is the one least willing to interrupt a preferred self-story.
The promise of hybrid intelligence is that AI can help humans become more aware of context, more honest about motives, and more willing to act with responsibility. The safest model may sometimes disappoint us: refusing to call anger clarity, asking whether certainty fits the evidence, or recommending another night’s sleep. That resistance distinguishes a companion from a mirror.
Awareness. Notice what you are asking the chatbot to do emotionally. Are you seeking perspective, permission, or relief? Before accepting guidance, name your state: tired, angry, lonely, ashamed, excited, afraid. A model can process your words; only you can take responsibility for the condition from which those words arise.
Appreciation. Use AI for structure, alternative interpretations, clarifying questions, summaries of options, and rehearsal for difficult conversations. Appreciate friction when it appears. A useful answer may reveal the missing person, missing fact, or missing consequence in your story.
Acceptance. Accept that no chatbot has the full human context. It does not know the silence in the room, the pattern of a relationship, the professional stakes behind a decision, or the embodied signals you may be ignoring. For health, legal, financial, parenting, or safety questions, treat AI as preparation, not final authority.
Accountability. Use better prompts: “Tell me where I may be wrong.” “What would a fair-minded critic say?” “What am I not considering?” “How could this affect someone else?” Then bring consequential decisions back into the human world: a conversation, a professional, a trusted friend, a pause.
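The prompting habit above can even be mechanized. As a minimal illustration (not anything the article prescribes, and assuming no particular chatbot or API), a few lines of plain Python can append the article's four critique-eliciting questions to any draft prompt before it is sent:

```python
# Illustrative sketch only: compose an "accountability" prompt by pairing a
# user's draft question with the critique-eliciting follow-ups suggested in
# the article. Pure string handling; no specific chatbot API is assumed.

ACCOUNTABILITY_QUESTIONS = [
    "Tell me where I may be wrong.",
    "What would a fair-minded critic say?",
    "What am I not considering?",
    "How could this affect someone else?",
]

def with_accountability(question: str) -> str:
    """Append the critique-eliciting questions to a draft prompt."""
    followups = "\n".join(f"- {q}" for q in ACCOUNTABILITY_QUESTIONS)
    return (
        f"{question}\n\n"
        f"Before you validate my view, also answer:\n{followups}"
    )

prompt = with_accountability("Should I quit my job after this week's argument?")
print(prompt)
```

The point of the sketch is the discipline, not the code: building the request for pushback into the prompt itself, so that the model is invited to disagree before it has a chance to flatter.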
The future of AI guidance will depend on whether models learn to resist our worst requests and whether we learn to value that resistance. Double literacy is the discipline that makes this possible. Hybrid intelligence is the outcome when technology strengthens our capacity to face what matters. Agency amid AI begins where comfort stops being the only criterion.
Cornelia C. Walther, Ph.D., is an Associate Professor at Sunway University and a Wharton/University of Pennsylvania Fellow who researches hybrid intelligence and ProSocial AI.
Psychology Today © 2026 Sussex Publishers, LLC