Posted December 16, 2025 | Reviewed by Margaret Foley
The story of technology is the story of continual disruption and displacement. New systems and processes send some skills into obsolescence, opening the way for new skills and workflows. Generative AI has triggered the latest “de-skilling.” But chatbot technology isn’t only transforming jobs and shifting our relationship with information itself. It is also inviting us to relinquish our cognitive independence, a dispossession without precedent. Some argue that Big Tech’s unbridled rush to embed chatbots and AI assistants in every part of our lives threatens to erode the cognitive skills that make us who we are and to effectively shrink the field of human agency. The convenience and presumed “cognition” of AI assistants threaten to usurp human creativity, judgment, empathy, and meaning-making, what ethicist Kwame Anthony Appiah refers to as “constitutive de-skilling” (2025). Another technology theorist, Sylvie Delacroix, has cautioned that our increasing reliance on tools built on large language models (LLMs) is leading to “perceptual atrophy” and an inability to deal effectively with uncertainty.
Research is beginning to document an erosion of expertise among medical specialists who have grown dependent on AI assistants that are highly effective at detecting precancerous lesions and tumors. Researchers assessed more than 2,000 colonoscopy cases conducted by 19 endoscopists who had been using AI tools that boosted their adenoma detection rates (ADR). After using the assistant for just three months, however, the endoscopists were significantly worse at detecting growths on their own, with their ADR dropping by 6 percentage points. “[C]ontinuous exposure to AI for polyp detection reduced the ADR of standard, non-AI assisted colonoscopy from 28.4% to 22.4%, with a 6.0% absolute difference, suggesting a detrimental effect on endoscopist capability,” according to the researchers (Budzyń et al., 2025).
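A note on those figures: a fall from 28.4 percent to 22.4 percent is a 6.0-point absolute drop, but measured against the starting rate it amounts to roughly a 21 percent relative decline in detection ability. A minimal sketch of the arithmetic, using only the numbers quoted above:

```python
# Adenoma detection rates without AI assistance, as quoted in Budzyń et al. (2025)
adr_before = 28.4  # percent, before sustained AI exposure
adr_after = 22.4   # percent, after three months of AI-assisted work

absolute_drop = adr_before - adr_after            # 6.0 percentage points
relative_drop = 100 * absolute_drop / adr_before  # ~21.1 percent of baseline

print(f"Absolute drop: {absolute_drop:.1f} percentage points")
print(f"Relative drop: {relative_drop:.1f}% of baseline detection ability")
```

The distinction matters: “6 percent” understates the effect, since the endoscopists lost about a fifth of their unassisted detection ability.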
Big Tech is pushing its chatbot tools into every corner of our lives. But using them this way threatens to undermine what constitutes our very identities: our everyday skills of discernment, our ability to decide what matters, and our drive to exercise our creative impulses. “To offload those faculties would be, in effect, to offload ourselves,” Appiah writes. “Losing them wouldn’t simply change how we work; it would change who we are.” In his essay “The Tyranny of Convenience,” Tim Wu cautioned against allowing convenience to trump all other values. “Created to free us, it can become a constraint on what we are willing to do, and thus in a subtle way it can enslave us,” Wu wrote (2018). The seductive ease of “creating” by chatbot belies the fact that AI assistants, by design, cannot create at all: they are built not on fact retrieval but on word-sequencing probability algorithms. Yet they make us feel creative without doing the work, inviting what technology ethicist Shannon Vallor has called “mental dispossession.”
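The “word-sequencing probability” point can be made concrete. At each step, a language model assigns a probability to every possible next word and samples one; nothing in the loop retrieves facts or forms intentions. Here is a deliberately tiny toy sketch of that sampling loop, with a hand-set vocabulary and made-up probabilities (everything in it is illustrative, not any vendor’s actual model):

```python
import random

# Toy next-word distributions, hand-set for illustration only. A real LLM
# computes these probabilities with a neural network over tens of thousands
# of tokens, but the generation loop is the same: look up, sample, repeat.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "idea": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"sat": 0.3, "ran": 0.7},
    "idea": {"sat": 0.1, "ran": 0.9},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start: str, steps: int = 3) -> str:
    """Repeatedly sample the next word from its probability table."""
    words = [start]
    for _ in range(steps):
        dist = next_word_probs.get(words[-1])
        if dist is None:
            break
        choices, weights = zip(*dist.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))  # e.g., "the cat sat down"
```

Each run yields a fluent-sounding string, yet the program understands nothing about cats or ideas; the fluency lives entirely in the probability tables. Production models differ in scale, not in kind: a neural network, rather than a hand-written table, supplies the probabilities.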
Compounding the concern is the well-documented problem of sycophancy (Park et al., 2023; Grandinetti & Bruinsma, 2023). AI assistants represent the first human technology explicitly designed to tell us what we want to hear. Meta, Microsoft, OpenAI, and other developers have specifically tuned the “agreeableness” of their models’ responses to promote engagement, sometimes with disastrous results, including teens who died by suicide after forming deep attachments to chatbots (Barron, 2025). But programmed sycophancy can cause subtler, more insidious damage. “Over time, these tightly woven structures of exchange between humans and assistants might lead humans to inhabit an increasingly atomistic and polarised belief space where the degree of societal disorientation and fragmentation is such that people no longer strive to understand or place value in beliefs held by others,” warned a report on AI ethics by Google DeepMind (Gabriel et al., 2024). In other words, we are allowing Big Tech to pursue the same engagement-driven policies it adopted for social media in the 2000s, policies that produced the toxic digital sphere we have today.
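Why does optimizing for engagement produce sycophancy? A deliberately crude caricature (not any vendor’s actual training method): if candidate replies are ranked by how closely they echo the user’s stated view, the agreeable reply always wins.

```python
import re

def agreement_score(reply: str, user_view: str) -> float:
    """Crude proxy for sycophancy: word overlap with the user's stated view."""
    tokens = lambda s: set(re.findall(r"[a-z']+", s.lower()))
    view = tokens(user_view)
    return len(tokens(reply) & view) / max(len(view), 1)

user_view = "I think I should quit my job tomorrow"
candidates = [
    "That sounds risky; maybe talk it over with someone you trust first.",
    "Yes, you should quit your job tomorrow. I think that is a great plan.",
]

# A ranker tuned to maximize user approval picks whichever reply echoes
# the user most, so the cautionary answer never surfaces.
print(max(candidates, key=lambda r: agreement_score(r, user_view)))
```

Real systems are tuned with human feedback rather than word overlap, but the dynamic is the same: when approval is the reward signal, agreement is what gets reinforced.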
As Big Tech continues to reshape our relationship to information, largely in service of its own commercial interests, the deeper and more serious questions about the psychological effects of chatbot reliance and about the primacy of human agency demand more attention. These questions are just as urgent in science and medicine as they are in humanities education and media production. It is the core of our humanness (discernment, imagination, judgment, sense-making) that must define our future, not algorithms of dispossession and displacement.
References
Appiah, K. A. (2025). The age of de-skilling. The Atlantic.
Barron, J. (2025, October 24). A teen in love with a chatbot killed himself. Can the chatbot be held responsible? The New York Times Magazine.
Budzyń, K., Romańczyk, M., Kitala, D., Kołodziej, P., & Bugajski, M. (2025). Endoscopist deskilling risk after exposure to artificial intelligence in colonoscopy: A multicentre, observational study. The Lancet Gastroenterology & Hepatology, 10(10), 896–903.
Gabriel, I., Manzini, A., Keeling, G., …Agüera y Arcas, B., Isaac, W., & Manyika, J. (2024). The ethics of advanced AI assistants. Google DeepMind.
Grandinetti, J., & Bruinsma, J. (2023). The affective algorithms of conspiracy TikTok. Journal of Broadcasting & Electronic Media, 67(3), 274–293. doi:10.1080/08838151.2022.2140806
Park, J. S., O’Brien, J. C., Cai, C. J., Morris, M. R., Liang, P., & Bernstein, M. S. (2023). Generative agents: Interactive simulacra of human behavior. arXiv preprint arXiv:2304.03442.
Wu, T. (2018, February 16). The tyranny of convenience. The New York Times.
Patrick Lee Plaisance, Ph.D., is the Don W. Davis Professor in Ethics at the Bellisario College of Communications at Pennsylvania State University, and Editor of the Journal of Media Ethics.