The humanization of artificial intelligence is everywhere, and it has become so normalized that it feels strange to question it.
Companies describe these tools the way one would describe a person, calling them study buddies, giving them friendly digital faces or even creating romantic AI partner chatbots. This is a deliberate marketing strategy by for-profit corporations aimed at creating a large paying user base.
In effect, these chatbots fuel the social isolation felt by Gen Z as exploitative companies attempt to capitalize on that void and exploit the vulnerable.
It is unusual for a new class of productivity tools to blatantly embrace anthropomorphic language. But in the case of AI chatbots, corporations have made the collective decision to humanize their chatbots because it fills a social need that Gen Z feels much more than other generations — one that only grew during the COVID-19 pandemic.
Gen Z is much lonelier, reportedly having less sex, partying less and drinking less socially. A Pew Research study found that older members of Gen Z are even trusting each other less than previous generations.
The rising loneliness rates are understood to be at least partially caused by social media. Social media, another broadly used social technology that eventually caused harm, was premised on connecting friends from across the world in a way that was never possible before.
And even though social media moguls kept claiming there was no causal link between social media and depression rates, the evidence has shown otherwise. Despite Meta CEO Mark Zuckerberg disputing this connection last month in a landmark social media addiction trial, a 2021 Wall Street Journal report showed that his company knew for years that Instagram was toxic for the mental health of teens.
It could be argued that having access to friendly chatbots that talk like humans is benign, or that it can be helpful for those who have no one else to turn to. After all, the average Gen Z person now claims to have only two close friends, down from five just a generation ago.
This line of thinking was once one of the justifications for social media. On the surface, it feels reasonable to say that for the loneliest generation, being able to see into their friends' lives in real time, even from far away, is a good solution.
But clearly, the statistics and self-reports tell a different story.
For those who ultimately turn to chatbots to feel seen and understood, it can be difficult to do something that was second nature in previous generations — go outside and take a chance on people. The comfort and safety of an unyielding, agreeable conversation partner — even if it cannot think, act or experience life at all — can easily become the better option, compared to the messy, often uncomfortable reality of socializing with living people.
Moreover, considering that some people are convinced their AI friend is conscious — see r/MyBoyfriendIsAI, for example — the ethics of the anthropomorphization of AI bots gets even messier.
Surely, it is not right for for-profit companies to program and control the AI partners of people, much less to create a product specifically marketed in this way. It might be ethically ambiguous if doing so did not cause a negative effect on society — but it doesn’t seem like a positive thing for people to befriend AIs, let alone fall in love with them. And it certainly doesn’t seem right for private corporations to actively support this, given their dependence on shareholders and dedication to turning profits.
So, who actually benefits?
If we assume that having human friends is more valuable than having AI friends, then it’s safe to say that the people who have no choice but to rely on AI for their social gratification are not benefiting. In any case, they would probably benefit more from a revival of community centers than from better-programmed AI chatbot friends.
At worst, these people are being exploited for their vulnerabilities by companies that saw an opportunity for profit. The tactics AIs are known to use, such as language-style mirroring and flattery, are not a necessary part of a chatbot whose purpose is to serve as an information source or a productivity tool.
This suggests the marketing strategy is motivated by attracting a large user base, and thus profits, rather than by building an effective productivity tool. The chatbot is designed not only to fulfill its stated purpose, but also to substitute, at least partially, for genuine socialization.