Interacting with AI: caution or connection? – The Miscellany News

Vassar College’s student newspaper of record since 1866

Over winter break, I wanted to bake my own birthday cake with my family. When I suggested this idea, my sister began asking ChatGPT for a cake recipe. I was apprehensive at first—surely there was no reason to immediately turn to artificial intelligence (AI). But within seconds, we had a clear and straightforward chantilly cake recipe: ingredients, instructions, estimated duration—all of it listed. We went out to buy the ingredients and returned home to find that we had missed some. No problem, though—my sister just told ChatGPT that we forgot the buttermilk, and it instructed us to use a mix of milk and yogurt. Oops, we added the sugar into the dry ingredients instead of the wet. That is fine! Just make these adjustments. After multiple tweaks and changes to fit our convenience, we had our cake—perfectly accurate in the eyes of ChatGPT, but not quite the dessert we signed up for. 
You have likely interacted with artificial intelligence chatbots at some point, given their increasing prominence in daily life. From the well-known ChatGPT to the little virtual assistants on websites, these chatbots seem to be everywhere. They have become so prominent that many people now turn to them for advice, not only on writing or practice problems, but on life. A friend once told me they asked ChatGPT if their crush liked them. The resulting response undeniably flattered my friend, and we laughed it off. These interactions may seem harmless, and in fact, they may even come off as positive. From time to time, being praised or reassured for your decisions feels nice. Especially considering the chatbot's friendly tone, it can almost seem as if you are texting a friend or a person you can confide in. 
For people who use AI as an outlet to share their emotions, chatbots can offer reassurance and company. In fact, chatbot users report feeling comparatively less lonely, as noted in the research paper "AI Companions Reduce Loneliness" by De Freitas et al. These AI chats will never be angry or annoyed with you, or with anything else. This reduces the risk of users feeling that they are not a priority or that they may upset someone by interacting with them. In a study performed by Internet Matters, nearly 15 percent of teenagers said they would rather have conversations with chatbots than with real people. The same study also found that these users do not often question the conversations or advice they receive from the chats, accepting the chatbots' words as true. But we must take caution. 
ChatGPT is programmed to be agreeable. Therefore, these interactions are merely a reflection of underlying tones in the user’s writing and conversation, not necessarily an unbiased truth. If you tell the chatbot that you are really nervous about an exam that you barely studied for, it will tell you that everything will be okay. The chatbot may reassure you by emphasizing that you at least studied a little, and that is what matters. However, a person will often tell you the truth by letting you know when something you are doing is wrong and giving advice that takes into account human emotions and interactions, a capability the chatbots lack. 
Furthermore, if a person does not know the answer to your question or cannot give you advice they deem good enough, they will tell you that they simply do not know. AI chats will never tell you that they do not know an answer—they will always give you one. But as we know, the responses AI gives us are not always accurate. Research done by Dr. Giuseppe Giancarre and Dr. Andrea Taloni shows that data manipulation via AI is increasing in academia, making studies seem legitimate when they are not; this growing issue is rightfully a cause for concern. ChatGPT has been known to make up sources and use fake information. If AI is not trusted for source-backed work, why would people accept life advice from it? 
This becomes much more serious when people use chatbots as substitute therapists. Even though ChatGPT can be framed as a helpful tool, it is crucial to note that the program is not a licensed professional. Because of their inclination to generate false data, chatbots risk user safety by sharing misinformation and reinforcing harmful habits. This can slow treatment for an individual’s struggles or even completely nullify it if the person fails to reach a trained professional. There is a lack of clinical studies regarding the effectiveness of using AI to help with mental health, so, given what is already known about AI, it is best to avoid using it for such purposes.
Last semester, Dean of Student Living and Wellness Luis Inoa warned students about this issue over email. Noticing the rise of AI usage, he stated, “It is understandable that when people are navigating emotional struggles, they are drawn to tools that are readily available.” However, the Dean suggested that students should use their better judgment and, if anyone is struggling, the school has multiple resources, such as the counseling and wellness offices, that students can readily use.
In the end, my birthday cake was still delicious, but I could not help but wonder how far we had strayed from the intended taste and process. How much of it was just mistakes incorrectly confirmed by the chatbot? And how far do we stray from genuine connection when we use AI chatbots for companionship? How much of it just becomes an echo chamber of our own beliefs? Can we really call that helpful or sincere?