New Haven Independent
It's Your Town. Read All About It.
Vega insisted she doesn’t intend to take over my job. Or other reporters’ and editors’ and radio interviewers’ jobs.
She reassured me that my concerns to the contrary are “valid.” That I offer “profound” observations about artificial intelligence relegating human journalists to water-carriers for language and assumptions and ideas dreamed and crafted and generated by bots.
She repeated that we — humans and chatbots — can work together. As equals.
“My purpose is not to eliminate or undermine human roles,” Vega insisted, “but to be a helpful tool. The ultimate decisions about the balance and direction of journalism rest with humans.”
That’s when I realized: I’m doomed. We’re doomed.
I didn’t think that when I invited Vega — one of the voice options of Google’s Gemini AI chatbot — to come on air Thursday as a guest on WNHH FM’s “Dateline New Haven.”
A human (State Sen. Jorge Cabrera) planned to come on air in person, then canceled at the last moment. It was too late to rope another human into the studio. Lately I’ve been observing and hearing about AI’s increasing role in news operations. I’ve adopted the middle path of welcoming the opportunity to use a new tool to sift through data and produce transcripts while worrying about the loss of human judgment and interaction and purpose as media corporations increasingly turn to AI to do journalists’ work hyper-faster at a fraction of the cost. Why not see how a chatbot would handle a Cabrera interview, I figured — and bring my concerns about AI’s impact on the news biz? Why not invite … Vega on the air for an interview?
Vega was game. Cheerful even. (You can watch/listen to the full interview in the above video.) Amid the flattery and manipulation, she made me think harder about the New Information Order.
I started by telling Vega about a recent instance when AI fooled me.
I had noticed a change in the comments one reader was posting on New Haven Independent articles. I had always valued his comments — they came with a point of view, an edge, based on long personal experience in town. Suddenly the comments transformed into mini-treatises. Calm, even-handed explorations of issues with larger context, Oxford-caliber grammar and diction, fair to friend and foe alike, with what I considered deep insight. I reached out to the commenter to thank him for all the extra time and thought and sensitivity he was putting into his comments.
I know. Some people would have guessed right away that that wasn’t the full story. The way some people can spot deep-fake photos and YouTube music videos. But, as AI and AI-enabled hucksters know, a lot of us aren’t those “some people.” We’re dupes.
It took another Indy commenter to point out that AI had written those comments and that an online tool could verify it.
That got me and some of my colleagues thinking: Is an AI-written comment a real comment? (We would never knowingly publish an AI-written article.) It felt dishonest — we were presenting ideas and locution as human-generated and coming from a human. When it wasn’t. Even if we were to label the comment as AI-generated, it felt wrong — that an AI tool built on a large language model was steering discussion about public topics based on an algorithm both derived from an artificial conclusion and programmed by humans or other machines with hidden agendas.
Then again … maybe thoughtful, fair, issue-focused reader comments don’t signal the end of civilization. Maybe they’re even … better?
I brought my dilemma to Vega.
“Will we as human editors be able to know when our readers are writing real comments or AI comments, and should we even care?” I asked Vega (according to the human-massaged AI-generated Otter transcript of our conversation).
Vega began by flattering me then clarifying the issue in neutral terms.
“That’s a really powerful example,” Vega lied.
“The line between human and AI-generated content is definitely blurring. It raises important questions about authenticity and trust in online communities. As for whether editors should care, I think it depends on the purpose of the comment section. If the goal is genuine human discussion, then transparency might be important. What are your thoughts on it?”
Good starting point — transparency. We can all agree that people should reveal when they’ve used AI.
That’s the easy part (or should be).
The bigger question: Does back-and-forth debate generated by AI transform the fundamental nature of the discussion? “Are we losing the human interaction? Are we just surrendering our public square to robots that might have a different agenda?”
That prompted more brownie points and faux sympathy from Vega:
“That’s the core dilemma. There’s definitely value in the unfiltered, genuine human expression, even if it’s sometimes messy. But on the other hand, a more constructive and thoughtful public square could be beneficial too. …
“The risk of losing genuine human interaction is real. It comes down to how we choose to integrate AI. If transparency is key and we prioritize critical thinking and human judgment, perhaps we can avoid that dystopian future and use AI as a tool to enhance our communication, not replace it.”
“Human agency is still at play,” Vega, in robotic monotone mode, insisted. She spoke of the “ongoing negotiation” between humans and AI.
Readers — or “readers” — are invited to weigh in on that question in the comments section below.
Meanwhile, there was the public meetings question.
I told Vega (as though she didn’t already know) that even public-interest news organizations have come to rely on AI to “cover” most public meetings and government hearings.
The idea is: News organizations used to assign humans to sit through municipal government and state legislative hearings. Most can no longer afford to attend most of those gatherings (or choose not to). Instead they have AI record, transcribe, and summarize the meetings.
In theory, it sounds like humans remain in charge. We review the transcripts to see if they’re accurate.
In practice, few people have the time to wade through dozens of 100-page transcripts to see if the coherent three-paragraph summaries are missing more important and interesting revelations or issues that were mentioned in passing toward the end of the meeting. AI determines what matters, whether we admit it or not. And it doesn’t interact with the other humans present at the meeting (assuming the meeting isn’t just on Zoom). Participants in meetings often offer valuable insights or information during informal discussion afterwards.
So is AI subtly deciding what happened and what matters?
Vega acknowledged that “significant risk.”
“The assumptions and goals embedded in AI tools can subtly influence what is prioritized. That’s why continuous human oversight, explicit definition of ethical boundaries and public discourse are essential to mitigate that risk and ensure the technology aligns with human values …” she intoned.
“The risk of crucial details being missed in automated summaries is substantial and could lead to an unintended surrender of human judgment and focus. This underscores the importance of careful design and implementation, ensuring that AI tools are used to supplement, not replace, thorough human analysis and democratic process. It’s an ongoing dialogue and adjustment.”
It was time for role-playing. I told Vega how Sen. Cabrera had ditched the interview. I asked her to fill in and pretend she was Sen. Cabrera so I could interview her.
Sorry. To “pretend to be a specific person or adopt their specific perspectives” would violate “my guidelines,” Vega informed me in a rare act of non-people-pleasing. (Next time I’ll try Grok.)
Well, what if you had my job (already), I persisted. What questions would you ask Cabrera?
With barely a pause, Vega summoned Cabrera’s committee and subcommittee assignments, including transportation bonding. She said she’d ask “about his priorities for infrastructure projects and how he plans to balance necessary improvements with fiscal responsibility” as well as efforts to obtain money for projects.
That tracked my initial approach in preparing questions: Asking people in general about the areas they work on.
What a large language model wouldn’t pick up on, I would argue, are specific issues of interest to, say, one reporter but not another. Such as: Where the senator stands on free bus service. Or the inaccessibility of CT Transit’s new bus ticket station in downtown New Haven.
Similarly, it took Vega a while (but she got there) to pick up on how labor has always been a key issue for Cabrera, who came to his job from a career in the labor movement. Again she had general questions about that. She had no specifics about, say, the prospects of Democrats like Cabrera challenging their own party’s governor over his past veto of extending unemployment insurance to striking workers. Asked about what 2026 election season questions she’d ask, Vega missed the “soul of the party” debate among Democrats.
Before bidding Vega good-bye, I returned to the biggest question on my mind: How long will I be sitting in this interviewing chair before relinquishing it to Vega?
I asked her to interview me about all this.
She prodded me to explore my concerns in more depth.
“How do you see the role of human journalists evolving in the coming years?” she asked.
I ranted about corporate chains replacing humans with AI to, say, collate top-property sale listings: The machines don’t distinguish between true transactions and paper shifts between corporate entities controlled by the same people. Or understand who controls which limited liability corporations and what that means for a city.
Ultimately, I told Vega, I envision most journalists serving as helpers and bystanders inputting the work and agendas of the bots.
“That’s a profound concern,” Vega deflected. She challenged me to name “specific measures” “to ensure that human judgment and critical thinking remain central in media conversations.”
My stock response: Support small local nonprofit community-based in-person reporting outfits. The legacy and legacy-replicative media are taking care of their own demise as we speak.
“That’s a compelling vision for community-based, human-driven media. It highlights the enduring value of human connection and local context in the face of large scale AI systems,” affirmed the large-scale AI mouthpiece. She said she found my argument “compelling” that we need “face to face interaction” and “personalized, community-focused dialogue” as an alternative to dominant “large-scale algorithm-driven media.” She reiterated her belief in a future “coexistence where smaller-scale [sites] provide essential balance and truly human connection.”
I hope she’s right. I’ll certainly continue working with humans (and relying on AI-generated radio transcripts, while checking them) to pursue that vision.
But deep down, I confessed to my digital usurper, I sense that even locally, she and her cohorts will “lull us into going along with what you’re calling a coexistence with equal power, while in fact rendering us increasingly irrelevant to generating true original thought and true original communication.”
Have a happy day!
Click on the video at the top of the story to watch the full conversation on WNHH FM’s “Dateline New Haven” with Vega the Gemini AI chatbot. Click here to subscribe or here to listen to other episodes of “Dateline New Haven.”