Is Pluribus actually about AI? We asked Vince Gilligan

Watching the sci-fi show reminds me of using ChatGPT, and not in a good way

[Ed. note: Spoilers ahead for Pluribus episode 3.]
A lot happens in episode 3 of Vince Gilligan’s science fiction series Pluribus, but the plot could be summed up simply as: “Carol (Rhea Seehorn) attempts to find the limits of what the hivemind is willing to do for her, and discovers there are none.” No matter what she asks for, humanity’s collective consciousness says yes — even when it really shouldn’t. The hivemind is also sycophantic, constantly telling Carol how great she is, and how much it loves her.
Watching the latest episode of Pluribus felt weirdly familiar. Then I realized: The way Carol interacts with the hivemind is almost exactly what it’s like to use ChatGPT. Constant positive reinforcement and the innate desire to always say yes are both traits that define OpenAI’s popular generative AI chatbot.
Was this intentional, or just a weird coincidence? Was Pluribus inspired by Gilligan’s own interactions with AI? I went straight to the source to find out, and got a blunt answer.
“I have not used ChatGPT, because as of yet, no one has held a shotgun to my head and made me do it,” Gilligan tells Polygon. “I will never use it. No offense to anyone who does.”
That said, Gilligan and Seehorn both have some thoughts on why audiences might see Pluribus as a metaphor for AI, even if that was never the goal. But before I get to that, let me unpack my argument a bit more thoroughly.
Three particular moments from Pluribus episode 3 made me think of ChatGPT, which I’ve experimented with both for personal use and very lightly in my professional work. (I used to ask it for help coming up with synonyms for overused words, but I’ve reverted to checking the actual thesaurus.)
First, there’s the scene where Carol asks the hivemind for a hand grenade, assuming it will refuse to give her such a dangerous weapon. She’s wrong, and humanity’s collective consciousness races to procure the deadly device. This backfires when Carol uses the grenade, blowing up a part of her own home and seriously injuring her “chaperone” Zosia (Karolina Wydra). An agreeable, non-human intelligence saying yes to an irresponsible request that endangers the user… sound familiar?
Second, there’s the conversation between Carol and Zosia that takes place between the grenade’s arrival and the explosion. Carol appears to finally open up to her hivemind chaperone, and invites Zosia into her home for a drink. Their conversation seems more like a human talking to ChatGPT than two people actually conversing. Carol asks, “How do you say cheers in Sanskrit?” Zosia answers immediately. As they continue to drink, Zosia happily explains the etymology of the word “vodka” — exactly the kind of random factoid an AI agent might spout off.
And third: Later, after the grenade goes off and Zosia is still in recovery, a random character we’ve never seen before, wearing a DHL delivery uniform, approaches Carol in the hospital waiting room. Speaking for the hivemind, he explains that Zosia will survive, despite some blood loss. Carol asks, “Why would you give me a hand grenade?” and he answers, “You asked for one.”
“Why not give me a fake one?” she replies.
The man looks confused. “Sorry if we got that wrong, Carol,” he says.
“If I asked right now, would you give me another hand grenade?”
“Yes.”
The conversation continues from here as Carol attempts to come up with a weapon so recklessly dangerous the hivemind would refuse to obtain one for her. A bazooka? A tank? A nuclear bomb?
The man bristles at that last one, but when he’s forced to answer whether he’d obtain a nuke for Carol, he replies, “Ultimately, yes.”
This all reads like a maddeningly accurate example of what talking to ChatGPT can often feel like. These tools are designed not to be accurate or ethical, but to give the user a satisfactory answer. As a result, they frequently come across as sycophantic and cloying (or even harmful in some cases). And if ChatGPT does make a mistake or hallucinate a fact and you catch it, it will happily apologize and attempt to move forward — as if it hadn’t just given you good reason to distrust whatever it says next.
Carol’s experience with the hivemind feels eerily similar. This is an intelligence that wants to make her happy above all else, even if that means doing something extremely dumb, like giving her access to a grenade or a nuclear bomb.
But Vince Gilligan says that wasn’t what he was thinking of when he wrote Pluribus. In fact, when he first came up with the idea for the series, ChatGPT didn’t even exist.
“I wasn’t really thinking of AI,” he says, “because this was about eight or 10 years ago. Of course, the phrase ‘artificial intelligence’ certainly predated ChatGPT, but it wasn’t in the news like it is now.”
However, Gilligan says that doesn’t invalidate my theory.
“I’m not saying you’re wrong,” he continues. “A lot of people are making that connection. I don’t want to tell people what this show is about. If it’s about AI for a particular viewer, or COVID-19 — it’s actually not about that, either — more power to anyone who sees some ripped-from-the-headlines type thing.”
Seehorn takes it one step further, suggesting that the beauty of Gilligan’s work is how well its relatable storytelling maps onto whatever subject the viewer might be grappling with at the moment.
“One of the great things about his shows is that, at their base, they are about human nature,” she says. “He’s not writing to themes, he’s not writing to specific topics or specific politics or religions or anything. But you are going to bring to it where you’re at when you’re watching.”
Pluribus airs weekly on Apple TV. Episodes 1-3 are streaming now.

We want to hear from you! Share your opinions in the thread below and remember to keep it respectful.

“non-human intelligence saying yes”: I will disagree with you about the hive being non-human. All some random alien did was send a sequence of four proteins for an mRNA sequence. Now, that strand obviously reprogrammed most of the human race into psychics, an as-of-yet unproven sixth sense, where all memories and thoughts are shared with the collective.

That may seem non-human, but is it? Isn’t being part of a religion just another hive-mind collective, only without the sharing of thoughts? Though I would say the point of confessing is exactly to share your impure thoughts so as to be cleansed by the whole – depending on your religion. But it is a collective that shares a purported greater purpose.

You forget the funniest part of the show, which is the beginning. Carol, who was uncomfortable with a waitress who could fly a jumbo jet last episode, now has real pilots flying Zosia and her. Carol asks Zosia a question and the pilots respond over the intercom. Carol: “That is weirder than the girl from TGIFriday’s, are you doing this to freak me out?” and the captain responds, “Uh, that’s an affirmative, Carol.” The hivemind has a sense of humor. That is one of the most human exchanges we have seen between Carol and the human hivemind.

I have no idea where Gilligan is going with this, but I do find the series challenging the viewer in a different way. Honestly, when I saw the first episode I was thinking about how fascism spreads, because that is where my mind has been since 2016. I do take Carol’s side: even though there are 13 people unaffected, the entire hivemind is focused on keeping them happy and healthy only until it figures out how to add them to the collective.

I do see the ChatGPT/LLM allegory, as well as others pointing to a COVID allegory. The fact that he began this before either was ever apparent is amazing. I’m wondering, though, if the hivemind isn’t just the Internet and how memes spread.

Great article!
I noticed the AI-adjacent allegory from the very beginning. But if anything is influencing the show, it’s social media negatively influencing people’s critical thinking. I think Vince’s anti-AI stance is short-sighted. The media focuses on AI’s generative tools because it’s a hot-button topic. What doesn’t get focus is AI’s role as a research and collaboration tool, where the user is making all the decisions but using AI as a data-driven assistant. You can review your own work, quickly identify reference materials, and do deep research instantly instead of spending hours manually collating. AI as an indexing, data-gathering tool is already changing the world for the better. Copyright laws already protect creative works and will continue to do so. So, AI is to information as the computer was to math, and the internet was to networking.

Also, one point of pushback on AI agreeing and giving you whatever you ask for: AI will not directly support harmful or dangerous requests. Not saying people haven’t jailbroken AI before, or that there aren’t unfortunate events where AI was involved in harm, but AI in standard use is not going to give you a metaphorical grenade or nuclear weapon. It will tell you no, just like it does with other harmful requests.

