Sam Altman wants you to know he wasn’t mad. Altman was about to launch into a diatribe about the commercials that Anthropic, the maker of ChatGPT competitor Claude, rolled out for the Super Bowl. “First, the good part of the Anthropic ads: they are funny, and I laughed,” the OpenAI founder wrote.
The commercials target Altman’s company for its ongoing rollout of ads in ChatGPT. In each one of these spots, someone asks a question of a dead-behind-the-eyes person who represents ChatGPT. A few seconds into their answer, the A.I.-person pivots into selling something instead of just answering the question. Text appears on-screen: “Ads are coming to AI,” then: “But not to Claude.” (The company slightly tweaked that copy in the ad that actually aired during Super Bowl 60.)
Altman was furious. He called the ads “clearly dishonest,” asserting that OpenAI won’t run ads in that fashion because its users would reject them. He said, “Anthropic wants to control what people do with A.I.,” calling his competitor an “authoritarian company.” Altman cribbed some of the language that the founders of Robinhood, the stock-trading app, have used in finance: “We believe everyone deserves to use A.I. and are committed to free access, because we believe access creates agency.” ChatGPT serving ads is a matter of human liberty, if one really thinks about it.
Amid a deluge of ads for A.I. products, it was easy to shrug the whole thing off as part of a big, messy circus. There were more A.I. commercials during the Super Bowl (15) than New England Patriots points (13). As a collective force, it was an exhausting blitz, designed to get the American public excited about A.I. in a way the industry’s products still have not. (It also came at a moment when ad execs seem very light on creative ideas.)
But the most interesting thing about all of the A.I. Super Bowl ads was not how goddamned many of them there were. It was the rupture they revealed in how the big players are selling themselves. The fight wasn’t about whether people will adopt A.I. (taken as a given) or whether they’ll actually like it (we’ll see), but about what A.I. should be for in the first place. One of these companies has a slightly more tolerable vision than the rest, although it’s fair to question whether any one of these ideals could ever vanquish the others.
The most telling part of Altman’s exchange with Anthropic occurred when the OpenAI CEO introduced a little dose of class warfare. “Anthropic serves an expensive product to rich people,” Altman wrote elsewhere in his 420-word missive. “We are glad they do that and we are doing that too, but we also feel strongly that we need to bring AI to billions of people who can’t pay for subscriptions.”
Altman was careful not to sound too much like a man of the people. After all, ChatGPT and Claude have similar $20-a-month tiers. (I have subscribed to each of them by turns, and lately to both.) Both also sell $100 and/or $200 monthly subscriptions to power users whose heavier use requires more computing power. But the companies have different revenue models, and the features they offer map onto those ways of making money.
While both of them have their hands in as many revenue pots as possible, the big thing Anthropic brags about is how many business customers it serves—more than 300,000, it said last September, with its largest accounts each paying more than $100,000 a year for its services. Anthropic sells productivity tools to companies. Most famous is Claude Code, the command-line coding tool that will do one of two things: If you (like me) have no idea how to code, it will, with some trial and error, help you build personal app projects you never thought you could build. Most will not amount to anything that lasts; a few will assist you with your job, and one might help you build a killer draft strategy for your fantasy baseball league. If you’re an actual software developer, you’ll use it more effectively, maybe with a dash of fear that the machine will come for your job. Reviews vary among people who really know code, and the tool raises security concerns that companies cannot wave away.
OpenAI is swimming in those waters too, and recently unveiled a big update to its own coding tool, Codex. But OpenAI’s golden goose is not selling fancy coding software to businesses. It’s generating eye-watering user counts—800 million active users a week, as Altman claimed in October—and trying to figure out how to profit from them. Personally, the first time a generative A.I. tool made me feel the magic and majesty of technology was when I tried Claude Code. I could not possibly be less interested in A.I. video or image generation, which carry dire consequences for society that Altman should be ashamed of just shrugging off. But OpenAI, through ChatGPT’s image generation and its Sora video app, thinks it has a carrot.
Altman bragged about how ChatGPT has more free users in the state of Texas alone than Claude has total users, and he spoke of spreading A.I. to the masses. Whether or not Altman is a true believer in A.I.’s equalizing powers, he’s talking his book. It’s a lot easier to gain mass user adoption if your A.I.’s value proposition includes the ability to dream up a hyperrealistic photograph and make it show up on your phone. You can sell that to a bunch of teenagers! Claude’s business-oriented productivity pitch will have a harder time landing with that constituency.
Altman isn’t alone in thinking that. Google’s Gemini has some slick capabilities, albeit ones that none of us should feel all that comfortable with. Because it’s looped into your Google account, it can hunt for old emails and documents in a way that traditional search cannot. (For example, you can tell it to find a document from three or four years ago in which you wrote a specific thing, rather than just trial-and-erroring a bunch of search terms. Often, it will even work!) It’s also about to drive Apple’s Siri, giving Google another foothold in a product that’s in most of our pockets. It’s got its own developer suite that works fine for lay users and that some coder friends have told me is just as serious as Claude Code. But these tools are not as easy to mass-market as the image generation Google showed off in the Super Bowl commercial it aired on Sunday.
All of this is unsatisfying. More than three years into the generative A.I. boom, the techno-optimist on my shoulder is holding out for an industry that saves me time on the business tasks that don’t excite me but skips the most pointless and destructive applications of the technology. As the major players currently bill themselves, the A.I. lab that most fits into this vision is Anthropic, which is why Claude is the most likely of these services to keep getting my $20 after I file this story. If not wanting my robot assistant/future overlord to have an image tool makes me an A.I. elitist, as Altman implies, then I will proudly fit myself for a monocle.
That is the opposite of Altman’s vision; OpenAI aims to be ubiquitous, racking up as many users as possible with the help of made-up pictures and videos and a chatbot that will talk you through all sorts of questions and problems. It has sometimes done that very poorly, allegedly up to encouraging a user to end his own life. OpenAI still loses billions of dollars every year, and a whole lot of people have questions about the company’s business acumen and culture. (These people seem to include the boss of Nvidia, OpenAI’s most critical partner.) But when you have 800 million users, your pitch isn’t not working.
Meanwhile, Google wants you to build your entire life around its products, same as ever, and sees Gemini as a tool to that end. It’s agnostic about whether it woos you with image-maker Nano Banana or a better way to search through your Google Drive. If there is anything comforting about Gemini, it’s that it does not represent Google shifting its corporate vision much. “Use Google products more” is still the main thing, and there could be a dark comfort in knowing that the company knew everything it needed to know about you long before you ever tried Gemini. In this sense, it feels less disruptive than its competitors.
The biggest skeptics of the generative A.I. industry may yet be right that the financials don’t really work, and the global stock market is due for a reckoning. But the concept of generative A.I. flat-out going away is a fantasy, absent a regulatory crackdown that is nowhere on the menu of American politics. This means that the rest of us have to decide which vision provides the most benefit for the least harm. I’m partial to the concept of more help with grunt work and less visual slop of the sort that can ruin lives and influence elections. There is a problem, though, and it comes from the unlikelihood that OpenAI or Anthropic will ever vanquish the other. Our A.I. future probably won’t be à la carte, and for now we’ve already ordered the whole thing.