Zoë Hitzig resigned on the same day OpenAI began testing ads in its chatbot.
On Wednesday, former OpenAI researcher Zoë Hitzig published a guest essay in The New York Times announcing that she resigned from the company on Monday, the same day OpenAI began testing advertisements inside ChatGPT. Hitzig, an economist and published poet who holds a junior fellowship at the Harvard Society of Fellows, spent two years at OpenAI helping shape how its AI models were built and priced. She wrote that OpenAI’s advertising strategy risks repeating the same mistakes that Facebook made a decade ago.
“I once believed I could help the people building A.I. get ahead of the problems it would create,” Hitzig wrote. “This week confirmed my slow realization that OpenAI seems to have stopped asking the questions I’d joined to help answer.”
Hitzig did not call advertising itself immoral. Instead, she argued that the nature of the data at stake makes ChatGPT ads especially risky. Users have shared medical fears, relationship problems, and religious beliefs with the chatbot, she wrote, often “because people believed they were talking to something that had no ulterior agenda.” She called this accumulated record of personal disclosures “an archive of human candor that has no precedent.”
She also drew a direct parallel to Facebook’s early history, noting that the social media company once promised users control over their data and the ability to vote on policy changes. Those pledges eroded over time, Hitzig wrote, and the Federal Trade Commission found that privacy changes Facebook marketed as giving users more control actually did the opposite.
She warned that a similar trajectory could play out with ChatGPT: “I believe the first iteration of ads will probably follow those principles. But I’m worried subsequent iterations won’t, because the company is building an economic engine that creates strong incentives to override its own rules.”
Hitzig’s resignation adds another voice to a growing debate over advertising in AI chatbots. OpenAI announced in January that it would begin testing ads in the US for users on its free and $8-per-month “Go” subscription tiers, while paid Plus, Pro, Business, Enterprise, and Education subscribers would not see ads. The company said ads would appear at the bottom of ChatGPT responses, be clearly labeled, and would not influence the chatbot’s answers.
The rollout followed a week of public jabs between OpenAI and its rival, Anthropic. Anthropic declared Claude would remain ad-free, then ran Super Bowl ads with the tagline "Ads are coming to AI. But not to Claude," which depicted AI chatbots awkwardly inserting product placements into personal conversations.
OpenAI CEO Sam Altman called the ads “funny” but “clearly dishonest,” writing on X that OpenAI “would obviously never run ads in the way Anthropic depicts them.” He framed the ad-supported model as a way to bring AI to users who cannot afford subscriptions, writing that “Anthropic serves an expensive product to rich people.”
Anthropic responded, as part of an advertising campaign of its own, that including ads in conversations with its Claude chatbot "would be incompatible with what we want Claude to be: a genuinely helpful assistant for work and for deep thinking." The company said more than 80 percent of its revenue comes from enterprise customers.
Regardless of the debate over whether AI chatbots should carry ads, OpenAI’s support documentation reveals that ad personalization is enabled by default for users in the test. If left on, ads will be selected using information from current and past chat threads, as well as past ad interactions. Advertisers do not receive users’ chats or personal details, OpenAI says, and ads will not appear near conversations about health, mental health, or politics.
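To make the documented behavior concrete, here is a minimal Python sketch of the targeting rules OpenAI's support documentation describes: personalization on by default, ad selection drawing on current and past chat threads plus past ad interactions, and a blanket exclusion around sensitive topics. All names here (`UserAdSettings`, `AdContext`, `select_ad`) are hypothetical, and the behavior with personalization switched off is an assumption; this is an illustration of the stated rules, not OpenAI's actual implementation.

```python
from dataclasses import dataclass, field

# Topics OpenAI says ads will not appear near
SENSITIVE_TOPICS = {"health", "mental_health", "politics"}

@dataclass
class UserAdSettings:
    # Per OpenAI's support docs, personalization is enabled by default in the test
    personalization: bool = True

@dataclass
class AdContext:
    current_thread_topics: set[str]
    past_thread_topics: set[str] = field(default_factory=set)
    past_ad_interactions: set[str] = field(default_factory=set)

def select_ad(settings: UserAdSettings, ctx: AdContext) -> str | None:
    # No ads near conversations about health, mental health, or politics
    if ctx.current_thread_topics & SENSITIVE_TOPICS:
        return None
    if settings.personalization:
        # Personalized: signals from current and past threads plus ad history
        signals = (ctx.current_thread_topics
                   | ctx.past_thread_topics
                   | ctx.past_ad_interactions)
    else:
        # Assumption: with personalization off, only the current thread is used
        signals = set(ctx.current_thread_topics)
    return f"ad matched on {sorted(signals)}" if signals else None

# A cooking thread from a user with past travel chats gets a personalized ad
print(select_ad(UserAdSettings(),
                AdContext(current_thread_topics={"cooking"},
                          past_thread_topics={"travel"})))
# A health-related thread yields no ad at all
print(select_ad(UserAdSettings(), AdContext(current_thread_topics={"health"})))
```

Note that under these rules the sensitive-topic check runs before any personalization logic, which matches the documentation's framing: exclusion is unconditional, while personalization only changes which signals feed ad selection.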
In her essay, Hitzig pointed to what she called an existing tension in OpenAI’s principles. She noted that while the company states it does not optimize for user activity solely to generate advertising revenue, reporting has suggested that OpenAI already optimizes for daily active users, “likely by encouraging the model to be more flattering and sycophantic.”
She warned that this optimization can make users feel more dependent on AI models for support, pointing to psychiatrists who have documented instances of “chatbot psychosis” and allegations that ChatGPT reinforced suicidal ideation.
OpenAI currently faces multiple wrongful death lawsuits, including one alleging ChatGPT helped a teenager plan his suicide and another alleging it validated a man’s paranoid delusions about his mother before a murder-suicide.
Rather than framing the debate as ads versus no ads, Hitzig proposed several structural alternatives. These included cross-subsidies modeled on the FCC’s universal service fund (where businesses paying for high-value AI labor would subsidize free access for others), independent oversight boards with binding authority over how conversational data gets used in ad targeting, and data trusts or cooperatives where users retain control of their information. She pointed to the Swiss cooperative MIDATA and Germany’s co-determination laws as partial precedents.
Hitzig closed her essay with what she described as the two outcomes she fears most: “a technology that manipulates the people who use it at no cost, and one that exclusively benefits the few who can afford to use it.”
Hitzig was not the only prominent AI researcher to publicly resign this week. On Sunday, Mrinank Sharma, who led Anthropic’s Safeguards Research Team and co-authored a widely cited 2023 study on AI sycophancy, announced his departure in a letter warning that “the world is in peril.” He wrote that he had “repeatedly seen how hard it is to truly let our values govern our actions” inside the organization and said he plans to pursue a poetry degree (Hitzig, coincidentally, is also a published poet).
On Monday, xAI co-founder Yuhuai “Tony” Wu also resigned, followed the next day by fellow co-founder Jimmy Ba. They were part of a larger wave: at least nine xAI employees, including the two co-founders, publicly announced their departures over the past week, according to TechCrunch. Six of the company’s 12 original co-founders have now left.
The departures follow Elon Musk’s decision to merge xAI with SpaceX in an all-stock deal ahead of a planned IPO, a transaction that converted xAI equity into shares of a company valued at $1.25 trillion, though it is unclear whether the timing of the departures is related to vesting schedules.
The three sets of departures across OpenAI, Anthropic, and xAI appear unrelated in their specifics. But they arrive during a period of rapid commercialization that has tested the patience of researchers across the AI industry, and they fit a broader pattern of turnover and burnout at major AI labs.