Anthropic’s chatbot Claude goes down amid ‘unprecedented demand’ – The Mercury News

By Shona Ghosh, Bloomberg
Anthropic PBC’s artificial intelligence chatbot Claude and related consumer-facing applications went down early Monday, with the startup saying it has been grappling with “unprecedented demand” for its services over the past week.
Nearly 2,000 users had reported Claude AI service disruptions at the outage’s peak around 6:40 a.m. New York time, according to service-monitoring website Downdetector. Anthropic said in a statement sent via WhatsApp that “consumer-facing surfaces” such as claude.ai and the company’s apps were offline. Businesses that have integrated Claude’s AI models into their own systems were unaffected.
“We appreciate everyone’s patience as we work to bring things back online while experiencing unprecedented demand for Claude over the last week,” Anthropic said in the statement. As of 10:50 a.m. New York time, the company said the outage had been resolved and all systems were operational, according to its status page.
Anthropic has seen a surge in usage of its services as it feuds with the US Defense Department over the potential use of its technology for mass surveillance and the development of autonomous weaponry. The Pentagon has declared Anthropic a supply-chain risk, an unprecedented move against an American company that threatens to have profound consequences for its business. The number of free users of Claude has increased more than 60% since January, and paid subscribers have more than doubled since October, according to Anthropic.
Anthropic has stipulated that its products not be used for surveillance of Americans or to make fully autonomous weapons and said on Friday that “no amount of intimidation or punishment from the Department of War will change our position.” The company vowed to challenge any formal notification that it’s been designated a supply-chain risk in court, and its chief executive officer Dario Amodei called the move “retaliatory and punitive” in an interview with CBS News.
Hours after Anthropic was declared a supply-chain risk, larger rival OpenAI agreed to deploy its own AI models within the Defense Department’s classified network, saying it had reached an agreement that reflects the firm’s principles that prohibit domestic mass surveillance and require “human responsibility for the use of force, including for autonomous weapon systems.”
OpenAI went on to defend its new deal, saying it had built a number of safeguards into its contract to ensure its models are used, and behave, as they should as part of the deployment. But over the weekend, some critics were already calling online for users to cancel their ChatGPT subscriptions in response to the agreement.
The Claude app has meanwhile topped Apple Inc.’s App Store for several days, and Silicon Valley workers have rallied around the company’s stance.
Copyright 2026 The Mercury News. All rights reserved. The use of any content on this website for the purpose of training artificial intelligence systems, algorithms, machine learning models, text and data mining, or similar use is strictly prohibited without explicit written consent.
