Creative Commons announces tentative support for AI ‘pay-to-crawl’ systems – TechCrunch

After announcing a framework for an open AI ecosystem earlier this year, the nonprofit Creative Commons has come out in favor of “pay-to-crawl” technology — a system to automate compensation for website content when it is accessed by machines, such as AI web crawlers.
Creative Commons (CC) is best known for spearheading the licensing movement that allows creators to share their works while retaining copyright. In July, the organization announced a plan to provide a legal and technical framework for dataset sharing between companies that control the data and the AI providers that want to train on it.
Now, the nonprofit is tentatively backing pay-to-crawl systems, saying it is “cautiously supportive.”
“Implemented responsibly, pay-to-crawl could represent a way for websites to sustain the creation and sharing of their content, and manage substitutive uses, keeping content publicly accessible where it might otherwise not be shared or would disappear behind even more restrictive paywalls,” a CC blog post said.
Spearheaded by companies like Cloudflare, pay-to-crawl would charge AI bots every time they scrape a site to collect its content for model training and updates.
In the past, websites freely allowed webcrawlers to index their content for inclusion into search engines like Google. They benefited from this arrangement by seeing their sites listed in search results, which drove visitors and clicks. With AI technology, however, the dynamic has shifted. After a consumer gets their answer via an AI chatbot, they’re unlikely to click through to the source.
This shift has already been devastating for publishers by killing search traffic, and it shows no sign of letting up.
A pay-to-crawl system, on the other hand, could help publishers recover from the hit AI has dealt to their bottom line. Plus, it could work better for smaller web publishers that don’t have the pull to negotiate one-off content deals with AI providers. Major deals have been struck between OpenAI and publishers like Condé Nast and Axel Springer, as well as between Perplexity and Gannett, Amazon and The New York Times, and Meta and various media publishers, among others.
CC offered several caveats to its support for pay-to-crawl, noting that such systems could concentrate power on the web. They could also block access to content for “researchers, nonprofits, cultural heritage institutions, educators, and other actors working in the public interest.”
It suggested a series of principles for responsible pay-to-crawl, including not making pay-to-crawl a default setting for all websites and avoiding blanket rules for the web. In addition, it said that pay-to-crawl systems should allow for throttling, not just blocking, and should preserve public interest access. They should also be open, interoperable, and built with standardized components.
Cloudflare isn’t the only company investing in the pay-to-crawl space.
Microsoft is also building an AI marketplace for publishers, and smaller startups like ProRata.ai and TollBit have started to do so, as well. Another group called the RSL Collective announced its own spec for a new standard called Really Simple Licensing (RSL) that would dictate what parts of a website crawlers could access but would stop short of actually blocking the crawlers. Cloudflare, Akamai, and Fastly have since adopted RSL, which is backed by Yahoo, Ziff Davis, O’Reilly Media, and others.
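RSL’s declarative approach — stating terms rather than blocking — can be sketched as a policy a site publishes and a well-behaved crawler consults before fetching. Nothing here is enforced server-side; the rule format below is a made-up stand-in for the example and is not the actual RSL syntax:

```python
# Hypothetical stand-in for a declarative licensing policy (not real RSL syntax).
# A site publishes per-path terms; compliant crawlers look them up before fetching.

LICENSE_RULES = [
    ("/archive/", "pay-per-crawl"),   # paid access to the archive
    ("/research/", "free"),           # open for public-interest use
    ("/", "no-ai-training"),          # default for everything else
]

def terms_for(path: str) -> str:
    """Return the license terms for a URL path (longest matching prefix wins)."""
    best = max((rule for rule in LICENSE_RULES if path.startswith(rule[0])),
               key=lambda rule: len(rule[0]))
    return best[1]
```

The design choice this illustrates is the one the article describes: the standard dictates which parts of a site crawlers may access and on what terms, while leaving enforcement to the crawler’s compliance rather than to a block at the server.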
CC was also among those announcing support for RSL, which it backed alongside CC Signals, its broader project to develop technology and tools for the AI era.

© 2025 TechCrunch Media LLC.
