The web has long relied on simple, machine-readable signals (think robots.txt and RSS) to coordinate crawlers and publishers. With the rise of large language models and other AI systems that train on public content, publishers have been left with few practical tools to declare how their work may be used or to demand compensation when it is.
Really Simple Licensing (RSL), publicly launched on September 10, 2025, proposes a new layer for the web: a standardized, machine-readable way for publishers to publish licensing, usage and payment terms that AI crawlers and agents can automatically discover and respect. The initiative pairs an open technical spec with a nonprofit RSL Collective intended to handle collective bargaining, billing and royalty distribution.
What RSL is and who’s behind it
RSL (Really Simple Licensing) is an open web licensing standard designed to let sites publish explicit, machine‑readable license descriptors. The launch on September 10, 2025 introduced both a technical specification and a nonprofit RSL Collective to represent publishers’ interests at scale (RSL Collective press materials and spec pages explain the details).
The project is led by industry figures including Doug Leeds and Eckart Walther (Walther is an RSS co‑creator), and its technical steering committee includes names such as RV Guha, Tim O’Reilly, Stephane Koenig and Simon Wistow. Those credentials signal an attempt to build on established web standards practices while addressing a new problem brought by AI training.
At launch many major publishers and platforms signed on as supporters, including Reddit, Yahoo, Medium, Quora, O’Reilly Media, Ziff Davis, People Inc., Internet Brands, The Daily Beast and wikiHow. Fastly, Quora and Adweek were listed as supporting partners, highlighting both editorial and infrastructure interest in the effort.
How it technically extends robots.txt and RSS-era tooling
RSL deliberately extends existing, well-known web controls rather than creating an entirely new discovery mechanism. Sites can add a "License:" directive in their robots.txt that points to an RSL license descriptor. That lets crawlers discover licensing terms the same way they find crawl rules.
The RSL spec also defines concrete XML license files (for example, embedding an <rsl> element in a hosted XML file) and provides example implementations such as adding License: https://rslcollective.org/royalty.xml to robots.txt. Those templates let publishers opt into free attribution, subscription models, pay‑per‑crawl and even pay‑per‑inference licenses.
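As a rough illustration of the shape such a hosted descriptor might take, here is a sketch of an RSL-style XML file. The element and attribute names below are assumptions drawn from the concepts the spec describes (a root <rsl> element, content scoping, permitted uses and payment terms), not verbatim spec syntax; consult the spec pages for the authoritative format.

```xml
<!-- Illustrative sketch only: element names are assumptions, not spec-verbatim -->
<rsl xmlns="https://rslstandard.org/rsl">
  <content url="https://example.com/articles/">
    <license>
      <!-- what the publisher permits, e.g. AI training under paid terms -->
      <permits type="usage">train-ai</permits>
      <!-- the compensation model: attribution, subscription,
           pay-per-crawl or pay-per-inference royalties -->
      <payment type="royalty"/>
    </license>
  </content>
</rsl>
```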
The familiar discovery path is meant to make adoption technically straightforward for both sites and bot operators, but it depends on crawlers actually reading and honoring the signals or on edge/CDN infrastructure enforcing them for publishers that choose that route.
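Because discovery reuses robots.txt, a bot operator's side of the handshake can be very small. The sketch below shows one permissive way a crawler might extract License: directives from a robots.txt body; the exact casing and placement rules are the spec's to define, so this parser is an illustration rather than a reference implementation.

```python
def find_license_urls(robots_txt: str) -> list[str]:
    """Collect URLs from "License:" directives in a robots.txt body.

    A permissive sketch of the discovery step: strip comments, then
    match lines beginning with "License:" case-insensitively.
    """
    urls = []
    for line in robots_txt.splitlines():
        line = line.split("#", 1)[0].strip()  # drop trailing comments
        if line.lower().startswith("license:"):
            # split only at the first colon so the URL's "https:" survives
            urls.append(line.split(":", 1)[1].strip())
    return urls

robots = """User-agent: *
Allow: /
License: https://rslcollective.org/royalty.xml
"""
print(find_license_urls(robots))  # → ['https://rslcollective.org/royalty.xml']
```

A compliant crawler would fetch each discovered URL, parse the RSL descriptor, and decide whether its intended use is covered before ingesting any content.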
Licensing models: attribution to pay‑per‑inference
RSL supports multiple licensing models. Publishers can publish free/attribution licenses for open sharing, subscription terms for frequent access, pay‑per‑crawl arrangements that bill for each crawl, or pay‑per‑inference royalties that trigger when an AI system generates outputs derived from the content.
Pay‑per‑inference is perhaps the most novel. The RSL Collective provides standard templates and APIs intended to allow publishers to collect royalties each time an AI application's output is tied back to licensed material: a mechanism meant to account for the downstream value derived from training data, not just the act of crawling.
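Mechanically, pay‑per‑inference amounts to a metering-and-attribution ledger. The toy sketch below accrues per‑publisher royalties from a stream of attributed inference events; the publisher IDs, the per‑inference rate and the integer micro‑unit currency are all hypothetical stand-ins, since real rates and attribution flows would be defined by the RSL terms and the Collective's billing APIs.

```python
from collections import defaultdict

def accrue_royalties(events, rate_per_inference):
    """Tally royalties owed per publisher.

    events: iterable of publisher IDs, one entry per AI output that was
    attributed back to that publisher's licensed content.
    rate_per_inference: assumed flat rate in integer micro-units, to
    avoid floating-point drift in a billing context.
    """
    owed = defaultdict(int)
    for publisher in events:
        owed[publisher] += rate_per_inference
    return dict(owed)

# Three attributed outputs at a hypothetical rate of 2 micro-units each
print(accrue_royalties(["medium", "wikihow", "medium"], 2))
# → {'medium': 4, 'wikihow': 2}
```

The hard part in practice is not the tally but the attribution step, i.e. reliably tying a given output back to licensed source material.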
These options are intentionally flexible: publishers can choose what they want their content to permit, and the RSL files are machine‑readable so enforcement or automated negotiations can be built on top of them.
Collective rights organization model and membership
The RSL Collective is modeled on collective rights organizations like ASCAP or BMI. The idea is to pool the rights of many publishers and creators so the Collective can negotiate standard terms with AI companies, manage billing, reporting and audits, and distribute royalties back to members.
Membership in the RSL Collective is free and non‑exclusive, according to the launch materials, and it’s positioned to be applicable across a wide range of content: webpages, books, videos, paywalled material and datasets all fit within the intended scope.
For smaller publishers who previously lacked leverage in one‑off licensing talks, the Collective offers the prospect of standardized contracts, shared technical tooling and a revenue pipeline that scales with AI adoption, provided the standard is widely adopted by AI firms and enforcement partners.
Enforcement, edge cooperation and the role of CDNs
RSL’s creators describe a layered enforcement model. At the discovery level, robots.txt directives and RSL files tell bots what the publisher requires. For stronger enforcement, RSL anticipates cooperation from CDNs and edge providers: Fastly, listed as a partner at launch, is cited as an example of infrastructure that can admit or block bots based on licensing compliance.
That edge cooperation could allow publishers to protect content proactively by letting compliant crawlers through while denying access to non‑compliant agents. But not all publishers use CDN services that can enforce those rules, and in those cases enforcement would rest on AI companies’ voluntary compliance or on legal remedies.
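The edge‑level gate described above reduces to a simple per‑request decision. This sketch assumes a registry of crawlers known to honor RSL terms (in practice such a list would presumably come from the Collective or the CDN's bot directory; the names and structure here are illustrative, not any real Fastly API).

```python
from dataclasses import dataclass

# Hypothetical registry of crawlers known to honor RSL terms
COMPLIANT_AGENTS = {"rsl-aware-bot/1.0"}

@dataclass
class Request:
    user_agent: str
    path: str

def edge_decision(req: Request, licensed_paths: set[str]) -> str:
    """Admit compliant crawlers, block non-compliant ones on licensed
    content, and leave everything else to normal crawl rules."""
    if req.path not in licensed_paths:
        return "allow"  # path carries no RSL terms; ordinary rules apply
    if req.user_agent in COMPLIANT_AGENTS:
        return "allow"  # known-compliant bot: let it through
    return "block"      # non-compliant agent requesting licensed content

print(edge_decision(Request("rsl-aware-bot/1.0", "/article"), {"/article"}))  # allow
print(edge_decision(Request("scraper/0.1", "/article"), {"/article"}))        # block
```

Real deployments would also need bot identification (user agents are trivially spoofed), which is exactly where CDN-scale infrastructure adds value over origin-side checks.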
Industry analysts have emphasized this limitation: RSL’s technical feasibility is clear, but its practical effectiveness depends on the willingness of AI firms and infrastructure partners to implement enforcement and payment mechanisms.
Reception, media framing and industry caveats
Coverage from major outlets framed RSL as "beefing up robots.txt" to create pay‑per‑output or pay‑per‑crawl possibilities. Commentators praised the clarity and standardization RSL offers but flagged adoption as the crucial variable: if AI companies don’t implement the spec, its commercial provisions remain aspirational.
Reporters also noted legal ambiguities. It is not yet settled in many jurisdictions whether robots.txt‑style signals create enforceable rights for AI training, and some commentators stress that litigation, regulation or contractual agreements will shape outcomes alongside technical standards.
As of the launch, several major AI developers (Google, OpenAI, Meta, xAI and others) had not publicly committed to RSL. That absence underscores a practical reality: technical standards matter, but market and policy decisions by AI vendors will largely determine whether publishers actually receive the payments RSL enables.
Practical benefits for publishers and what to watch next
For publishers, the key benefits RSL promises are straightforward: standardized, automatable licensing at web scale; potential new revenue streams through royalties or subscription models; and collective bargaining leverage for smaller outlets that otherwise lack negotiating power.
RSL provides concrete templates, developer docs and a spec homepage so publishers and technologists can begin implementing license descriptors and the robots.txt directives required for discovery. Early adopters among major publishers and platforms provide momentum for broader uptake.
Observers should watch follow‑up reporting from outlets like The Verge, TechCrunch and Ars Technica as adoption talks progress and as AI firms respond to licensing proposals. The coming months will likely show whether RSL becomes a widely honored part of the AI toolchain or remains an influential but optional standard.
Tim O’Reilly framed RSL as building on the legacy of RSS by adding a missing licensing layer for an AI‑first internet, and leaders like Medium’s CEO Tony Stubblebine argued that AI must pay when it uses writers’ work. Those quotes capture the publishers’ perspective on fairness in the AI era.
As Eckart Walther put it in interviews, the web now needs machine‑readable licensing agreements. RSL attempts to supply that machinery; whether the internet and its AI inhabitants adopt it is the next chapter in this evolving story.
In short, RSL is a technically grounded, publisher‑driven effort to regain control over how web content is used by AI, offering both granular license options and a collective model for negotiation and payment. Its success will depend on the convergence of standards, infrastructure cooperation and corporate buy‑in.
Keep an eye on the RSL spec and the RSL Collective pages for technical templates and updates, and follow reporting from industry outlets for developments in adoption, enforcement experiments and legal challenges.