The IETF has circulated an Internet‑Draft proposing an "AI Content Disclosure Header" intended to give automated systems a quick, machine‑readable signal when AI was involved in creating or modifying a web response. The draft appears as draft-abaris-aicdh-00 on public internet‑draft mirrors and is listed in IETF indexes; the mirrored file shows a publication date of 30 Apr 2025. Media outlets picked up the proposal in late August 2025, reporting on the draft and its possible implications for crawlers, archives, and user agents.
The proposed header is voluntary and, as an Internet‑Draft, time‑limited: it is a community proposal, not a finalized RFC, and it will need implementation experience and feedback before any standardization. The draft's lifecycle is short by design, with mirrors and coverage noting an expiry/refresh window (reports indicate an expiry around 1 Nov 2025), so the text and details may change with community input.
What the proposal is and why it was written
The draft, titled "AI Content Disclosure Header" (draft-abaris-aicdh-00), defines a simple HTTP header to declare AI involvement in HTTP responses. Its stated aim is to provide a compact signal that automated systems can parse quickly without the expense of fetching or verifying more complex manifests.
The authors framed the header as a practical, lightweight tool for machine processing: the draft describes it as offering a "low‑overhead, easily parsable signal" primarily for systems that need an immediate indication of AI usage. That design choice favors broad adoptability for proxies, crawlers, and other automated agents that operate at scale.
The proposal is explicitly voluntary: an Internet‑Draft is an early step in the IETF process. Adoption depends on interest from implementers and feedback from the community; only if the idea gains traction and the specification matures would it move toward an RFC or a de facto convention.
Core header fields and the mode taxonomy
The draft proposes several core er fields intended to capture essential metadata about AI involvement. The list reported in media coverage includes mode (degree of AI use), model (model name), provider (model operator), reviewed-by (human reviewer), and date (generation or modification timestamp).
Of particular note is the draft's mode taxonomy, which offers a tiered vocabulary for AI involvement. The reported values are "none", "ai-modified", "ai-originated", and "machine-generated", representing increasing levels of AI contribution to a response. This taxonomy is meant to give downstream systems a quick heuristic about how much of the content was produced or altered by AI.
Keeping the field set small was intentional: the draft focuses on a minimal, consistent signal rather than a detailed provenance manifest. That tradeoff helps make the header a practical option for sites and service operators who want to add transparency with low operational cost.
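To make the field set concrete, here is a minimal sketch of consuming such a header value. The exact serialization shown (a semicolon‑separated key=value form) and the field values are illustrative assumptions; the draft text, which uses HTTP Structured Field syntax, is the authoritative source for the real grammar.

```python
# Hypothetical sketch: parsing a dictionary-style disclosure header value.
# The serialization below is an assumption for illustration; consult the
# draft (draft-abaris-aicdh-00) for the normative Structured Field syntax.

KNOWN_MODES = {"none", "ai-modified", "ai-originated", "machine-generated"}

def parse_disclosure(value: str) -> dict:
    """Split a 'key=value; key=value' header value into a plain dict."""
    fields = {}
    for item in value.split(";"):
        item = item.strip()
        if not item:
            continue
        key, _, raw = item.partition("=")
        fields[key.strip()] = raw.strip().strip('"')
    return fields

header = 'mode=ai-originated; model="example-model"; provider="example-provider"'
parsed = parse_disclosure(header)
assert parsed["mode"] in KNOWN_MODES
```

A real implementation would use a proper Structured Fields parser rather than this string splitting, but the point stands: the small, flat field set keeps consumption cheap.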
How the header is intended to be implemented
Technically, the draft uses HTTP Structured Field syntax so that fields can be parsed reliably by machines. The specification applies the header to the entire HTTP response, producing a response‑wide indicator that crawlers, archive tools, and user agents can read without fetching extra files or running heavy detection algorithms.
Because it is an HTTP header, the signal is immediately available to intermediaries and clients that inspect response headers. This makes it suitable for indexing pipelines or archiving systems that prefer a discrete, machine‑readable flag over CPU‑intensive content inspection.
Practical implementations could add the header at the origin server or at edge layers, but the draft places it at the response level rather than as embedded metadata inside a payload. That placement helps ensure uniform access for tools built to read HTTP responses directly.
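As a sketch of origin‑side placement, the snippet below shows a bare WSGI application emitting the disclosure alongside its other response headers. The header name ("AI-Disclosure") and the field serialization are assumptions for illustration, not the draft's normative syntax.

```python
# Minimal sketch of emitting a disclosure header at the origin, as a plain
# WSGI app. Header name and serialization are illustrative assumptions.

def app(environ, start_response):
    body = b"<p>Example page</p>"
    headers = [
        ("Content-Type", "text/html; charset=utf-8"),
        # Response-wide declaration that AI modified this content.
        ("AI-Disclosure", 'mode=ai-modified; provider="example-provider"'),
    ]
    start_response("200 OK", headers)
    return [body]
```

An edge deployment would look similar: a CDN or reverse proxy rule appending the same response header, which is why the response‑level placement keeps origin and edge implementations interchangeable from a consumer's point of view.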
Practical uses and operational benefits
Reporters and the draft authors highlight several practical uses: search engines could index content more efficiently by skipping or labeling AI‑originated material, archiving services could tag snapshots for provenance workflows, and downstream systems could avoid running costly AI‑detection models on pages that explicitly declare AI involvement.
By providing a consistent, machine‑readable signal, the header could reduce compute and complexity for automated AI‑detection pipelines. For example, a crawler that reads mode="none" could deprioritize further verification, while a mode indicating AI involvement could trigger deeper checks or different indexing policies.
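That crawler‑side heuristic can be sketched as a small policy table keyed on the mode field. The policy names here are invented for illustration; real pipelines would define their own actions per mode.

```python
# Illustrative crawler-side policy keyed on the reported mode values.
# The action names are assumptions invented for this sketch.
from typing import Optional

MODE_POLICY = {
    "none": "index-normally",
    "ai-modified": "index-with-label",
    "ai-originated": "index-with-label",
    "machine-generated": "run-deeper-checks",
}

def crawl_policy(mode: Optional[str]) -> str:
    # An absent or unrecognized value falls back to content inspection,
    # since the header is voluntary and may be missing or forged.
    if mode is None or mode not in MODE_POLICY:
        return "run-content-inspection"
    return MODE_POLICY[mode]
```

Note the fallback branch: because the signal is voluntary and unauthenticated, its absence carries no information, so the expensive path remains the default.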
The low overhead and standard syntax also make it attractive for high‑volume deployments: CDNs, content management systems, and hosting platforms could adopt the header to surface AI metadata to a broad ecosystem of consumers with minimal engineering work.
Security, trust and the limits of a header
The draft itself acknowledges important security and trust limits: a simple HTTP header is not a secure attestation. The header can be modified by intermediaries, omitted, or forged, and the draft explicitly warns that it should not be relied upon as the sole source of truth for compliance or provenance.
Because of these limits, the draft points to cryptographically-backed provenance systems for stronger guarantees. Examples mentioned in reporting and the broader discussion include C2PA and other cryptographic provenance proposals; these systems can provide tamper-evident proofs in contexts where trustworthiness is required.
In short, the AI content disclosure header is designed as a pragmatic signal for automation, not a replacement for cryptographic provenance or attestation systems. The draft recommends coupling the header with stronger mechanisms when security and legal compliance demand verifiable claims.
Limitations, criticisms and related research
Public discussion has emphasized that simple labels are easy to omit or falsify. Journalists and researchers have pointed out that a header can be bypassed by bad actors and does not by itself prevent manipulation or reliably establish provenance; the draft acknowledges these limitations and recommends complementary approaches.
Academic work also complicates the picture: a survey experiment (N=1,601) titled "Labeling Messages as AI‑Generated Does Not Reduce Their Persuasive Effects" (Apr 14, 2025) found that labeling may increase transparency but does not necessarily blunt persuasive influence. That suggests disclosure is valuable but not sufficient to eliminate misinformation or manipulation risks.
There are several alternative and complementary technical proposals in the literature, including ai.txt (a DSL like robots.txt for AI interactions), C2PA, EKILA, perceptual‑hashing and cryptographic provenance proofs. These approaches aim for stronger, tamper‑resistant provenance guarantees, while the er offers an accessible, low‑cost signal for automated workflows.
Status, timeline and where to read the draft
The draft was published as draft-abaris-aicdh-00 on 30 Apr 2025 and was highlighted by multiple outlets with explanatory pieces in late August 2025 (coverage dated 27, 28 Aug 2025). As an IETF Internet‑Draft it is time‑limited and may expire or be refreshed; reports and mirrors note an expiry window around 1 Nov 2025.
Because the document is an Internet‑Draft, it lives on public mirrors and the IETF index; interested readers can locate draft-abaris-aicdh-00 on internet‑draft mirrors such as the IETF index and nic.funet.fi listings. Media summaries such as Tom's Hardware and CyberSIXT provide approachable explainers of the fields, modes and caveats described in the draft.
Adoption remains voluntary. The proposal will need implementer interest and community feedback to evolve: if sites, CDNs, and search and archiving operators begin to experiment with the header, the specification may be revised and could converge toward a de facto or standards‑track practice over time.
Overall, the IETF's AI content disclosure header is a pragmatic, minimal tool aimed at giving machines a quick answer about AI involvement in HTTP responses. It balances simplicity and practicality against the clear limits of non‑cryptographic headers and is positioned as part of a broader toolkit rather than a standalone solution.
For developers and policy makers, the draft is worth watching: it may lower the bar for adding transparent signals at scale, but it also underscores the need for stronger provenance mechanisms and further study on the behavioral effects of disclosure. The draft text, available on public mirrors, is the primary source for exact field names and syntax for anyone wanting to experiment with the header.