Governments are moving quickly from debating whether AI-generated content should be labeled to specifying exactly how it must be labeled, by whom, and in what formats. The shift is driven by a simple reality: synthetic text, audio, images, and video are now easy to produce at scale, and audiences often cannot tell what is authentic without help.
In 2025 and 2026, the European Union is building a compliance path that combines binding law (the EU AI Act) with practical implementation guidance (a dedicated Code of Practice on marking and labelling). At the same time, China has adopted its own labeling measures, showing that “label the output” is becoming a global policy baseline, though the details differ by jurisdiction.
1) Why “labeling AI output” suddenly became a regulatory priority
AI-generated media has moved beyond novelty into daily communication: customer support chats, marketing copy, political content, influencer imagery, and voice/video clips. As quality rises and costs fall, the risk is less about “AI exists” and more about confusion, manipulation, and attribution, especially when content is shared without context.
Policymakers increasingly frame transparency as a prerequisite for trust. Labeling is meant to help people interpret what they see and hear, while also supporting downstream detection tools and platform enforcement. That is why recent rules focus on both human-facing disclosure (what users can perceive) and technical marking (what machines can detect at scale).
Another reason is enforceability. Regulators have learned that broad ethical principles are hard to audit. Output marking, particularly when standardized, creates measurable obligations: did the provider mark it, can it be detected, and did the deployer disclose it when required?
2) The EU AI Act’s Article 50: binding marking and disclosure (from 2 Aug 2026)
The EU AI Act introduces specific transparency duties for synthetic content in Article 50. According to Article 50, providers of AI systems that generate synthetic content must ensure outputs are “marked in a machine-readable format and detectable as artificially generated or manipulated,” covering synthetic text, audio, images, and video. The provision is set to be enforceable from 2 August 2026. Source: https://artificialintelligenceact.eu/article/50
Article 50 also separates a second category of responsibility: deployers (those who use AI systems in real-world contexts) must disclose certain deepfakes and certain AI-generated or manipulated text where there is a public-interest dimension. In other words, the law doesn’t treat “making the model” and “using the model” as the same job; it assigns different transparency tasks to each.
This dual structure matters for implementation. A provider might embed technical markers across outputs, while a deployer (a newsroom, campaign, brand, or platform operator) may have additional duties to tell audiences when content is synthetic, especially where the potential for deception is higher.
3) The European Commission’s new Code of Practice on marking and labelling
To make Article 50 workable in practice, the European Commission launched work on a dedicated code of practice specifically on marking and labelling AI-generated content on 5 November 2025, with a seven-month drafting process. The Commission frames the code as a way to help providers meet AI Act transparency obligations and to support machine-readable marking across synthetic audio, images, video, and text, as well as disclosure duties for deployers (deepfakes and some public-interest text). Source: https://digital-strategy.ec.europa.eu/en/news/commission-launches-work-code-practice-marking-and-labelling-ai-generated-content
The key policy idea behind a code is standardization. A legal requirement like “marked in a machine-readable format” can be interpreted in many ways; a code can converge the market around repeatable approaches (watermarking, metadata, content credentials, digital signatures, or other detectable signals) without having to amend the law each time the technology changes.
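To make the “mark and detect” round trip concrete, here is a minimal stdlib-only sketch. It is purely illustrative: real schemes such as signed content credentials embed claims in the file itself, whereas this toy version keys a detached JSON marker to a content hash. All field names are invented for the example.

```python
# Minimal sketch (stdlib only) of a hypothetical machine-readable marking
# record for synthetic content. This is NOT a standardized format; it only
# illustrates the mark/detect round trip that "machine-readable and
# detectable" implies.
import hashlib
import json

def mark(content: bytes, provider: str) -> str:
    """Produce a machine-readable marker for a piece of synthetic content."""
    return json.dumps({
        "sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,      # the claim itself
        "provider": provider,      # hypothetical field name
    })

def detect(content: bytes, marker_json: str) -> bool:
    """Check that the marker matches the content and asserts AI generation."""
    marker = json.loads(marker_json)
    return (marker.get("ai_generated") is True
            and marker.get("sha256") == hashlib.sha256(content).hexdigest())
```

Note a deliberate limitation: a hash-keyed marker breaks as soon as the content is re-encoded or edited, which is precisely why the regulatory push is toward embedded, robust signals rather than fragile side-channel records.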
It also provides a shared reference point for audits and procurement. If buyers, platforms, and regulators can all point to a code-based baseline, compliance becomes less about ad hoc interpretation and more about implementing known controls that can be tested and documented.
4) Timeline: draft in Dec 2025, revisions in 2026, enforcement in Aug 2026
The first draft of the EU Code of Practice on Transparency of AI‑Generated Content was published on 17 December 2025. Updates indicate a second draft is expected around March 2026, with a final version targeted around June 2026. Source: https://www.techuk.org/resource/dispatch-from-brussels-updates-on-eu-tech-policy-december
This schedule is not accidental. Article 50 becomes enforceable on 2 August 2026, so a final code in mid‑2026 would give providers and deployers a narrow but realistic runway to implement marking, UI disclosure, documentation, and compliance testing before obligations bite.
The timeline also signals that transparency compliance is becoming a product requirement, not just a policy statement. Organizations that wait until the last minute may struggle to retrofit marking into complex generation pipelines, partner integrations, and multi-channel publishing workflows.
5) Who does what: provider marking vs deployer disclosure
Legal analysis of the EU draft code highlights how it operationalizes Article 50 by separating responsibilities: providers handle the technical side (marking/detectability), while deployers handle the public-facing side (disclosures in specific contexts). This makes the transparency framework more actionable because it maps duties to the actor that has the right levers. Source: https://www.ddg.fr/actualite/transparency-of-ai-generated-content-in-depth-legal-analysis-of-the-draft-code-of-practice-implementing-article-50-of-the-eu-ai-act
For providers, the core challenge is designing marking that survives real-world conditions: recompression of video, screenshots of images, copy/paste of text, edits by users, and format conversions. The requirement that outputs be “detectable” pushes beyond a superficial badge toward signals that can be checked reliably by machines.
For deployers, the challenge is governance and context. A deployer may need policies for when a deepfake disclosure is triggered, how to display it, how to log decisions, and how to ensure third-party agencies or contractors follow the same rules. In practice, many deployers will need a “synthetic content checklist” embedded into publishing workflows.
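One way to embed such a checklist into a publishing workflow is to encode the triggers as a small rule table. The sketch below is hypothetical: the categories, field names, and disclosure tiers are invented for illustration, and the actual triggers come from Article 50 and the final code of practice.

```python
# Hypothetical sketch of a deployer-side "synthetic content checklist".
# All categories and rules here are illustrative placeholders, not a
# reading of the law.
from dataclasses import dataclass

@dataclass
class ContentItem:
    is_synthetic: bool
    is_deepfake: bool       # realistic depiction of a real person/event
    public_interest: bool   # e.g. news or political communication
    artistic_context: bool  # satire/art may allow lighter disclosure

def disclosure_required(item: ContentItem) -> str:
    """Return 'none', 'minimal', or 'prominent' disclosure for an item."""
    if not item.is_synthetic:
        return "none"
    if item.is_deepfake or item.public_interest:
        # Artistic/satirical contexts may qualify for less intrusive labels.
        return "minimal" if item.artistic_context else "prominent"
    return "minimal"
```

The value of encoding the checklist is less the logic itself than the audit trail: every publishing decision can be logged against an explicit, versioned rule set.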
6) Standardizing labels without ruining creativity or speech
Media-policy analysis notes that the Commission’s December 2025 draft aims to standardize “clear labeling and marking” of synthetic media ahead of AI Act enforcement. It also discusses the idea of “minimal/non-intrusive” disclosure in artistic or satirical contexts, an attempt to balance transparency with legitimate creative expression. Source: https://www.techpolicy.press/what-the-eus-new-ai-code-of-practice-means-for-labeling-deepfakes
This is an important design tension. If labels are too subtle, they fail their purpose. If labels are too prominent or rigid, they can distort artistic works, stigmatize benign uses, or push creators to avoid compliance. Good policy (and good product design) tries to preserve user understanding while minimizing disruption.
That is why many transparency frameworks implicitly rely on a layered approach: machine-readable marking for ecosystems and detection tools, plus human-readable labeling when the audience needs to know, particularly for deepfakes or public-interest communications where deception risk is high.
7) Beyond Europe: China’s labeling measures show a global direction of travel
China has also moved to formalize labeling. Government “Measures for the Labeling of Artificial Intelligence Generated and Synthesized Content” were co-issued on 7 March 2025 and, according to legal commentary, take effect on 1 September 2025. The measures are described as requiring AI-generated/synthetic content “logos,” requiring users to proactively declare AI-generated/synthetic content when posting, and requiring network content distribution service providers to regulate distribution of synthetic generation activities. Source: https://mmlcgroup.com/china-ai-determination/
China’s direction builds on earlier technical guidance. A summary of an August 2023 TC260 practical guideline describes tagging duties across media types, stating that providers must add tags on generated images/videos/other content consistent with China’s broader deep synthesis governance. Source: https://www.insideprivacy.com/artificial-intelligence/labeling-of-ai-generated-content-new-guidelines-released-in-china/
The takeaway for international companies is not that the EU and China are identical (they are not), but that labeling is becoming a common compliance expectation. Product teams may need region-aware labeling modes, evidence logs, and policy controls that can adapt to different legal triggers (provider vs user declarations, platform duties, and enforcement models).
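A region-aware labeling mode can be as simple as a per-jurisdiction control table with a strict default for unmapped regions. The keys and values below are illustrative placeholders, not legal advice, and a real system would source them from counsel-reviewed policy.

```python
# Illustrative sketch of region-aware labeling controls. Jurisdiction
# entries and field names are invented placeholders for the example.
REGION_POLICIES = {
    "EU": {"machine_readable_mark": True, "deployer_disclosure": True,
           "user_declaration": False},
    "CN": {"machine_readable_mark": True, "deployer_disclosure": True,
           "user_declaration": True},   # users must declare when posting
}

def labeling_controls(region: str) -> dict:
    """Look up labeling controls for a region; unknown regions get the
    strictest combination, so missing policy data fails safe."""
    strictest = {"machine_readable_mark": True, "deployer_disclosure": True,
                 "user_declaration": True}
    return REGION_POLICIES.get(region, strictest)
```

Defaulting unknown regions to the strictest combination is a common fail-safe pattern: an incomplete policy table then over-labels rather than under-labels.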
8) Industry implementation: from “Generated by AI” UX to machine-readable markers
Regulation is pushing transparency from a “nice-to-have” to a standard interface and infrastructure feature. One practical example is the “Generative AI output label” UI guidance (“Generated by AI”), published 8 April 2024, which proposes concrete interface patterns to make AI-produced outputs explicit and improve user trust. Source: https://cloudscape.design/patterns/genai/output-label/
But UI labels alone are unlikely to satisfy requirements like Article 50’s machine-readable marking and detectability. Many organizations will need both: a user-visible indicator in relevant contexts and a technical marker that can be detected by automated tools, even after content travels across platforms.
In parallel, the EU is already experimenting with “code” instruments to guide compliance. A voluntary General‑Purpose AI Code of Practice was released on 10 July 2025 to help providers align with AI Act requirements across areas like transparency, copyright, and safety, and news coverage emphasized the phased AI Act implementation context. Sources: https://digital-strategy.ec.europa.eu/en/policies/contents-code-gpai and https://apnews.com/article/a3df6a1a8789eea7fcd17bffc750e291
New code and new law are converging on the same message: if you generate synthetic content, you should expect to label it, and to do so in ways that humans can understand and machines can verify. The EU AI Act’s Article 50 sets the binding obligation and the enforcement date (2 August 2026), while the Commission’s dedicated code of practice is designed to standardize how the ecosystem actually implements marking and disclosure.
For providers and deployers, the practical task now is to treat labeling as a system capability: implement machine-readable marking for outputs across text, audio, images, and video; design clear user-facing disclosures for deepfakes and public-interest contexts; and build governance so transparency holds up when content is edited, reposted, and remixed. The organizations that start early will have the easiest time turning compliance into credibility.