The European Union is moving from broad principles about “transparent AI” to concrete, operational rules that affect how AI-generated media is produced and distributed. A key shift is that content generated or manipulated by AI, whether text, images, audio, or video, will increasingly need to carry signals that make its origin detectable.
This direction is anchored in the EU AI Act, where Article 50 sets transparency obligations for both providers and professional deployers. In parallel, the European Commission, through the EU AI Office, is shaping a voluntary Code of Practice meant to guide the market toward practical, interoperable watermarking and labeling approaches ahead of the law's application date.
1) What the EU AI Act actually requires under Article 50
Under Article 50 of the AI Act, as elaborated through the EU AI Office process, providers must ensure that certain AI outputs are "marked in a machine-readable format" and are detectable as AI-generated or AI-manipulated. The obligation is broad in media scope, explicitly spanning audio, image, video, and text.
These requirements are not just about visible labels. The emphasis on “machine-readable” marking signals a technical compliance layer: platforms, detectors, or downstream tools should be able to identify AI origin signals at scale, even when content is shared across services.
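The Act does not prescribe a particular format, but the idea of a machine-readable, tamper-evident origin signal can be sketched as a signed metadata record that any downstream tool sharing the scheme could verify at scale. The schema, field names, and signing key below are hypothetical, not taken from any EU specification:

```python
import hashlib
import hmac
import json

SECRET_KEY = b"provider-signing-key"  # hypothetical provider key

def mark_content(content: bytes) -> dict:
    """Attach a machine-readable, tamper-evident AI-origin record (illustrative schema)."""
    record = {
        "ai_generated": True,  # the core Article 50-style origin signal
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "generator": "example-model-v1",  # hypothetical identifier
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["signature"] = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return record

def verify_mark(content: bytes, record: dict) -> bool:
    """Check that the marking is intact and still bound to this exact content."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    payload = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SECRET_KEY, payload, hashlib.sha256).hexdigest()
    return (hmac.compare_digest(expected, record["signature"])
            and unsigned["content_sha256"] == hashlib.sha256(content).hexdigest())

img = b"synthetic image bytes"
mark = mark_content(img)
assert verify_mark(img, mark)            # detectable by any tool sharing the scheme
assert not verify_mark(b"edited", mark)  # edits break the binding to the content
```

Real-world provenance standards such as C2PA are considerably richer (signed manifests, certificate chains, edit histories), but the core detection idea is the same: a structured record, cryptographically bound to the content it describes.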
The AI Act also differentiates responsibilities: providers who build or offer the systems must implement output marking, while deployers, those using the systems professionally in publishing contexts, must provide disclosures in specific situations. This dual structure aims to place accountability both at creation and at distribution.
2) The Commission’s Code of Practice: voluntary now, legally relevant later
The European Commission has announced that an upcoming voluntary Code of Practice will “support the marking of AI-generated content… in machine-readable formats to enable detection.” This is intended to help industry align on approaches before Article 50 obligations apply.
The Commission has also confirmed the timeline: transparency rules for AI-generated content become applicable on 2 Aug 2026. In other words, the Code is not the law, but it is designed to reduce fragmentation and accelerate adoption before enforcement becomes real.
This mirrors a broader EU strategy already seen elsewhere in the AI Act ecosystem: create a voluntary instrument that helps organizations converge on best practices, then rely on that convergence to make compliance more feasible when the binding requirements begin.
3) Second draft (05 Mar 2026): open standards, EU icon, and a two-layer marking approach
According to the European Commission library entry dated 05 Mar 2026, the second draft Code of Practice promotes “open standards” and the use of an EU icon. It also proposes a “two-layered marking approach involving secured metadata and watermarking,” with optional measures such as fingerprinting and logging for providers under Article 50(2).
This two-layer model is important because it acknowledges real-world threat models: metadata can be stripped, while watermarks can sometimes be degraded. Using both secured metadata and watermarking aims to make removal or denial harder and detection more resilient across editing pipelines.
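A toy sketch of the second layer shows why the pairing matters. The bit pattern, pixel values, and least-significant-bit (LSB) scheme below are purely illustrative; production watermarks are far more robust than LSB embedding, but the failure modes line up the same way:

```python
# Toy in-signal watermark (layer 2 of the two-layer model). The bit pattern
# and LSB scheme are hypothetical illustrations, not a real deployed system.
WATERMARK_BITS = [1, 0, 1, 1, 0, 0, 1, 0]  # hypothetical provider pattern

def embed_watermark(pixels: list[int]) -> list[int]:
    """Write the pattern into the least significant bits of the first pixels."""
    out = pixels.copy()
    for i, bit in enumerate(WATERMARK_BITS):
        out[i] = (out[i] & ~1) | bit
    return out

def detect_watermark(pixels: list[int]) -> bool:
    return [p & 1 for p in pixels[: len(WATERMARK_BITS)]] == WATERMARK_BITS

image = [200, 13, 55, 254, 90, 17, 66, 128, 33]
marked = embed_watermark(image)

# Metadata stripping does not touch pixel data: layer 2 still detects.
assert detect_watermark(marked)

# But a lossy transformation (here, crude re-quantization) destroys the LSBs,
# which is why layer 1 (secured metadata) and robustness requirements matter.
requantized = [(p >> 1) << 1 for p in marked]
assert not detect_watermark(requantized)
```

Each layer covers the other's weakness: the watermark survives metadata stripping, and the secured metadata survives signal transformations that degrade the watermark.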
At the same time, research has flagged that combining provenance metadata and invisible watermarking is not always frictionless. A March 2026 arXiv paper describing an “Integrity Clash” highlights potential contradictions between C2PA provenance manifests and invisible watermarks, a reminder that “more signals” can introduce implementation conflicts unless standards are carefully aligned.
4) Providers’ duties: effectiveness, interoperability, robustness, and reliability
The Commission’s policy page (updated 05 Mar 2026) summarizes Working Group 1 (Providers) by reiterating that outputs must be marked in a machine-readable format. It also states that solutions should be “effective, interoperable, robust, and reliable,” as far as technically feasible.
Those adjectives matter because they translate legal requirements into engineering constraints. “Interoperable” suggests cross-platform detectability; “robust” suggests resistance to common transformations (compression, cropping, paraphrasing, re-encoding); and “reliable” implies measurable performance with low false positives/negatives.
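Those criteria can be made concrete with ordinary detector metrics. A minimal sketch, using hypothetical detector scores, of how "reliable" translates into measurable true/false positive rates:

```python
def detection_rates(scores, labels, threshold):
    """True/false positive rates for a watermark detector (illustrative)."""
    tp = sum(1 for s, y in zip(scores, labels) if s >= threshold and y)
    fp = sum(1 for s, y in zip(scores, labels) if s >= threshold and not y)
    pos = sum(labels)
    neg = len(labels) - pos
    return tp / pos, fp / neg

# Hypothetical detector scores on watermarked (True) and clean (False) samples.
scores = [0.91, 0.88, 0.95, 0.40, 0.12, 0.30, 0.85, 0.05]
labels = [True, True, True, True, False, False, True, False]

tpr, fpr = detection_rates(scores, labels, threshold=0.5)
assert (tpr, fpr) == (0.8, 0.0)  # one watermarked sample missed, no false alarms
```

Benchmarks built on metrics like these are what turn "reliable" from a legal adjective into a pass/fail engineering target, e.g. a minimum true positive rate at a capped false positive rate after standard transformations.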
Academic work has tried to map these legal terms to technical evaluation. The November 2025 arXiv paper “Watermarking Large Language Models in Europe” interprets the AI Act’s marking requirements and links them to watermark assessment criteria, pushing the conversation toward testable benchmarks rather than vague transparency claims.
5) Deployers’ duties: deepfake disclosure and certain public-interest AI text
Article 50 is not only about providers. The Commission’s policy page (updated 05 Mar 2026) also summarizes Working Group 2 (Deployers): deployers must disclose deepfakes, and must disclose AI-generated or manipulated text publications on matters of public interest unless human review or editorial responsibility applies.
EU legal text supports that direction. Recitals of Regulation (EU) 2024/1689, as published on EUR-Lex, state that deployers using AI to generate or manipulate deepfakes should "clearly and distinguishably disclose" the artificial origin via labeling, and they outline a similar disclosure expectation for AI-generated or AI-manipulated text that informs the public, subject to the human-review/editorial-control carve-out.
A 2025 European Parliamentary Research Service (EPRS) briefing also summarizes Article 50(4): deployers of AI systems generating or manipulating deepfake image, audio, or video content "shall disclose" the artificial nature, while noting carve-outs (including satirical and fictional contexts, where the obligation can be limited to disclosure rather than broader restrictions). The EU's approach here targets the highest-risk informational contexts, not every casual use of AI.
6) Draft evolution: scope refinements and feedback timeline
The Commission published a first draft Code of Practice on 17 Dec 2025 as a foundation for refinement, covering Article 50(2) and 50(4) scope (providers and deployers). Commission news about that first draft emphasized the same core obligations: providers marking AI-generated/manipulated content in machine-readable form, and professional deployers labeling deepfakes and certain AI-text publications on public-interest matters.
By 05 Mar 2026, the second draft introduced notable adjustments. Section 2 focuses on deployers’ labeling of deepfakes and certain AI-generated text on matters of public interest (Article 50(4)), and the draft removed a taxonomy that separated “AI-generated” from “AI-assisted” content, suggesting the Commission may be trying to avoid definitions that are hard to audit in practice.
The Commission also confirmed procedural details: feedback on the second draft is open until 30 Mar 2026, while the underlying transparency rules become applicable on 2 Aug 2026. That leaves a limited window for stakeholders to influence details like technical specifications, icons, thresholds, and how “as far as technically feasible” is interpreted.
7) Industry and research caution: side-effects, edge cases, and conflicting signals
Some industry voices are urging careful tailoring. In a submission dated 18 Feb 2026, BSA warned that labeling/watermarking requirements could create technical side-effects, citing examples like code or compilation workflows where embedded signals might break expected behavior or introduce unwanted artifacts.
Research also highlights that technical solutions can collide. Beyond the “Integrity Clash” discussion about C2PA manifests versus invisible watermarks, many watermarking systems face trade-offs between robustness and quality, or between detectability and resistance to removal, especially when content is repeatedly transformed across platforms.
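The robustness-versus-quality trade-off can be seen even in a toy additive (spread-spectrum-style) watermark: a stronger mark survives noise better but distorts the carrier more. The spreading pattern, strengths, and noise level below are all hypothetical:

```python
# Illustrative robustness/quality trade-off for an additive watermark.
# All parameters (pattern, strengths, noise level) are hypothetical.
import random

random.seed(0)
N = 2048
pattern = [random.choice((-1.0, 1.0)) for _ in range(N)]  # detector's secret key

def embed(signal, strength):
    """Add a scaled copy of the pattern; distortion grows with strength."""
    return [s + strength * p for s, p in zip(signal, pattern)]

def correlate(signal):
    """Detection score: high when the pattern is present in the signal."""
    return sum(s * p for s, p in zip(signal, pattern)) / N

signal = [random.gauss(0, 1) for _ in range(N)]   # stand-in for media samples
noise = [random.gauss(0, 0.5) for _ in range(N)]  # stand-in for re-encoding, etc.

scores = {}
for strength in (0.05, 0.5):
    attacked = [s + n for s, n in zip(embed(signal, strength), noise)]
    scores[strength] = correlate(attacked)

# A stronger mark yields a larger detection margin under the same attack,
# at the cost of larger per-sample distortion (proportional to strength**2).
assert scores[0.5] > scores[0.05]
```

Tuning that strength parameter is, in miniature, the tension the Commission's "robust" and "reliable" criteria have to resolve across every modality and transformation pipeline.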
These concerns do not negate Article 50; they underline why the Commission’s repeated emphasis on effectiveness, interoperability, robustness, and reliability is consequential. If the Code of Practice converges on open standards and testable evaluation methods, it could prevent a patchwork of proprietary, incompatible markings that are easy to defeat and hard to verify.
8) How this fits alongside the separate GPAI Code of Practice
It is easy to confuse the transparency code under Article 50 with other EU voluntary instruments. On 10 Jul 2025, AP News reported that the EU released a separate voluntary General-Purpose AI (GPAI) Code of Practice to help comply with the AI Act, with Commission messaging stressing models should be “safe and transparent.”
A European Commission press release PDF from the same date includes a direct quote from Executive Vice-President Henna Virkkunen calling the GPAI Code of Practice “an important step” toward “safe and transparent” advanced models. The Commission policy page on the GPAI Code explains that it is designed to help providers comply with AI Act obligations for general-purpose AI models.
This matters because the Article 50 transparency work is focused specifically on marking and labeling AI-generated content outputs, while the GPAI Code is aimed at broader model governance. Together, they point to a layered EU strategy: govern the model, govern the output, and govern the publishing context, each with different tools and responsibilities.
The EU is effectively forcing the AI content ecosystem toward provenance signals that can survive modern distribution: machine-readable markings for providers, and clear disclosures by professional deployers when deepfakes or public-interest AI text are involved. With the transparency rules applicable on 2 Aug 2026, the period for building interoperable implementations is now.
Whether the approach succeeds will hinge on practical details: open standards, measurable robustness, and coordination between metadata and watermarking to avoid contradictory integrity signals. If the Code of Practice can steer the market toward solutions that are both detectable and workable across real pipelines, watermarking may become less of a branding feature, and more of a baseline compliance layer for AI-generated media in Europe.