EU AI Act
The EU AI Office published guidance on AI system labeling and codes of practice in November 2025. GPAI providers need clear documentation and transparency about model capabilities, limitations, and intended uses. This is the practical implementation of the AI Act's transparency requirements.
In November 2025, the European Commission launched the AI Office’s process to draft a code of practice for marking and labelling AI-generated content. The seven-month effort (November 2025–May 2026) will translate Article 50 transparency duties into machine-readable provenance for providers and disclosure playbooks for deployers, with enforcement expected alongside the EU AI Act’s transparency obligations in August 2026. This brief recommends building a dedicated labelling workstream that connects provenance tooling, trust-and-safety review, and Article 53 documentation so European customers can evidence compliance. Stakeholders can track progress through the AI pillar hub, the AI governance guide, and recent briefs on EU AI Act systemic risk mitigation and Colorado readiness to coordinate multi-jurisdiction transparency.
Methodology and context
The Commission’s press release and policy explainer outline core expectations:
- Machine-readable labelling. Providers of AI systems that generate audio, image, video, and text must embed strong, interoperable signals—such as watermarking or metadata—so synthetic content is detectable. Deployers must preserve these signals across the content supply chain.
- Deepfake disclosures. When AI-generated or manipulated content resembles real people or events, deployers must visibly disclose the artificial nature of the media to end users.
- Coordination with the EU AI Act. Article 50 transparency duties integrate with broader EU AI Act obligations, requiring alignment with risk management and post-market monitoring under Articles 9 and 72, plus the related documentation duties.
The approach mirrors the AI Office’s iterative consultation: we map provenance dependencies across models and rendering services, design disclosure patterns per channel, test signal durability, and prepare documentation that can slot into Article 53 technical files. Each iteration is shared with customers so their legal, product, and engineering teams can adjust policies ahead of the August 2026 enforcement window.
Stakeholder impacts
- Product and content teams. Must incorporate visible labels for synthetic media in UI components, including multi-language notices for EU audiences.
- Engineering. Needs to implement and verify watermarking or metadata propagation across the content pipeline, including third-party providers.
- Trust & safety. Operates detection and escalation when labelling signals fail or deepfake content is contested.
- Legal and compliance. Maintains Article 50 interpretations, records labelling configurations, and assembles Article 53 documentation that references the code of practice.
Control setup mapping
| Requirement | Implementation | Evidence |
|---|---|---|
| Machine-readable provenance | Embed watermarking/metadata at model output, verify integrity after post-processing and CDN delivery. | Hash comparisons, metadata inspection logs, test cases across formats. |
| Visible deepfake disclosure | UI labels and captions for AI-manipulated media; bilingual (EN + local EU language) templates. | Screenshot archive with alt text, localization checklists, accessibility audits. |
| Supply chain coverage | Contractual clauses requiring third-party tools to preserve provenance; validation hooks in rendering pipeline. | Vendor attestations, integration test reports, exception tracking. |
| Documentation and monitoring | Article 53 technical file updates tying provenance controls to risk management and post-market monitoring. | Versioned technical file sections, monitoring dashboards, incident playbooks. |
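The "machine-readable provenance" row above can be sketched as a verification step: fingerprint the embedded metadata at model-output time, then recompute and compare after post-processing and CDN delivery. The function and field names here are illustrative assumptions, not a specific watermarking API.

```python
import hashlib
import json

def fingerprint(metadata: dict) -> str:
    """Stable SHA-256 over a canonical JSON encoding of the provenance fields."""
    canonical = json.dumps(metadata, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

def verify_provenance(original: dict, delivered: dict) -> dict:
    """Compare provenance metadata before and after delivery.

    Returns an inspection-log entry of the kind listed in the evidence column.
    """
    intact = fingerprint(original) == fingerprint(delivered)
    return {
        "asset_id": delivered.get("asset_id", "unknown"),
        "signal_intact": intact,
        "expected_hash": fingerprint(original),
        "observed_hash": fingerprint(delivered),
    }

# Example: a post-processing step that drops the model-version field breaks the signal.
before = {"asset_id": "img-001", "generator": "model-x", "model_version": "1.2"}
after = {"asset_id": "img-001", "generator": "model-x"}  # field lost in transit
log_entry = verify_provenance(before, after)
print(log_entry["signal_intact"])  # False: metadata was not preserved
```

Logging both hashes, not just the boolean, gives auditors the hash comparisons the evidence column calls for.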
Monitoring and response focus
[Model output + watermark] -> [Metadata verification] -> [UI label/deepfake banner]
            |                          |                            |
            v                          v                            v
[Content moderation] -------> [Anomaly detection] ------> [Incident response & updates]
- Signal durability testing. Run regression tests across formats (JPEG/PNG/MP4/WAV) to ensure watermark or metadata persists after compression, resizing, or platform-specific processing.
- Moderation routing. Integrate provenance checks into content moderation queues; anomalies trigger incidents within 24 hours, aligning with the urgency expected for public-interest communications.
- Consultation tracking. Monitor AI Office working-group drafts and update configuration-as-code repositories to reflect new fields or labelling standards, keeping teams synchronized during the iterative process through May 2026.
Priority actions
- Launch a cross-functional “labelling tiger team” spanning legal, engineering, product marketing, and customer success to own backlog prioritization and stakeholder updates.
- Publish deployer playbooks that clarify when deepfake exceptions apply, how to cite machine-readable provenance fields, and how to escalate contested disclosures across EU markets.
- Stage executive readouts after each AI Office plenary to align budget and staffing with upcoming code milestones and the August 2026 enforcement date.
- Coordinate with the AI incident response guide to ensure labelling failures feed post-market monitoring and remediation loops required by the EU AI Act.
Metrics and operating cadence
- Signal retention rate. Percentage of sampled assets where watermark/metadata remains intact after transformation and delivery.
- Disclosure coverage. Share of synthetic or manipulated assets that display visible labels in all supported languages and channels.
- Incident response timing. Median hours from anomaly detection to remediation and customer notification.
- Documentation freshness. Frequency of Article 53 updates reflecting code-of-practice drafts and production changes.
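The first two metrics above reduce to simple ratios over sampled assets. A minimal sketch, assuming per-asset boolean flags and a language list as the sampling output:

```python
def signal_retention_rate(samples: list[dict]) -> float:
    """Share of sampled assets whose watermark and metadata survived delivery."""
    if not samples:
        return 0.0
    intact = sum(1 for s in samples if s["watermark_ok"] and s["metadata_ok"])
    return intact / len(samples)

def disclosure_coverage(assets: list[dict], required_langs: set[str]) -> float:
    """Share of synthetic assets showing visible labels in all required languages."""
    synthetic = [a for a in assets if a["synthetic"]]
    if not synthetic:
        return 1.0  # nothing synthetic, nothing to disclose
    covered = sum(1 for a in synthetic if required_langs <= set(a["label_langs"]))
    return covered / len(synthetic)

samples = [
    {"watermark_ok": True, "metadata_ok": True},
    {"watermark_ok": True, "metadata_ok": False},
]
print(signal_retention_rate(samples))  # 0.5
```

Tracking these as trends per format and per channel, rather than single point values, makes regressions after pipeline changes visible.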
Regular reviews of these metrics keep the labelling backlog prioritized and show continuous alignment with the AI Office consultation.
Actionable checklist
- Inventory all generative outputs (text, image, video, audio) and map where watermarking or metadata insertion occurs; validate propagation across vendors.
- Implement bilingual visible labels for synthetic media, with accessibility testing and screenshot evidence stored for audits.
- Configure moderation queues to verify provenance signals and open incidents when anomalies are detected; include 24-hour response expectations.
- Update Article 53 technical files with provenance architecture, labelling templates, and monitoring plans connected to Article 50 duties.
- Track AI Office consultation updates and refresh playbooks monthly so customers always operate on the latest code-of-practice drafts.
Grounding labelling controls in the Commission’s November 2025 launch materials and the EU AI Act transparency timeline positions customers to show trustworthy, traceable AI-generated content before the August 2026 obligations arrive.
Channel-specific labelling patterns
Different content formats require tailored disclosure without losing the machine-readable core:
- Images and video. Apply watermarking at render time, store metadata with creator, prompt, and model version fields, and display on-screen badges or captions noting AI generation in the viewer’s language.
- Audio. Embed inaudible watermarks plus metadata tags; pair with on-screen or transcript labels for podcasts and voice assistants.
- Text. Include metadata tags in document headers or HTML, and place inline notices when text is mixed with human authorship, especially in public-interest communications.
Testing must confirm that downstream platforms (social networks, messaging clients, content delivery networks) preserve both the machine-readable signals and the visible disclosures.
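For the text channel, metadata tags in HTML headers can be generated and then rechecked after delivery. The `<meta>` tag names below are hypothetical, not an official Article 50 or code-of-practice schema:

```python
def provenance_meta_tags(generator: str, model_version: str, lang: str) -> str:
    """Render illustrative <meta> tags for a document <head>.

    Tag names are assumptions for this sketch; the code of practice may
    standardise different fields.
    """
    return (
        f'<meta name="ai-generated" content="true">\n'
        f'<meta name="ai-generator" content="{generator}">\n'
        f'<meta name="ai-model-version" content="{model_version}">\n'
        f'<meta name="ai-disclosure-lang" content="{lang}">'
    )

def tags_preserved(delivered_html: str) -> bool:
    """Downstream check: did the machine-readable marker survive delivery?"""
    return 'name="ai-generated"' in delivered_html

head = provenance_meta_tags("model-x", "1.2", "de")
print(tags_preserved(f"<head>{head}</head>"))  # True
```

The same generate-then-recheck pattern applies to image, video, and audio metadata; only the embedding mechanism differs.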
Governance and alignment
EU transparency duties should plug into existing AI governance routines. Recommendations:
- Policy updates. Amend content and marketing policies to require provenance checks before publication and to store evidence in Article 53 files.
- Training. Brief content, marketing, and customer success teams on the differences between generic labelling and deepfake-specific disclosures so exceptions are applied correctly.
- Board visibility. Provide quarterly updates on signal retention, incident counts, and consultation milestones, linking to the AI governance guide for broader context.
This governance rhythm ensures labelling remains a first-class control alongside risk management and post-market monitoring.
Evidence retention
Because the code of practice will inform Article 53 technical files, this brief defines an evidence backbone that stores provenance test results, screenshot archives with descriptive alt text, localization checklists, and incident timelines. Evidence is tagged by asset type, model version, and release date so customers can prove how labelling evolved alongside AI model changes.
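The tagging scheme can be sketched as a minimal in-memory store; record kinds and reference paths here are illustrative, and a real backbone would sit on durable, access-controlled storage.

```python
class EvidenceStore:
    """Sketch of the evidence backbone: records tagged by asset type,
    model version, and release date so labelling history is queryable."""

    def __init__(self):
        self._records = []

    def add(self, kind, asset_type, model_version, release_date, ref):
        self._records.append({
            "kind": kind,              # e.g. test-result, screenshot, checklist
            "asset_type": asset_type,  # image / video / audio / text
            "model_version": model_version,
            "release_date": release_date,
            "ref": ref,                # pointer to the stored artefact
        })

    def for_model(self, model_version):
        """All evidence tied to one model version, e.g. for an audit pull."""
        return [r for r in self._records if r["model_version"] == model_version]

store = EvidenceStore()
store.add("test-result", "image", "1.2", "2025-11-20", "evidence/img-run-7")
store.add("screenshot", "video", "1.3", "2026-01-10", "evidence/vid-ui-2")
print(len(store.for_model("1.2")))  # 1
```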
Resourcing
Assign clear owners for provenance testing, localization, and moderation. For smaller teams, rotate responsibilities weekly but keep decision logs centralized so Article 50 interpretations stay consistent even as code-of-practice drafts evolve.
Change management
Every labelling update should pass through change control with rollback plans so provenance signals remain stable during rapid consultation iterations.
Documentation
- Commission launches work on a code of practice on marking and labelling AI-generated content — digital-strategy.ec.europa.eu
- Code of Practice on transparency of AI-generated content — digital-strategy.ec.europa.eu
- ISO/IEC 42001:2023 — Artificial Intelligence Management System — International Organization for Standardization