Policy · 6 min read · Credibility 91/100

European Parliament adopts AI Act negotiation mandate

The European Parliament adopted its AI Act negotiating position in June 2023 after committee work, strengthening foundation model provisions and defining prohibited practices. The vote cleared the way for trilogue negotiations with the Council.

Fact-checked and reviewed — Kodi C.


The European Parliament's plenary vote on 14 June 2023 finalized the chamber's negotiating mandate for the AI Act. MEPs backed rules requiring foundation and generative AI providers to disclose training data summaries, respect copyright opt-outs, and label AI-generated content. The text also tightened limits on biometric identification and on emotion recognition in workplaces and schools, and expanded the list of prohibited practices.

Key Parliament amendments

  • Foundation model obligations: Providers must conduct model evaluations, implement risk mitigation measures, register in an EU database, and provide transparency documentation before market placement.
  • Generative AI transparency: Systems generating synthetic content must disclose AI involvement. Training data summaries must identify copyrighted materials used, with mechanisms for rights holder opt-out.
  • Biometric restrictions: Parliament sought broader bans on real-time remote biometric identification in public spaces, predictive policing, and emotion recognition in workplaces and educational settings.
  • High-risk classification: Expanded categories include AI systems affecting access to education, employment, essential services, and critical infrastructure.

Trilogue process

With the mandate adopted, rapporteurs opened trilogue negotiations with the Council, which had adopted its general approach in December 2022. Key differences include the scope of biometric restrictions, foundation model definitions, and copyright-related transparency requirements. The final text will emerge from these interinstitutional negotiations.

Compliance preparation

AI vendors and deployers targeting the EU must track how Parliament's transparency and copyright amendments could influence future conformity assessments, technical documentation, and downstream user obligations. If you are affected, inventory AI systems, assess classification under the risk-based framework, and prepare technical documentation processes.

Source material

Parliament Position Overview

The European Parliament adopted its negotiating position on the Artificial Intelligence Act on June 14, 2023, with overwhelming support of 499 votes in favor, 28 against, and 93 abstentions. The Parliament's amendments significantly expanded the scope and stringency of the Commission's original proposal, reflecting parliamentary concerns about AI risks while seeking to preserve innovation opportunities.

Key expansions include broader definitions of AI systems, expanded lists of prohibited practices, stricter requirements for high-risk systems, and new obligations for general-purpose AI including foundation models. The Parliament position served as the basis for trilogue negotiations with the Council and Commission to finalize the regulation.

Prohibited AI Practices

The Parliament significantly expanded prohibited AI practices beyond the Commission's original proposal. Banned applications include real-time remote biometric identification in public spaces for law enforcement (with limited exceptions), social scoring systems, predictive policing based on profiling, emotion recognition in law enforcement and education, and indiscriminate scraping of facial images for biometric databases.

Subliminal manipulation techniques and exploitation of vulnerabilities remain prohibited, with clarifications on scope addressing concerns about marketing applications. The Parliament added prohibitions on AI systems categorizing individuals based on biometric data to infer sensitive characteristics such as race, political opinions, or sexual orientation.

High-Risk AI Requirements

High-risk AI systems face full requirements including risk management systems, data governance standards, technical documentation, record-keeping, transparency, human oversight, accuracy, and robustness. The Parliament strengthened requirements in several areas, particularly around fundamental rights impact assessments and public transparency.

Registration in an EU database extends to high-risk systems in employment, essential services, and other domains affecting fundamental rights. Public authorities using high-risk AI must register systems and publish impact assessments, increasing accountability for government AI deployments.

Foundation Models and General-Purpose AI

The Parliament introduced extensive new requirements for foundation models, recognizing their unique characteristics and potential for widespread downstream impact. Providers must implement risk identification and mitigation measures, ensure training data quality and copyright compliance, and document model capabilities and limitations.

Generative AI systems including large language models face additional transparency requirements, including disclosure of AI-generated content, safeguards against generating illegal content, and publication of summaries of copyrighted training data. These requirements address concerns about misinformation, intellectual property, and deceptive content.

Enforcement and Penalties

The Parliament strengthened enforcement mechanisms, establishing an AI Office at EU level to coordinate enforcement and monitor foundation model compliance. Penalties for violations scale with company turnover, with maximum fines for prohibited AI practices reaching 40 million euros or 7% of global annual turnover, whichever is higher.
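The turnover-scaled ceiling can be illustrated with a small calculation. This is a sketch of the penalty structure as described above (40 million euros or 7% of global annual turnover for prohibited practices, read as "whichever is higher" in line with standard EU penalty design); the function name and figures used in the examples are illustrative, not drawn from the legal text.

```python
def max_fine_eur(global_annual_turnover_eur: float) -> float:
    """Ceiling for prohibited-practice fines under the Parliament position:
    the higher of EUR 40 million or 7% of global annual turnover."""
    FLAT_CAP = 40_000_000
    TURNOVER_SHARE = 0.07
    return max(FLAT_CAP, TURNOVER_SHARE * global_annual_turnover_eur)

# A firm with EUR 2 billion turnover: 7% (EUR 140 million) exceeds the flat cap.
print(max_fine_eur(2_000_000_000))  # 140000000.0
# A firm with EUR 100 million turnover falls back to the flat cap.
print(max_fine_eur(100_000_000))    # 40000000
```

The two-pronged structure means the flat cap binds only for smaller companies; for any firm with turnover above roughly 571 million euros, the 7% prong sets the ceiling.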

Market surveillance authorities in member states retain primary enforcement responsibility for most requirements, with coordination through the AI Office and AI Board. Complaints mechanisms enable individuals and civil society organizations to report potential violations and seek remedies.

Innovation Support

The Parliament maintained regulatory sandboxes for AI innovation while expanding their scope and accessibility. Small and medium enterprises receive preferential access to sandboxes and may benefit from reduced compliance burdens for certain requirements. Support measures help startups and researchers navigate regulatory requirements without excessive administrative burden.

Wrapping up

The Parliament's negotiating position represented the most ambitious AI regulatory framework globally, setting parameters for trilogue negotiations that concluded in December 2023. Organizations developing or deploying AI in European markets should assess their systems against the framework's requirements and prepare compliance programs addressing transparency, risk management, and documentation obligations.

Foundation Model Requirements

The Parliament's position introduced thorough requirements for general-purpose AI and foundation models. Providers must document training methodologies, data sources, and evaluation results before market release. Systemic risk assessments become mandatory for models exceeding computational thresholds, triggering additional obligations for adversarial testing and incident reporting to the European AI Office.

Organizations developing or deploying foundation models should establish documentation practices capturing model capabilities, limitations, and intended use cases. Integration with existing risk management frameworks enables coherent governance across AI development lifecycle stages.
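A minimal documentation record of the kind described above might capture capabilities, limitations, and intended use cases alongside training and evaluation details. This is an illustrative sketch; the field names are assumptions for the example, not terms mandated by the Parliament text.

```python
from dataclasses import dataclass, asdict

@dataclass
class FoundationModelRecord:
    """Illustrative documentation record; field names are assumptions,
    not terminology from the Parliament text."""
    model_name: str
    training_data_sources: list[str]
    evaluation_results: dict[str, float]
    capabilities: list[str]
    limitations: list[str]
    intended_use_cases: list[str]

record = FoundationModelRecord(
    model_name="example-lm-7b",  # hypothetical model
    training_data_sources=["licensed corpus", "public web crawl (opt-outs honored)"],
    evaluation_results={"toxicity_rate": 0.02},
    capabilities=["text summarisation"],
    limitations=["no reliable factual grounding"],
    intended_use_cases=["internal drafting assistance"],
)

# A serialisable dict integrates with existing risk-management tooling.
print(asdict(record)["model_name"])
```

Keeping such records as structured data rather than prose makes it easier to feed the same information into conformity assessment files and downstream user documentation.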

Compliance Preparation Strategies

Early compliance preparation reduces regulatory uncertainty and accelerates market access. Organizations should conduct AI system inventories, classify systems by risk tier, and identify gaps requiring remediation. Legal and technical teams must collaborate on conformity assessment procedures and documentation requirements aligned with harmonized standards development.
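The inventory-and-classify step can be sketched as a simple triage over an AI system register. This is a deliberate simplification, not legal advice: the category sets below are drawn from the practices and high-risk domains listed in this briefing, and the inventory entries are hypothetical.

```python
# Hypothetical triage over an AI system inventory. The category sets are a
# simplification of this briefing's lists, not the legal text.
PROHIBITED_USES = {
    "social scoring",
    "predictive policing",
    "workplace emotion recognition",
}
HIGH_RISK_DOMAINS = {
    "education",
    "employment",
    "essential services",
    "critical infrastructure",
}

def classify(use_case: str, domain: str) -> str:
    """Assign a coarse risk tier: prohibited use first, then high-risk domain."""
    if use_case in PROHIBITED_USES:
        return "prohibited"
    if domain in HIGH_RISK_DOMAINS:
        return "high-risk"
    return "limited/minimal risk"

inventory = [
    ("cv screening", "employment"),
    ("chat assistant", "customer support"),
    ("social scoring", "public sector"),
]
for use_case, domain in inventory:
    print(f"{use_case}: {classify(use_case, domain)}")
```

Even a coarse triage like this surfaces which systems need full conformity-assessment workstreams and which only require transparency measures, so remediation effort can be prioritised early.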


Source material

  1. European Parliament AI Act Position — europarl.europa.eu
  2. Regulation (EU) 2024/1689 - EU AI Act — eur-lex.europa.eu
  3. ISO/IEC 42001:2023 AI Management Systems — iso.org
