EU Parliament committees clear AI Act negotiation mandate
The European Parliament's committee mandate for AI Act negotiations, adopted in May 2023, set the parameters for trilogue talks. Foundation model provisions and prohibited practices were key issues.
Members of the European Parliament on the Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE) committees voted on 11 May 2023 to approve their compromise text for the AI Act. The amendments tightened prohibitions on biometric categorization and real-time remote biometric identification, while introducing obligations for providers of generative AI systems to disclose training data summaries and implement content provenance safeguards.
The committee vote authorized Parliament to open trilogue talks with the Council, signaling that foundation models and downstream deployers will face clearer transparency and risk-management duties. Organizations developing or integrating high-risk and general-purpose AI now need to track forthcoming technical standards and prepare for conformity assessments before EU market access.
Committee Compromise Positions
The joint IMCO-LIBE position represented a significant evolution from the Commission's original proposal. The committees added entirely new provisions addressing foundation models and generative AI systems, technologies not anticipated in the 2021 proposal. The amendments respond to the rapid commercial deployment of large language models and text-to-image generators.
Parliament's position strengthened prohibited practice definitions, expanding bans on AI systems manipulating human behavior or exploiting vulnerabilities. Real-time biometric identification in public spaces faces prohibition with narrow law enforcement exceptions, reflecting fundamental rights concerns raised throughout legislative deliberations.
Foundation Model Requirements
The committee text introduced obligations specifically targeting foundation model providers. Requirements include technical documentation of training processes, disclosure of training data sources, and measures preventing generation of illegal content. Providers must implement transparency mechanisms enabling downstream users to understand model capabilities and limitations.
Generative AI systems face particular attention, with requirements for AI-generated content disclosure, technical watermarking, and safeguards against the spread of deepfakes. The position anticipates concerns about synthetic media, disinformation, and intellectual property that emerged prominently following ChatGPT's November 2022 release.
High-Risk System Classification
Parliament's position refines high-risk system categories, clarifying which applications in areas like employment, education, credit, and public services trigger enhanced requirements. The amendments address definitional ambiguities that created uncertainty for developers assessing compliance obligations under the Commission proposal.
Classification criteria include intended use, severity of potential harm, and affected population vulnerability. Organizations must evaluate AI deployments against refined categories to determine applicable requirements including risk management systems, data governance practices, and human oversight mechanisms.
Conformity Assessment Framework
The position elaborates conformity assessment procedures for high-risk systems, specifying when self-assessment suffices versus when third-party certification is required. Assessment scope includes technical documentation review, quality management system evaluation, and testing of safety characteristics.
Notified bodies receive expanded authority to conduct assessments and issue certifications. The framework aligns with existing product safety legislation while addressing AI-specific characteristics including learning capability, opacity, and context-dependent behavior.
Trilogue Negotiation Dynamics
The committee mandate positioned Parliament for trilogue negotiations with the Council, which adopted its general approach in December 2022. Key negotiation areas include foundation model governance scope, biometric identification restrictions, and enforcement mechanisms. Parliament's stronger position on prohibitions and transparency faced Council preferences for flexibility.
The trilogue process enables compromise development through interinstitutional discussions outside public proceedings. Final text typically reflects negotiated positions between Parliament, Council, and Commission rather than any single institution's initial stance.
Implementation Preparation
Organizations likely to fall within scope should begin compliance preparation despite ongoing negotiations. Inventorying AI systems, documenting training data sources, establishing human oversight mechanisms, and developing incident reporting capabilities address requirements likely to survive trilogue compromise.
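The inventory step described above can be sketched as a simple internal record with automated gap-checking. This is a minimal illustration only: the field names, risk tiers, and gap rules here are assumptions for demonstration, not the Act's official schema or classification criteria, which depend on the final adopted text.

```python
from dataclasses import dataclass, field

# Illustrative risk tiers loosely following the AI Act's structure;
# the final categories depend on the adopted text.
RISK_TIERS = ("prohibited", "high", "limited", "minimal")

@dataclass
class AISystemRecord:
    """One entry in an internal AI-system inventory (hypothetical schema)."""
    name: str
    intended_use: str
    risk_tier: str
    training_data_sources: list = field(default_factory=list)
    human_oversight: bool = False
    incident_contact: str = ""

    def compliance_gaps(self) -> list:
        """Flag obvious documentation gaps, focusing on high-risk entries."""
        gaps = []
        if self.risk_tier not in RISK_TIERS:
            gaps.append("unknown risk tier")
        if self.risk_tier == "high":
            if not self.training_data_sources:
                gaps.append("missing training data documentation")
            if not self.human_oversight:
                gaps.append("no human oversight mechanism")
            if not self.incident_contact:
                gaps.append("no incident reporting contact")
        return gaps

# Example: an employment-related system, a high-risk category under
# both the Commission proposal and Parliament's position.
record = AISystemRecord(
    name="cv-screening-model",
    intended_use="employment candidate ranking",
    risk_tier="high",
)
print(record.compliance_gaps())
# → ['missing training data documentation', 'no human oversight mechanism',
#    'no incident reporting contact']
```

Even a lightweight register like this makes the later conformity-assessment work tractable, since the technical documentation and oversight evidence can be attached per record as standards firm up.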
- European Parliament press release outlines the committee mandate and new rules for foundation models.
- Adopted amendments provide article-level language on prohibited practices, transparency duties, and penalties.