European Parliament Committees Adopt AI Act Negotiating Mandate — May 11, 2023
European Parliament committees backed a strengthened AI Act text with bans on intrusive biometric surveillance, new obligations for generative AI, and expanded high-risk compliance requirements.
Executive briefing: European Parliament’s Internal Market (IMCO) and Civil Liberties (LIBE) committees adopted their compromise position on the Artificial Intelligence Act on 11 May 2023, clearing the way for a plenary vote and subsequent trilogue negotiations with the Council. The vote tightens restrictions on high-risk AI, expands prohibitions on biometric surveillance, and introduces obligations for general-purpose and generative AI systems, including transparency about training data and energy usage. Organisations building or deploying AI in the EU must assess how the revised text reshapes compliance roadmaps, product design, and governance structures ahead of final adoption.
The committees backed a risk-based framework that bans biometric categorisation using sensitive traits, indiscriminate facial recognition in public spaces, predictive policing based on profiling, and untargeted scraping of facial images for training. High-risk AI—including systems impacting critical infrastructure, employment, education, law enforcement, and essential services—must implement risk management, data governance, technical documentation, logging, human oversight, and robustness testing. Foundation model providers must publish summaries of training data, conduct systemic risk assessments, and implement safeguards against generating illegal content.
Capability and compliance implications
The Parliament position introduces several capability demands:
- Expanded scope for high-risk classification. The committees added AI systems influencing voter behaviour and recommender systems used by very large online platforms to the high-risk list, requiring conformity assessments and post-market monitoring.
- Stronger transparency. Providers must disclose AI-generated content, ensure users know when they interact with emotion recognition or biometric categorisation tools, and maintain logs accessible to regulators (see the disclosure sketch below).
- Generative AI obligations. Foundation and generative AI developers must design safeguards to prevent generation of illegal content, publish summaries of copyrighted training data, and implement energy-efficient training practices.
- Fundamental rights impact assessments. Deployers in high-risk domains must carry out impact assessments evaluating societal, environmental, and fundamental rights implications before putting systems into use.
These capabilities require cross-functional collaboration among legal, engineering, ethics, and operational teams to maintain compliance and public trust.
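To make the disclosure duty concrete, the sketch below attaches a machine-readable provenance record to generated content before it reaches a user. It is a minimal sketch: the field names, schema, and `model_id` value are illustrative assumptions, not a format mandated by the Act.

```python
import json
from datetime import datetime, timezone

def with_ai_disclosure(generated_text: str, model_id: str) -> dict:
    """Wrap model output with a machine-readable provenance record.

    The field names here are illustrative, not prescribed by the AI Act.
    """
    return {
        "content": generated_text,
        "disclosure": {
            "ai_generated": True,  # user-facing disclosure flag
            "model_id": model_id,  # which system produced the content
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }

# Example: attach the disclosure before rendering content to a user.
record = with_ai_disclosure("Draft response text", model_id="support-assistant-v2")
print(json.dumps(record, indent=2))
```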
Implementation roadmap
Organisations should start preparing regardless of the remaining legislative steps:
- AI inventory and classification. Catalogue AI systems, map their functions to the Act’s risk categories, and document intended purpose, training data provenance, and deployment context (see the inventory sketch after this list).
- Risk management frameworks. Implement lifecycle risk controls covering data quality, bias mitigation, robustness testing, and human oversight. Align with the NIST AI Risk Management Framework and the draft ISO/IEC 42001 standard to streamline compliance.
- Technical documentation and logging. Develop documentation templates capturing model architecture, performance metrics, dataset descriptions, and post-market monitoring plans. Ensure logging infrastructure records input data, model outputs, and decision rationales (see the logging sketch below).
- Fundamental rights impact assessments (FRIAs). For high-risk deployments, establish FRIA methodologies, stakeholder consultation processes, and remediation plans for identified harms.
- Generative AI guardrails. For foundation or generative systems, implement content filters, red-teaming, and provenance metadata (such as watermarking) to satisfy emerging transparency requirements.
- Supplier assurance. Update procurement questionnaires and contractual clauses so third-party AI providers supply conformity documentation, risk mitigations, and guarantees about training data provenance and IP rights.
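A starting point for the inventory step is a structured record per system. The sketch below is assumption-laden: the risk tiers mirror the Act’s categories, but the record fields, class names, and example system are hypothetical.

```python
from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"   # transparency obligations only
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    intended_purpose: str
    risk_tier: RiskTier
    training_data_provenance: str
    deployment_context: str
    documentation_complete: bool = False

inventory = [
    AISystemRecord(
        name="cv-screening",
        intended_purpose="Rank job applications",
        risk_tier=RiskTier.HIGH,  # employment is an Annex III domain
        training_data_provenance="Internal HR records, 2018-2022",
        deployment_context="EU-wide recruitment portal",
    ),
]

# High-risk systems drive the conformity assessment workload.
high_risk = [s for s in inventory if s.risk_tier is RiskTier.HIGH]
print(f"{len(high_risk)} high-risk system(s) requiring conformity assessment")
```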
Companies should integrate these steps into product development lifecycles and vendor management programmes, ensuring suppliers provide necessary documentation and evidence.
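For the logging step, an append-only JSON-lines record per decision keeps inputs, outputs, and rationale reviewable on request. This is a minimal sketch under assumed field names; the Act requires automatic logging for high-risk systems but does not prescribe this exact schema.

```python
import json
from datetime import datetime, timezone

def log_decision(log_path: str, system_id: str, inputs: dict,
                 output: str, rationale: str) -> None:
    """Append one decision record as a JSON line for regulator-accessible logs."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "inputs": inputs,        # input data presented to the model
        "output": output,        # model decision or score
        "rationale": rationale,  # human-readable decision rationale
    }
    with open(log_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(entry) + "\n")

# Example record for a hypothetical credit-scoring system.
log_decision("decisions.jsonl", "credit-scoring-v3",
             {"applicant_id": "A-1042"},
             output="declined", rationale="Debt-to-income above threshold")
```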
Responsible governance
The AI Act heightens governance expectations across the enterprise:
- Board oversight. Boards should receive regular updates on AI risk inventories, compliance progress, and regulatory engagement. Assign accountability for AI ethics to board committees or dedicated AI governance councils.
- Policy alignment. Update AI policies to reflect Parliament’s prohibitions, transparency obligations, and documentation requirements. Embed cross-border data transfer considerations and cybersecurity obligations.
- Accountability structures. Define roles for AI compliance officers, data stewards, and human oversight leads. Establish escalation paths for risk findings and regulator inquiries.
- Stakeholder engagement. Plan for public transparency reports, consultation with affected communities, and collaboration with worker councils where AI impacts employment decisions.
Governance frameworks should integrate with GDPR, DSA, DMA, and sector-specific regulations to create a coherent compliance narrative.
Legislative timeline and enforcement outlook
Following the committee vote, Parliament will debate the text in plenary before entering trilogues with the Council and Commission. Under the staggered timelines under discussion, prohibitions would apply six months after entry into force, high-risk obligations after 24 months, and general-purpose AI duties on an accelerated schedule, so organisations must sequence readiness over a multi-year period.
Enforcement powers include fines of up to €40 million or 7% of global annual turnover, whichever is higher, for prohibited practices, and national supervisory authorities will coordinate through a new European AI Board to ensure consistent oversight.
Sector playbooks
- Technology platforms. Very large online platforms must extend content moderation governance to AI recommenders, implement FRIA processes, and ensure generative tools comply with copyright transparency obligations.
- Financial services. High-risk credit scoring and anti-fraud systems require rigorous data governance, stress testing, and human-in-the-loop controls, aligning AI Act requirements with EBA, ESMA, and ECB guidelines.
- Healthcare. Medical AI classified as high-risk should align with MDR/IVDR processes, integrate post-market surveillance, and document clinical validation and human oversight pathways.
- Public sector. Law enforcement and public administration must phase out prohibited biometric surveillance practices, adopt FRIA frameworks, and ensure procurement contracts require AI Act compliance.
Measurement and reporting
Prepare quantitative and qualitative metrics that evidence compliance:
- AI inventory coverage. Percentage of AI systems classified by risk tier, with documentation completeness scores.
- FRIA completion and remediation. Number of FRIAs conducted, remediation actions executed, and stakeholder feedback incorporated.
- Model performance monitoring. Drift detection, bias metrics across protected groups, and robustness test outcomes tracked over time (see the parity-gap sketch after this list).
- Transparency compliance. Proportion of AI interactions with user-facing disclosures, watermarking adoption for generative outputs, and log availability for regulators.
- Incident and inquiry response. Time to respond to regulator information requests, internal incident escalation, and resolution of non-conformities.
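To make the bias-metric bullet concrete, the sketch below computes a demographic parity gap, one simple fairness measure among many. The data, group labels, and metric choice are illustrative assumptions; real programmes would select metrics per use case and document them in technical files.

```python
from collections import defaultdict

def selection_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """Positive-outcome rate per protected group from (group, selected) pairs."""
    totals: dict[str, int] = defaultdict(int)
    positives: dict[str, int] = defaultdict(int)
    for group, selected in decisions:
        totals[group] += 1
        positives[group] += int(selected)
    return {g: positives[g] / totals[g] for g in totals}

def demographic_parity_gap(decisions: list[tuple[str, bool]]) -> float:
    """Largest difference in selection rates across groups (0 = parity)."""
    rates = selection_rates(decisions)
    return max(rates.values()) - min(rates.values())

# Toy data: group A selected 2 of 3 times, group B 1 of 3 times.
sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
print(f"Parity gap: {demographic_parity_gap(sample):.2f}")  # 0.33
```

Tracked per system over time, this gap can sit alongside drift and robustness metrics on the same board dashboard.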
These metrics should feed board dashboards and annual transparency reports, demonstrating proactive alignment with forthcoming obligations.
Zeph Tech guides EU AI Act readiness with inventories, risk controls, and governance architectures that reconcile Parliament’s expectations with sustainable AI innovation.
Coordinate AI Act programmes with GDPR, Digital Services Act, and sectoral regimes so reporting, incident response, and transparency obligations are harmonised across regulatory touchpoints.