European Parliament Committees Adopt AI Act Negotiating Mandate — May 11, 2023
The European Parliament's lead committees voted on the AI Act in May 2023, advancing the legislation. Foundation model provisions and prohibited practices were refined, and the final trilogue negotiations approached.
Editorially reviewed for factual accuracy
The European Parliament’s Internal Market and Consumer Protection (IMCO) and Civil Liberties, Justice and Home Affairs (LIBE) committees adopted their compromise position on the Artificial Intelligence Act on 11 May 2023, clearing the way for a plenary vote and subsequent trilogue negotiations with the Council. The vote tightens restrictions on high-risk AI, expands prohibitions on biometric surveillance, and introduces obligations for general-purpose and generative AI systems, including transparency about training data and energy usage. Teams building or deploying AI in the EU must assess how the revised text reshapes compliance roadmaps, product design, and governance structures ahead of final adoption.
The committees backed a risk-based framework that bans biometric categorization using sensitive traits, indiscriminate facial recognition in public spaces, predictive policing based on profiling, and untargeted scraping of facial images for training. High-risk AI—including systems impacting critical infrastructure, employment, education, law enforcement, and essential services—must implement risk management, data governance, technical documentation, logging, human oversight, and robustness testing. Foundation model providers must publish summaries of training data, conduct systemic risk assessments, and implement safeguards against generating illegal content.
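The tiered structure above can be sketched as a simple data model. This is an illustrative sketch only: the tier names and the obligation lists are paraphrased from the summary in this brief, not drawn from the Act's legal text, and the real regulation defines these categories in its articles and annexes rather than in code.

```python
from enum import Enum


class RiskTier(Enum):
    """Illustrative risk tiers mirroring the Parliament's framework."""
    PROHIBITED = "prohibited"
    HIGH_RISK = "high_risk"
    LIMITED_RISK = "limited_risk"
    MINIMAL_RISK = "minimal_risk"


# Assumed mapping of tiers to the obligations named above; the Act's
# actual requirements are far more detailed than this sketch.
OBLIGATIONS = {
    RiskTier.PROHIBITED: ["withdraw from EU market"],
    RiskTier.HIGH_RISK: [
        "risk management", "data governance", "technical documentation",
        "logging", "human oversight", "robustness testing",
    ],
    RiskTier.LIMITED_RISK: ["transparency disclosures"],
    RiskTier.MINIMAL_RISK: [],
}


def obligations_for(tier: RiskTier) -> list[str]:
    """Return the compliance obligations attached to a risk tier."""
    return OBLIGATIONS[tier]
```

A model like this can seed an internal compliance register, with each tier expanded into concrete control checklists as the final text settles.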
Capability and compliance implications
The Parliament position introduces several capability demands:
- Expanded scope for high-risk classification. The committees added AI systems influencing voter behavior and recommender systems used by very large online platforms to the high-risk list, requiring conformity assessments and post-market monitoring.
- Stronger transparency. Providers must disclose AI-generated content, ensure users know when they interact with emotion recognition or biometric categorization tools, and maintain logs accessible to regulators.
- Generative AI obligations. Foundation and generative AI developers must design safeguards to prevent generation of illegal content, publish summaries of copyrighted training data, and implement energy-efficient training practices.
- Fundamental rights impact assessments. Deployers in high-risk domains must carry out impact assessments evaluating societal, environmental, and fundamental rights implications before placing systems on the market.
These capabilities require cross-functional collaboration among legal, engineering, ethics, and operational teams to maintain compliance and public trust.
Preparation priorities
Teams should start preparing regardless of the remaining legislative steps:
- AI inventory and classification. Catalog AI systems, map their functions to the Act’s risk categories, and document intended purpose, training data provenance, and deployment context.
- Risk management frameworks. Implement lifecycle risk controls covering data quality, bias mitigation, robustness testing, and human oversight. Align with the NIST AI Risk Management Framework and ISO/IEC 42001 draft standards to simplify compliance.
- Technical documentation and logging. Develop documentation templates capturing model architecture, performance metrics, dataset descriptions, and post-market monitoring plans. Ensure logging infrastructure records input data, model outputs, and decision rationales.
- Fundamental rights impact assessments (FRIAs). For high-risk deployments, establish FRIA methodologies, stakeholder consultation processes, and remediation plans for identified harms.
- Generative AI guardrails. For foundation or generative systems, implement content filters, red-teaming, and provenance metadata (such as watermarking) to satisfy emerging transparency requirements.
- Supplier assurance. Update procurement questionnaires and contractual clauses so third-party AI providers supply conformity documentation, risk mitigations, and guarantees about training data provenance and IP rights.
Companies should integrate these steps into product development lifecycles and vendor management programs, ensuring suppliers provide necessary documentation and evidence.
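The inventory and classification step above can be prototyped with a lightweight record type. This is a minimal sketch under stated assumptions: the class name, field names, and document checklist are hypothetical choices that mirror the documentation items listed in this brief, not a prescribed schema.

```python
from dataclasses import dataclass, field


@dataclass
class AISystemRecord:
    """One row of a hypothetical AI system inventory."""
    name: str
    intended_purpose: str
    risk_tier: str                              # e.g. "high_risk"
    training_data_provenance: str = ""
    deployment_context: str = ""
    docs: dict = field(default_factory=dict)    # doc name -> complete?

    def documentation_completeness(self) -> float:
        """Share of required documents marked complete (0.0 to 1.0)."""
        if not self.docs:
            return 0.0
        return sum(self.docs.values()) / len(self.docs)


# Example: a high-risk credit-scoring system with partial documentation.
record = AISystemRecord(
    name="credit-scoring-v2",
    intended_purpose="consumer loan approval",
    risk_tier="high_risk",
    docs={"technical_documentation": True, "risk_assessment": False},
)
```

A completeness score per record gives compliance teams a simple gap measure to prioritize documentation work across the portfolio.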
Responsible governance
The AI Act heightens governance expectations across the enterprise:
- Board oversight. Boards should receive regular updates on AI risk inventories, compliance progress, and regulatory engagement. Assign accountability for AI ethics to board committees or dedicated AI governance councils.
- Policy alignment. Update AI policies to reflect Parliament’s prohibitions, transparency obligations, and documentation requirements. Embed cross-border data transfer considerations and cybersecurity obligations.
- Accountability structures. Define roles for AI compliance officers, data stewards, and human oversight leads. Establish escalation paths for risk findings and regulator inquiries.
- Stakeholder engagement. Plan for public transparency reports, consultation with affected communities, and collaboration with worker councils where AI impacts employment decisions.
Governance frameworks should integrate with GDPR, DSA, DMA, and sector-specific regulations to create a coherent compliance narrative.
Legislative timeline and enforcement outlook
Following the committee vote, the Parliament will debate the text in plenary before entering trilogues with the Council and Commission. Once adopted, prohibitions apply six months after entry into force, high-risk obligations after 24 months, and general-purpose AI duties on an accelerated schedule, meaning teams must sequence readiness over a multi-year period.
Enforcement powers include fines up to €40 million or 7% of global turnover for prohibited practices, and national supervisory authorities will coordinate through a new European AI Board to ensure consistent oversight.
Tailored approaches
- Technology platforms. Very large online platforms must extend content moderation governance to AI recommenders, implement FRIA processes, and ensure generative tools comply with copyright transparency obligations.
- Financial services. High-risk credit scoring and anti-fraud systems require rigorous data governance, stress testing, and human-in-the-loop controls, aligning AI Act requirements with EBA, ESMA, and ECB guidelines.
- Healthcare. Medical AI classified as high-risk should align with MDR/IVDR processes, integrate post-market surveillance, and document clinical validation and human oversight pathways.
- Public sector. Law enforcement and public administration must phase out prohibited biometric surveillance practices, adopt FRIA frameworks, and ensure procurement contracts require AI Act compliance.
Measurement and reporting
Prepare quantitative and qualitative metrics that evidence compliance:
- AI inventory coverage. Percentage of AI systems classified by risk tier, with documentation completeness scores.
- FRIA completion and remediation. Number of FRIAs conducted, remediation actions executed, and stakeholder feedback incorporated.
- Model performance monitoring. Drift detection, bias metrics across protected groups, and robustness test outcomes tracked over time.
- Transparency compliance. Proportion of AI interactions with user-facing disclosures, watermarking adoption for generative outputs, and log availability for regulators.
- Incident and inquiry response. Time to respond to regulator information requests, internal incident escalation, and resolution of non-conformities.
These metrics should feed board dashboards and annual transparency reports, demonstrating preventive alignment with forthcoming obligations.
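Two of the metrics above can be computed with short helper functions. This is an illustrative sketch: the record structure and the `risk_tier` field name are assumptions carried over from the inventory step, not a standard format.

```python
def inventory_coverage(records: list[dict]) -> float:
    """Percentage of inventoried AI systems with a risk tier assigned.

    Each record is a dict; "risk_tier" is an assumed field name.
    """
    if not records:
        return 0.0
    classified = sum(1 for r in records if r.get("risk_tier"))
    return 100.0 * classified / len(records)


def fria_completion(conducted: int, required: int) -> float:
    """Share of required fundamental rights impact assessments done."""
    return conducted / required if required else 1.0
```

Feeding these values into a board dashboard at a fixed cadence turns the qualitative readiness narrative into trackable trends.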
This brief supports EU AI Act readiness by outlining the inventories, risk controls, and governance architectures that align parliamentary expectations with sustainable AI innovation.
Coordinate AI Act programs with GDPR, Digital Services Act, and sectoral regimes so reporting, incident response, and transparency obligations are harmonized across regulatory touchpoints.
Documentation
- MEPs ready for negotiations with Council on Artificial Intelligence Act — European Parliament
- EU lawmakers agree on tougher rules for generative AI — European Parliament
- Questions and answers on the Artificial Intelligence Act — European Parliament
- Proposal for a Regulation laying down harmonized rules on artificial intelligence — European Commission
- AI Risk Management Framework — National Institute of Standards and Technology