Policy Briefing — EU AI Act Council General Approach
EU telecom ministers agreed the Council’s general approach to the AI Act, adding guardrails for general-purpose systems and biometric surveillance alongside governance obligations that companies must operationalise ahead of trilogue negotiations.
Executive briefing: On 6 December 2022 EU telecom ministers meeting in the Transport, Telecommunications and Energy Council endorsed a 400-page general approach to the Artificial Intelligence Act. The mandate empowers the Council to open trilogue negotiations with the European Parliament and European Commission in 2023. The text reshapes definitions, adds explicit obligations for general-purpose AI (GPAI) developers, recalibrates prohibited biometric practices, and gives national market surveillance authorities broader investigative powers. Risk, compliance, and AI governance teams now have a clear picture of the Council’s expectations for lifecycle controls, fundamental rights safeguards, and supply chain diligence even before the final regulation is adopted.
The Council compromise retains the four-tier risk framework proposed by the Commission—unacceptable, high-risk, limited-risk transparency, and minimal-risk—but introduces new nuance. Real-time remote biometric identification in publicly accessible spaces remains prohibited save for narrow law-enforcement exemptions subject to judicial authorisation and record keeping. Emotion recognition and biometric categorisation for sensitive attributes are pushed into the prohibited category when used in workplaces or education. The text clarifies that social scoring bans cover both public authorities and private actors who create generalised risk assessments.
General-purpose and high-risk AI adjustments
The most consequential change for technology companies is the Council’s regime for GPAI systems. Providers of large-scale foundation models must publish detailed technical documentation, including model capabilities, limitations, risk mitigations, and data governance processes. Downstream deployers integrating GPAI components into high-risk applications are required to perform gap analyses against Annex IV technical documentation, confirm conformity assessment coverage, and implement additional safeguards where the base model’s controls are insufficient.
- GPAI transparency packs. Providers must supply a summary of training data characteristics, target performance metrics, and known bias or robustness limitations to downstream users.
- Model change logs. Council text requires providers to maintain change management records, versioning, and rollback procedures, enabling notified bodies and market surveillance authorities to audit lifecycle management.
- Fundamental rights impact triggers. Article 29a introduces a duty for public authorities and certain private operators deploying high-risk systems in areas such as credit scoring, biometric access control, or critical infrastructure to conduct a fundamental rights impact assessment (FRIA) before deployment.
- Data governance. Annex IV emphasises representativeness, bias monitoring, and data quality documentation for training, validation, and testing datasets. Providers must document data provenance, cleaning methods, and statistical properties relevant to the intended purpose.
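The gap analysis the Council expects of downstream deployers can be operationalised as a simple completeness check against the documentation themes above. The sketch below is illustrative only: the field names paraphrase the Annex IV themes discussed in this briefing and are not quotations from the Council text, and a real programme would map them to its own schema.

```python
# Sketch of a documentation gap check for GPAI components integrated into
# high-risk systems. Field names are illustrative paraphrases of the
# Annex IV themes above, not official terminology.
REQUIRED_FIELDS = {
    "capabilities", "limitations", "risk_mitigations",
    "training_data_summary", "performance_metrics",
    "bias_and_robustness_limits", "change_log",
}

def documentation_gaps(transparency_pack: dict) -> set[str]:
    """Return the Annex IV-style fields missing or empty in a vendor pack."""
    return {f for f in REQUIRED_FIELDS if not transparency_pack.get(f)}

# A vendor pack covering only capabilities and limitations, with an
# empty change log, leaves five gaps to remediate before integration.
pack = {"capabilities": "...", "limitations": "...", "change_log": []}
missing = documentation_gaps(pack)
```

Flagging empty values (such as the empty change log here) as gaps, rather than only absent keys, keeps the check aligned with the substance of the obligation rather than its form.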
Compliance leaders should map the Council edits against existing AI system inventories and technical risk management frameworks. Annex III still classifies employment, access to essential services, creditworthiness, migration control, and critical infrastructure management as high-risk domains, but clarifies that pure cybersecurity anomaly detection and content moderation tools may fall into the limited-risk tier if they do not produce legal or similarly significant effects. Organisations must maintain defensible rationales for each classification decision and expect regulators to probe the assumptions behind any finding of no significant effect.
Governance, enforcement, and penalties
The Council text strengthens market surveillance. Member states must designate an AI supervisory authority with competent technical staff, access to documentation, and the ability to require corrective actions within specified timeframes. The compromise introduces graduated administrative fines: up to €35 million or 7 % of global turnover for prohibited practices, €15 million or 3 % for most other violations, and €7.5 million or 1.5 % for supplying incorrect information to authorities. SMEs and start-ups benefit from proportionality clauses but must still demonstrate active compliance programmes.
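The graduated fine structure is a "higher of" rule: each tier caps at the greater of a fixed amount or a share of worldwide annual turnover. The worked example below uses the figures as stated in this briefing; the tier names are illustrative labels, and the final adopted amounts may differ after trilogues.

```python
# Fine ceilings per tier: (fixed amount in EUR, share of global turnover).
# Figures as stated in this briefing; subject to change in trilogues.
FINE_TIERS = {
    "prohibited_practice": (35_000_000, 0.07),
    "other_violation": (15_000_000, 0.03),
    "incorrect_information": (7_500_000, 0.015),
}

def max_fine(tier: str, global_turnover_eur: float) -> float:
    """Ceiling = the higher of the fixed amount or the turnover share."""
    fixed, pct = FINE_TIERS[tier]
    return max(fixed, pct * global_turnover_eur)

# For a group with EUR 1bn turnover, the turnover-based cap dominates:
# 7% of 1bn (70m) exceeds the 35m fixed amount.
ceiling = max_fine("prohibited_practice", 1_000_000_000)
```

For smaller undertakings the fixed amount dominates, which is why the proportionality clauses for SMEs and start-ups matter in practice.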
At EU level, a new AI Board composed of member state representatives will coordinate national enforcement, issue guidance, and support sandbox initiatives. The Board will cooperate with the European Data Protection Board on overlapping issues such as fundamental rights assessments, data minimisation, and transparency. The Council also opens the door for harmonised standards by CEN/CENELEC to support technical conformity, including risk management, quality management systems, and logging requirements.
Implementation priorities for companies
Organisations building or procuring AI systems should launch multi-workstream programmes ahead of trilogues to avoid compressed timelines once the Act is adopted. Key actions include:
- Inventory and classification refresh. Update AI system registries to capture intended purpose, user groups, decision impacts, reliance on GPAI components, and interface with safety-critical processes. Tie classification logic to Annex III references and keep FRIA determinations on file.
- Policy harmonisation. Align AI governance charters with Council expectations for human oversight, risk management, incident response, and post-market monitoring. Ensure accountability matrices include data protection officers, product security, legal, and ethics committees.
- Supplier diligence. Embed Council documentation demands into procurement questionnaires and contract clauses. Require GPAI vendors to furnish transparency summaries, model cards, bias testing results, and security attestations.
- Technical documentation. Build living documentation aligned with Annex IV covering data sets, performance metrics, validation protocols, cybersecurity controls, human-machine interface design, and residual risk acceptance.
- Training and awareness. Provide specialised training for product managers, ML engineers, and risk owners on the Council’s clarifications, especially around prohibited biometric practices and FRIA obligations.
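The inventory refresh and FRIA tracking described above can share one registry shape. The entry below is a minimal sketch; every key name and the example values are assumptions meant to be mapped onto an organisation's own asset-management schema, not a schema from the Council text.

```python
# Minimal registry entry for the inventory refresh above; keys and
# values are illustrative, not drawn from the Council text.
registry_entry = {
    "system_id": "cv-screening-001",
    "intended_purpose": "rank job applications for recruiter review",
    "user_groups": ["recruiters"],
    "decision_impact": "access to employment",   # Annex III area
    "gpai_components": ["third-party language model"],
    "safety_critical_interface": False,
    "annex_iii_reference": "employment",
    "fria_required": True,
    "fria_completed_on": None,                   # still pending
}

def fria_blockers(entries: list[dict]) -> list[str]:
    """IDs of systems that must not deploy until a FRIA is on file."""
    return [
        e["system_id"] for e in entries
        if e["fria_required"] and e["fria_completed_on"] is None
    ]

blocked = fria_blockers([registry_entry])
```

A deployment gate that consumes this list keeps the "FRIA before deployment" duty enforceable rather than aspirational.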
Outcome testing and assurance expectations
The Council text reinforces the need for continuous outcome monitoring. Organisations should design testing regimes that evidence safety, fairness, and robustness:
- Pre-deployment evaluation. Conduct scenario-based testing that stress-tests model performance against edge cases, demographic subgroups, and adversarial inputs. Document acceptance criteria and sign-off by accountable executives.
- Bias and fairness metrics. Select quantitative fairness indicators relevant to the use case (e.g., equal opportunity difference, demographic parity) and set thresholds aligned with fundamental rights obligations. Integrate tests into automated ML pipelines.
- Human oversight drills. Simulate override and escalation workflows for human supervisors, ensuring they have adequate information, training, and time to intervene as required by Article 14.
- Incident response and logging. Implement tamper-evident logging of inputs, outputs, and operator actions. Use anomaly detection to flag drift or unexpected behaviour and rehearse notification processes to authorities within the Council’s timelines.
- Post-market surveillance. Collect user feedback, monitor key risk indicators, and schedule periodic independent audits. Feed lessons learned into model updates and documentation change logs.
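The two fairness indicators named above can be computed directly from binary predictions, labels, and a group attribute, which makes them easy to gate in an automated pipeline. The sketch below assumes a two-group comparison and an illustrative 0.1 acceptance threshold; real thresholds must be set per use case against fundamental rights obligations, and production pipelines would normally use a vetted library rather than hand-rolled metrics.

```python
# Two fairness indicators from the testing list above, computed for a
# binary classifier and two demographic groups "A" and "B". The 0.1
# threshold is an illustrative acceptance criterion, not a legal one.
def rate(preds, mask):
    sel = [p for p, m in zip(preds, mask) if m]
    return sum(sel) / len(sel)

def demographic_parity_diff(preds, group):
    """|P(pred=1 | group A) - P(pred=1 | group B)|."""
    return abs(rate(preds, [g == "A" for g in group])
               - rate(preds, [g == "B" for g in group]))

def equal_opportunity_diff(preds, labels, group):
    """True-positive-rate gap between groups, over positive labels only."""
    pos_a = [y == 1 and g == "A" for y, g in zip(labels, group)]
    pos_b = [y == 1 and g == "B" for y, g in zip(labels, group)]
    return abs(rate(preds, pos_a) - rate(preds, pos_b))

preds  = [1, 0, 1, 1, 0, 1]
labels = [1, 0, 1, 1, 1, 0]
group  = ["A", "A", "A", "B", "B", "B"]
assert demographic_parity_diff(preds, group) <= 0.1  # pipeline gate
```

Wiring assertions like the final line into CI/CD turns the documented acceptance criteria into a hard release gate, which is exactly the kind of evidence market surveillance authorities can audit.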
Cross-regulatory alignment
The Council compromise interacts with other EU digital regulations. Organisations should coordinate compliance with the Digital Services Act, Data Governance Act, Cyber Resilience Act proposal, and sectoral laws. For instance, online platforms deploying recommender systems must align transparency reports under the DSA with AI Act logging obligations. Financial institutions must integrate AI controls with existing EBA outsourcing and model risk management frameworks. Healthcare providers need to reconcile AI conformity assessment with Medical Devices Regulation pathways where software qualifies as software as a medical device (SaMD).
Multinationals should also monitor developments in third countries. The UK’s evolving pro-innovation AI regulation and the U.S. Blueprint for an AI Bill of Rights emphasise similar accountability principles, enabling global AI governance frameworks that reuse impact assessment templates, risk registers, and audit checklists. Harmonising approaches reduces duplication and facilitates supplier engagement.
Preparation milestones
Although trilogues may adjust the final text, the Council position signals the minimum compliance bar. Organisations should define roadmaps covering:
- 2023 Q1-Q2: perform gap assessments against Council annexes, prioritise high-risk systems, and budget for documentation and audit tooling.
- 2023 Q3-Q4: pilot fundamental rights assessments, integrate fairness testing into CI/CD pipelines, and update procurement templates for GPAI transparency obligations.
- 2024 onwards: rehearse conformity assessments with notified bodies, build cross-functional AI governance committees, and align monitoring dashboards with market surveillance reporting expectations.
Teams that start now will be able to evidence reasonable efforts and proportionality, protecting innovation while satisfying future supervisory scrutiny.