Council of Europe CAHAI Approves AI Governance Feasibility Study — December 17, 2021
CAHAI’s feasibility study endorses negotiating a Council of Europe convention on AI with mandatory rights safeguards, transparency duties, and coordinated national oversight, backed by soft-law tools for capacity building.
Executive briefing: On 17 December 2021 the Council of Europe’s Ad Hoc Committee on Artificial Intelligence (CAHAI) adopted its Feasibility Study on a Legal Framework on Artificial Intelligence, concluding two years of multistakeholder consultations.1 The study recommends negotiating a legally binding Council of Europe convention establishing common principles for trustworthy AI, complemented by non-binding guidance, capacity-building programmes, and cooperation mechanisms.1,2 Governments and enterprises operating in the Council’s 47 member states should prepare for convention talks that will define baseline obligations for transparency, accountability, risk management, and remedy.
Core elements of the feasibility study
- Convention scope. The proposed treaty would cover the lifecycle of AI systems, ensuring respect for human rights, democracy, and the rule of law. It would apply to both public authorities and private actors where state obligations are engaged through procurement, delegation, or oversight.1
- General principles. CAHAI outlines binding principles—human dignity, fairness, transparency, accountability, non-discrimination, privacy, and data governance—that must inform AI design, deployment, and oversight.1
- Risk-based obligations. The study recommends mandatory human rights, democracy, and rule of law impact assessments for high-risk AI, proportionate requirements for lower-risk use cases, and outright prohibitions for AI practices incompatible with Council of Europe standards.1
- Supervision and enforcement. States would designate independent supervisory authorities with investigatory powers, establish complaint and redress mechanisms, and enable judicial review for individuals harmed by AI decisions.1
- Complementary instruments. Soft-law tools—model laws, certification schemes, technical guidelines, and cooperative networks—would support implementation and align national practices.1,2
Strategic implications
The CAHAI study signals the Council of Europe’s intention to shape global AI governance beyond the EU’s proposed AI Act. The convention would be open to non-European states and organisations, creating a broader normative framework anchored in the European Convention on Human Rights.1 For enterprises, this means AI risk management must satisfy both EU regulatory requirements and Council of Europe standards, particularly for applications involving biometric identification, algorithmic decision-making in justice or policing, and automated content moderation.
Member states will transition CAHAI’s mandate to the newly established Committee on Artificial Intelligence (CAI), which will negotiate the treaty text, elaborate soft-law instruments, and coordinate capacity building.1 Regulators are expected to align national AI strategies with CAHAI recommendations, including setting up national AI supervisory authorities and ensuring independent oversight of public-sector deployments.
Implementation priorities for organisations
- Governance mapping. Inventory AI systems across business functions, identify high-risk use cases (e.g., biometric identification, critical infrastructure management), and map applicable Council of Europe principles to existing governance controls.
- Human rights impact assessments. Develop structured assessment methodologies covering purpose, data quality, bias mitigation, proportionality, stakeholder consultation, and mitigation plans. Align with CAHAI’s call for ex-ante and ex-post reviews.1
- Transparency and explainability. Implement documentation and model reporting templates that articulate system objectives, training data provenance, performance metrics, and limitations. Provide user-facing explanations and contestability pathways.1
- Accountability frameworks. Assign accountable officers for AI risk management, integrate oversight into board and executive committees, and ensure internal audit reviews AI controls regularly.
- Redress mechanisms. Establish processes for individuals to challenge automated decisions, appeal outcomes, and seek remediation—reflecting the convention’s emphasis on access to remedy.1
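The inventory, risk-tiering, and assessment-tracking steps above can be sketched as a minimal data model. This is an illustrative sketch: the tier labels, record fields, and review rule are assumptions for demonstration, not categories defined by CAHAI.

```python
from dataclasses import dataclass
from enum import Enum

# Illustrative risk tiers loosely echoing CAHAI's risk-based approach;
# the labels are assumptions, not CAHAI-defined categories.
class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    name: str
    business_function: str
    risk_tier: RiskTier
    impact_assessment_done: bool = False   # ex-ante human rights assessment
    transparency_doc_complete: bool = False
    accountable_officer: str = ""          # named owner for AI risk
    open_complaints: int = 0               # unresolved redress requests

def needs_immediate_review(record: AISystemRecord) -> bool:
    """Flag prohibited or high-risk systems lacking a documented assessment."""
    return (record.risk_tier in (RiskTier.PROHIBITED, RiskTier.HIGH)
            and not record.impact_assessment_done)

# Toy inventory across business functions.
inventory = [
    AISystemRecord("face-match-gateway", "physical security", RiskTier.HIGH),
    AISystemRecord("doc-summariser", "back office", RiskTier.MINIMAL,
                   impact_assessment_done=True),
]
backlog = [r.name for r in inventory if needs_immediate_review(r)]
```

A real register would add fields for data provenance, stakeholder consultation records, and review dates; the point here is that governance mapping becomes queryable once systems are captured in a structured form.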
National authority readiness
Governments should begin planning institutional frameworks that CAHAI envisions. This includes designating lead ministries, empowering data protection or digital regulators with AI supervision mandates, and creating multidisciplinary advisory bodies involving civil society, academia, and industry.1 Capacity-building programmes should train judges, procurement officials, and public administrators on AI ethics and risk assessments, using the soft-law toolkits CAHAI proposes.1,2
Controls and metrics
- Key risk indicators. Track the number of AI systems lacking documented impact assessments, unresolved bias findings, or unaddressed human rights complaints.
- Key performance indicators. Measure completion rates of AI transparency documentation, percentage of high-risk projects reviewed by ethics committees, and implementation of mitigation actions from impact assessments.
- Oversight metrics. Monitor supervisory authority engagement, audits completed, and cross-border cooperation requests under the future convention framework.
- Capacity metrics. Record training hours delivered to developers, compliance teams, and public officials on CAHAI-aligned governance practices.
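The indicators above can be computed mechanically once an AI register exists. The following sketch derives a KRI and two KPIs from a toy inventory; the field names, flags, and example figures are illustrative assumptions, not prescribed metrics.

```python
# Toy AI register; fields are hypothetical, chosen to match the
# indicators described above.
systems = [
    {"name": "credit-scoring", "high_risk": True,  "impact_assessment": True,
     "transparency_doc": True,  "open_complaints": 0, "ethics_review": True},
    {"name": "cctv-analytics", "high_risk": True,  "impact_assessment": False,
     "transparency_doc": False, "open_complaints": 2, "ethics_review": False},
    {"name": "chat-summary",   "high_risk": False, "impact_assessment": True,
     "transparency_doc": True,  "open_complaints": 0, "ethics_review": False},
]

# KRIs: systems missing a documented impact assessment; unresolved complaints.
kri_missing_assessment = sum(1 for s in systems if not s["impact_assessment"])
kri_open_complaints = sum(s["open_complaints"] for s in systems)

# KPIs: transparency documentation completion rate across all systems,
# and ethics-committee review coverage among high-risk systems.
kpi_doc_rate = sum(s["transparency_doc"] for s in systems) / len(systems)
high_risk = [s for s in systems if s["high_risk"]]
kpi_ethics_coverage = sum(s["ethics_review"] for s in high_risk) / len(high_risk)
```

Reporting these figures to a board or supervisory authority on a fixed cadence turns the metric list above into an auditable control rather than a one-off exercise.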
Sector-specific considerations
- Public sector and justice. Courts and law enforcement agencies should prepare for stringent oversight of predictive policing, risk assessment algorithms, and biometric systems, ensuring transparency, oversight, and remedy options.1
- Healthcare. Health systems must evaluate diagnostic AI for fairness, safety, and informed consent, coordinating with medical device regulators and ethics boards.
- Financial services. Banks and insurers should integrate CAHAI principles into credit scoring, fraud detection, and anti-money-laundering AI, aligning with existing fairness and explainability obligations.
- Platform economy. Online platforms should review content moderation algorithms, recommender systems, and targeted advertising models for compliance with human rights standards, especially freedom of expression.
Programme risks and mitigations
- Regulatory uncertainty. Mitigation: engage in CAI consultations, monitor draft treaty provisions, and contribute industry expertise to soft-law development.
- Resource constraints. Mitigation: prioritise high-risk AI systems for immediate assessment, leverage shared risk management toolkits, and partner with academic institutions for methodological support.
- Data and model opacity. Mitigation: adopt model governance platforms, require suppliers to provide documentation, and enforce contractual obligations for transparency and access to model internals.
- Cross-border compliance. Mitigation: harmonise AI controls across EU and Council of Europe frameworks to avoid duplicative assessments, and create central compliance repositories accessible to global teams.
Forward look
The CAI is expected to draft the convention text in 2022–2023, with potential adoption by the Committee of Ministers thereafter.1 Parallel soft-law instruments—guidelines on impact assessments, conformity assessment schemes, and cooperation networks—will provide operational detail.2 Organisations that align early with CAHAI’s blueprint can influence negotiations, demonstrate leadership in human-centric AI, and reduce compliance costs once the convention enters into force.
Sources
- 1 Council of Europe: Feasibility Study on a Legal Framework on Artificial Intelligence.
- 2 Council of Europe press release on CAHAI feasibility study adoption.
Organisations should also monitor how CAI coordinates with other global initiatives, such as the OECD’s AI work and the G7/G20 trustworthy AI principles, to ensure governance programmes remain interoperable across jurisdictions.
Zeph Tech helps organisations operationalise Council of Europe AI safeguards through impact assessment design, governance frameworks, and stakeholder engagement planning.