EU AI Act and Coordinated Plan Update — April 21, 2021
The European Commission’s updated Coordinated Plan on AI, issued 21 April 2021, urges Member States to align investment, skills, and regulatory sandboxes with the proposed AI Act, calling for national governance councils, data-space buildouts, and cross-border testing programmes.
Executive briefing: Alongside the draft Artificial Intelligence Act, the European Commission released the updated Coordinated Plan on Artificial Intelligence on 21 April 2021. The non-binding plan renews commitments between the Commission and EU/EEA Member States to mobilise at least €20 billion of combined public-private investment annually this decade, create EU-wide testing facilities and data spaces, and align national AI strategies with fundamental rights safeguards. Governments and enterprises operating in the EU must treat the plan as the implementation blueprint for the AI Act, mapping how research funding, public-sector adoption, and skills programmes converge with the legislation’s risk-based obligations.
Strategic pillars
- Investment acceleration. The plan encourages Member States to channel Recovery and Resilience Facility (RRF) grants—20% earmarked for digital—to AI infrastructure, testing, and SME support. It outlines Commission co-investment in Testing and Experimentation Facilities (TEFs) across healthcare, agri-food, manufacturing, smart cities, and edge AI, plus a pan-European network of AI Digital Innovation Hubs (DIHs). Organisations should map eligibility for Horizon Europe, Digital Europe Programme, and InvestEU funding streams.
- Fostering excellence from the lab to the market. Member States are urged to coordinate doctoral networks, attract global researchers, and expand the European AI-on-Demand platform. The plan prioritises access to high-quality datasets via Common European Data Spaces (health, mobility, energy, finance) and promotes IP licensing frameworks to commercialise research while respecting open science principles.
- Ensuring trustworthy AI. The plan prepares ecosystems for AI Act compliance through regulatory sandboxes, standardisation mandates via CEN/CENELEC and ETSI, and support for conformity assessment bodies. Fundamental rights impact assessments, cybersecurity certification alignment (ENISA), and sectoral guidelines—especially for healthcare and law enforcement—are expected outcomes.
- Adoption across the economy and public sector. Measures include AI procurement guidelines, govtech challenges, and cross-border pilots (e.g., autonomous mobility corridors, personalised medicine). Public administrations should integrate AI risk management into data governance, referencing the European Interoperability Framework.
Governance expectations for Member States
The plan asks each Member State to refresh or adopt national AI strategies by Q1 2022, designate national AI coordinators, and report annually through the revised AI Watch portal. It also calls for multi-stakeholder governance councils that include industry, academia, and civil society to monitor progress against shared indicators (skills, investment, adoption), and for transparency measures such as publishing expenditure tracking, the open datasets released, and lists of sandbox participants. Governments should align public procurement rules with the forthcoming AI Act, embedding risk classification in tender evaluation, as sketched below.
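To illustrate how risk classification might be embedded in tender evaluation, the sketch below maps a tender's AI use cases onto the risk tiers of the April 2021 proposal (prohibited practices under Article 5, Annex III high-risk systems, Article 52 transparency obligations, minimal risk). The tag vocabulary and the classify_tender helper are illustrative assumptions, not a mapping prescribed by the plan or the proposal.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk tiers from the April 2021 AI Act proposal (illustrative labels)."""
    PROHIBITED = "prohibited practice (Art. 5)"
    HIGH = "high-risk (Annex III)"
    LIMITED = "limited risk (transparency obligations, Art. 52)"
    MINIMAL = "minimal risk"

# Hypothetical tag-to-tier table a procurement team might maintain;
# a real assessment must follow the proposal's annexes, not this lookup.
TAG_TIERS = {
    "social_scoring_by_public_authority": RiskTier.PROHIBITED,
    "recruitment_screening": RiskTier.HIGH,
    "credit_scoring": RiskTier.HIGH,
    "critical_infrastructure_safety": RiskTier.HIGH,
    "chatbot_citizen_service": RiskTier.LIMITED,
    "spam_filtering": RiskTier.MINIMAL,
}

def classify_tender(use_case_tags: list[str]) -> RiskTier:
    """Return the most severe tier found among a tender's use-case tags."""
    severity = [RiskTier.PROHIBITED, RiskTier.HIGH, RiskTier.LIMITED, RiskTier.MINIMAL]
    tiers = {TAG_TIERS.get(tag, RiskTier.MINIMAL) for tag in use_case_tags}
    return next((tier for tier in severity if tier in tiers), RiskTier.MINIMAL)

if __name__ == "__main__":
    tier = classify_tender(["recruitment_screening", "chatbot_citizen_service"])
    print(f"Tender risk tier: {tier.value}")  # -> high-risk (Annex III)
```

A real evaluation would also record the relevant Annex III category and the evidence behind the classification rather than rely on a static lookup table.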
Implementation roadmap 2021–2027
The annex details actions grouped into short-term (2021–2022), medium-term (2022–2024), and longer-term (2025+) horizons. Immediate steps include launching TEFs, establishing at least one sandbox per Member State, and harmonising curricula for specialised AI master’s programmes. Medium-term objectives encompass cross-border data sharing agreements, expansion of AI testing facilities to cover robotics and agriculture, and integration of AI impact assessments into public-sector project gates. By 2027 the plan targets widespread deployment of AI in climate resilience, energy grids, and public services, plus comprehensive measurement of AI’s environmental footprint.
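For the integration of AI impact assessments into project gates, here is a minimal sketch of how a public administration might block progression to the next phase until an approved assessment is on file. The record fields and the pass_gate helper are assumptions for illustration, not a format defined by the plan.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ImpactAssessment:
    """Minimal record a project office might keep per AI component (illustrative)."""
    system_name: str
    fundamental_rights_reviewed: bool
    data_governance_documented: bool
    approved_on: date | None = None

def pass_gate(assessment: ImpactAssessment | None) -> bool:
    """A project may only leave the design phase if an approved assessment exists."""
    if assessment is None or assessment.approved_on is None:
        return False
    return assessment.fundamental_rights_reviewed and assessment.data_governance_documented

# Example: a hypothetical benefits-triage pilot cannot proceed without approval.
triage = ImpactAssessment("benefits-triage-pilot", True, True, approved_on=date(2022, 3, 1))
assert pass_gate(triage)
assert not pass_gate(None)
```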
Data spaces and infrastructure
The plan positions Common European Data Spaces as foundational. Participating programmes are expected to deploy federated cloud-to-edge infrastructure aligned with GAIA-X principles, enforce data interoperability standards, and implement secure data-sharing agreements. Organisations should engage with sectoral alliances (e.g., European Health Data Space preparatory actions) to influence governance models, consent frameworks, and anonymisation techniques. Cybersecurity requirements tie into NIS Directive obligations and ENISA certification schemes.
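To make the catalogue and interoperability expectations concrete, below is a minimal sketch of a DCAT-AP-style metadata record and a pre-sharing check an organisation might run before offering a dataset into a data space. The field names and validation rules are illustrative assumptions, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class DatasetRecord:
    """Catalogue entry loosely modelled on DCAT-AP fields (illustrative, not normative)."""
    identifier: str
    title: str
    theme: str                 # e.g. "health", "mobility", "energy"
    access_rights: str         # e.g. "restricted", "public"
    licence: str
    contains_personal_data: bool
    anonymisation_method: str | None = None
    keywords: list[str] = field(default_factory=list)

def ready_for_sharing(record: DatasetRecord) -> list[str]:
    """Return the gaps that would block sharing under a data-space agreement."""
    gaps = []
    if not record.licence:
        gaps.append("missing licence")
    if record.contains_personal_data and not record.anonymisation_method:
        gaps.append("personal data present but no anonymisation method documented")
    return gaps

record = DatasetRecord(
    identifier="ds-0042", title="Ambulance response times 2020",
    theme="health", access_rights="restricted", licence="CC-BY-4.0",
    contains_personal_data=True,
)
print(ready_for_sharing(record))  # -> ['personal data present but no anonymisation method documented']
```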
Skills and talent pipelines
- Education. Member States are encouraged to integrate AI into primary and secondary curricula, sponsor vocational reskilling, and create European Masters in AI scholarships. Enterprises should partner with DIHs to deliver apprenticeships and micro-credential programmes.
- Workforce transition. Social partners must address algorithmic management, labour rights, and equality impacts. Governance policies should include consultation with trade unions, adherence to GDPR automated decision-making safeguards, and proactive upskilling budgets.
SME and start-up enablement
The plan stresses support for SMEs via regulatory sandboxes, venture capital instruments (European Innovation Council), and procurement innovation. Corporations should anticipate collaboration requirements when applying for funding, including data sharing and ethics commitments. Participation in European Digital Innovation Hubs can grant access to technical expertise, test-before-invest resources, and matchmaking with investors.
Alignment with the AI Act
While the AI Act provides legal obligations, the Coordinated Plan details how Member States and the Commission operationalise them. Regulatory sandboxes outlined in Article 53 of the proposal depend on national authorities establishing selection criteria, data governance safeguards, and redress mechanisms. Conformity assessment infrastructure (Notified Bodies, accredited laboratories) will be co-funded through the plan. Standardisation requests under Article 40 require industry input—organisations should participate in technical committees to shape requirements for risk management, data quality, human oversight, and robustness testing.
Monitoring and indicators
AI Watch, run by the Joint Research Centre (JRC), will publish the AI Index tracking R&D expenditure, patent filings, skills indicators, and adoption metrics. Member States must supply data for annual progress reports. Enterprises receiving public funding may need to contribute key performance indicators such as the number of AI deployments meeting ethical guidelines, greenhouse gas reductions achieved, or SMEs supported. Prepare reporting pipelines and audit trails to satisfy funding agreements.
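One way to prepare those reporting pipelines is to append each KPI submission to a tamper-evident log. The sketch below hash-chains entries so an auditor can verify that nothing was altered after submission; the indicator names, evidence URIs, and JSON layout are assumptions, not an AI Watch reporting format.

```python
import hashlib, json
from datetime import datetime, timezone

def append_kpi(log: list[dict], indicator: str, value: float, evidence_uri: str) -> dict:
    """Append a KPI entry whose hash covers the previous entry, forming a simple audit chain."""
    previous_hash = log[-1]["entry_hash"] if log else "0" * 64
    entry = {
        "recorded_at": datetime.now(timezone.utc).isoformat(),
        "indicator": indicator,          # e.g. "smes_supported"
        "value": value,
        "evidence_uri": evidence_uri,    # link to the underlying documentation
        "previous_hash": previous_hash,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()
    ).hexdigest()
    log.append(entry)
    return entry

def verify(log: list[dict]) -> bool:
    """Recompute every hash to confirm the chain is intact."""
    previous_hash = "0" * 64
    for entry in log:
        body = {k: v for k, v in entry.items() if k != "entry_hash"}
        if entry["previous_hash"] != previous_hash:
            return False
        if hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest() != entry["entry_hash"]:
            return False
        previous_hash = entry["entry_hash"]
    return True

log: list[dict] = []
append_kpi(log, "smes_supported", 14, "https://example.org/evidence/q3-report")
append_kpi(log, "deployments_meeting_guidelines", 3, "https://example.org/evidence/register")
print(verify(log))  # True
```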
Cross-border cooperation
The plan reinforces cooperation beyond the EU: partnerships through the EU-US Trade and Technology Council, the OECD AI Policy Observatory, the Council of Europe, and the Global Partnership on AI. Organisations with multinational footprints should harmonise compliance strategies, aligning EU requirements with global AI governance frameworks. Participation in international sandboxes can accelerate mutual recognition of testing outcomes.
Action checklist for organisations
- Map corporate AI initiatives to the Coordinated Plan’s priority areas (e.g., climate, health, mobility) and identify eligible funding streams.
- Engage national digital authorities and DIHs to join regulatory sandboxes, ensuring readiness to meet AI Act transparency, data governance, and human oversight requirements.
- Invest in data management frameworks that support Common European Data Space participation—metadata catalogues, semantic interoperability, and secure sharing protocols.
- Establish governance structures linking ethics boards, compliance officers, and R&D leads to monitor AI Act legislative negotiations and standardisation outputs.
- Develop workforce plans emphasising AI literacy, diversity, and inclusion, tracking metrics requested by AI Watch and national coordinators.
Zeph Tech partners with European organisations to translate the Coordinated Plan into execution roadmaps that secure funding, operationalise sandboxes, and align AI programmes with forthcoming AI Act obligations.