European Commission Launches AI Innovation Package — September 13, 2023
The European Commission’s 13 September 2023 AI innovation package couples an AI Factory supercomputing proposal with GenAI4EU funding and SME vouchers. Governance committees must align oversight with the AI Act, delivery teams must execute phased adoption plans, and privacy offices must embed DSAR safeguards in new AI services.
Executive briefing: On 13 September 2023 the European Commission unveiled an AI innovation package to accelerate trustworthy artificial intelligence across the EU. The package pairs a proposal to amend the EuroHPC Joint Undertaking Regulation—creating “AI Factories” that offer start-ups priority access to Europe’s petascale and exascale supercomputers—with fresh funding instruments such as the GenAI4EU initiative, InvestEU guarantees, and innovation vouchers delivered through European Digital Innovation Hubs. It complements the near-final EU AI Act by ensuring European developers can train models on sovereign infrastructure while complying with forthcoming governance and transparency rules. Boards should treat the package as a trigger to refresh AI strategies, inventory generative AI pilots, and document how DSAR safeguards and human oversight will extend to new use cases enabled by these programmes.
Governance implications
The Commission emphasised that access to public supercomputers and funding would be contingent on adherence to EU values and the AI Act’s risk-based requirements. Corporate governance committees must therefore link innovation roadmaps with compliance frameworks: incorporating AI risk appetite statements into enterprise policies, mandating that AI Factory usage agreements flow through legal review, and aligning programme KPIs with the ethical AI principles the organisation has adopted. Because the AI Act will introduce obligations around transparency, logging, and human oversight, boards should expect these requirements to be baked into project charters for any initiative benefiting from Commission support.
Risk committees broaden third-party oversight as well. Participation in GenAI4EU pilots or European Digital Innovation Hub (EDIH) testbeds often entails collaboration with research institutions and SMEs. Governance frameworks must define data-sharing agreements, model governance standards, and exit criteria for pilots that fail to demonstrate compliance. Privacy officers and DPOs are added to steering committees to ensure DSAR and data minimisation considerations are represented when selecting datasets for training or fine-tuning.
Implementation roadmap
90-day actions: Organisations start by mapping existing and planned AI initiatives, identifying which could leverage EU supercomputing resources or funding. They prioritise use cases aligned with GenAI4EU’s focus sectors—healthcare diagnostics, energy optimisation, climate modelling, public administration, manufacturing, and creative industries. Portfolio offices prepare applications for the AI Factory programme by compiling technical requirements (model size, dataset volumes), compliance documentation (data protection impact assessments, algorithmic impact assessments), and resource plans detailing how supercomputing slots will be consumed.
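The portfolio-mapping step above can be sketched as a simple inventory filter. This is an illustrative sketch, not an official schema: the record fields, sector labels, and readiness criteria are assumptions chosen to mirror the focus sectors and compliance documentation described above.

```python
from dataclasses import dataclass

# Hypothetical inventory record for an AI initiative being screened for
# EU supercomputing access or GenAI4EU funding. Field names are illustrative.
@dataclass
class AIInitiative:
    name: str
    sector: str                  # e.g. "healthcare", "energy", "manufacturing"
    uses_personal_data: bool
    dpia_completed: bool
    estimated_gpu_hours: int

# Loose mapping of the GenAI4EU focus sectors listed in the text.
GENAI4EU_SECTORS = {
    "healthcare", "energy", "climate", "public administration",
    "manufacturing", "creative industries",
}

def shortlist(initiatives: list[AIInitiative]) -> list[AIInitiative]:
    """Keep initiatives in a GenAI4EU focus sector whose compliance
    documentation (here reduced to the DPIA) is application-ready."""
    return [
        i for i in initiatives
        if i.sector in GENAI4EU_SECTORS
        and (i.dpia_completed or not i.uses_personal_data)
    ]

portfolio = [
    AIInitiative("diagnostic-imaging", "healthcare", True, True, 50_000),
    AIInitiative("ad-copy-generator", "marketing", False, False, 2_000),
    AIInitiative("grid-forecasting", "energy", False, False, 10_000),
]
candidates = shortlist(portfolio)
```

In practice the readiness test would cover the full documentation set (algorithmic impact assessments, resource plans), but the same filter-and-rank pattern applies.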
IT and data teams review the Commission’s proposal to extend the EuroHPC mandate, noting obligations around data sovereignty, cybersecurity, and fair access. They draft architectural patterns for connecting enterprise environments to EuroHPC supercomputers via secure virtual private networks or federated learning gateways. Concurrently, finance and grants teams track upcoming calls under Horizon Europe, Digital Europe, and the European Innovation Council to secure co-funding for AI projects.
Medium-term (6–12 months): Once accepted into AI Factories or EDIH programmes, delivery teams integrate Commission-provided toolkits into MLOps pipelines. They refactor data ingestion workflows to run on HPC nodes, manage scheduling via Slurm or similar job managers, and implement secure data transfer solutions (S3-compatible gateways, Globus, or encrypted tunnels). Programme managers establish checkpoints to assess progress against GenAI4EU objectives, ensuring pilots produce reusable components (datasets, model weights, evaluation frameworks) that can scale into production.
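Scheduling work through Slurm, as described above, usually means templating a batch script per training run. A minimal sketch follows; the partition name, GPU count, and training command are placeholders, since real values come from the hosting centre's documentation, and submission itself (via `sbatch`) is out of scope here.

```python
# Render a Slurm batch script for a hypothetical training job on an HPC
# partition. All resource values are illustrative placeholders.
def render_sbatch(job_name: str, partition: str, gpus: int,
                  hours: int, command: str) -> str:
    lines = [
        "#!/bin/bash",
        f"#SBATCH --job-name={job_name}",
        f"#SBATCH --partition={partition}",
        f"#SBATCH --gres=gpu:{gpus}",
        f"#SBATCH --time={hours:02d}:00:00",
        "#SBATCH --output=%x-%j.out",  # job name + job id in the log filename
        command,
    ]
    return "\n".join(lines) + "\n"

script = render_sbatch("genai4eu-pilot", "gpu", gpus=4, hours=12,
                       command="srun python train.py --config pilot.yaml")
# The rendered script would then be submitted with `sbatch script.sh`.
```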
Longer-term (12+ months): Organisations incorporate lessons learned into enterprise AI platforms. They build reusable governance artefacts—model cards, risk registers, transparency reports—that align with AI Act obligations. Business units develop operational support plans for AI-enabled services, including change management, model monitoring, and DSAR response procedures. Partnerships formed through EDIHs may evolve into joint ventures or supplier arrangements, requiring refreshed procurement oversight.
Funding and partnership execution
The innovation package mobilises multiple funding streams: Horizon Europe clusters, the Digital Europe Programme, the European Innovation Council’s AI and blockchain work programme, InvestEU guarantees via the European Investment Bank, and national co-financing from Member States participating in EuroHPC. Finance teams catalogue relevant calls, track eligibility (SMEs, mid-caps, research organisations), and assemble consortia where required. Legal counsel reviews grant agreements to ensure intellectual property, data usage rights, and confidentiality align with corporate policies.
Where SMEs or start-ups apply for innovation vouchers through EDIHs, larger enterprises supporting them as partners must articulate collaboration frameworks. These include data-sharing agreements, security baselines for access to enterprise sandboxes, and exit clauses covering IP ownership. Corporate venture arms evaluate whether to co-invest alongside InvestEU-backed funds, balancing strategic access to trustworthy AI solutions with governance controls that prevent conflicts of interest.
DSAR and data protection safeguards
Generative AI and high-performance computing intensify privacy considerations. Data protection officers insist on DPIAs for every project using AI Factory resources, documenting data provenance, retention, and lawful bases. When training data contains personal information, teams must establish anonymisation or synthetic data strategies consistent with guidance from the European Data Protection Board. They define DSAR workflows for AI systems: logging dataset contributions, mapping personal data transformations, and ensuring that subject access or deletion requests can be fulfilled even when models are trained on aggregated datasets.
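The DSAR workflow above hinges on a provenance index that records which datasets (and which transformations of them) used a given subject's records. A minimal sketch, assuming pseudonymous subject identifiers and invented dataset names:

```python
from collections import defaultdict

# Illustrative provenance index linking pseudonymous subject identifiers to
# the training datasets and transformations that touched their records, so
# an access or deletion request can be resolved even after aggregation.
class ProvenanceIndex:
    def __init__(self):
        self._by_subject = defaultdict(list)

    def record(self, subject_id: str, dataset: str, transformation: str):
        self._by_subject[subject_id].append(
            {"dataset": dataset, "transformation": transformation}
        )

    def access_request(self, subject_id: str) -> list[dict]:
        """Return every recorded dataset contribution for a subject."""
        return list(self._by_subject.get(subject_id, []))

idx = ProvenanceIndex()
idx.record("subj-001", "claims-2023", "pseudonymised")
idx.record("subj-001", "claims-2023-agg", "aggregated for fine-tuning")
report = idx.access_request("subj-001")
```

A production system would back this with durable, access-controlled storage, but the core mapping (subject → contributions) is what makes DSAR fulfilment possible for aggregated training data.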
Privacy engineering teams integrate model registries with DSAR tooling so that rights requests trigger impact analysis: if a data subject asks for deletion, the organisation evaluates whether retraining or applying machine unlearning techniques is required. They also ensure that prompts, outputs, and model audit logs are retained according to retention schedules, enabling transparency obligations under the AI Act and GDPR. Communication teams prepare DSAR response templates explaining how generative models use personal data, referencing Commission expectations for trustworthy AI.
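The deletion-impact analysis described above can be reduced to a set intersection between a model registry and the subject's provenance record. A sketch under those assumptions, with invented model and dataset names:

```python
# Hypothetical registry mapping models to the datasets they were trained on.
MODEL_REGISTRY = {
    "triage-classifier-v3": {"claims-2023", "triage-notes-2022"},
    "summariser-v1": {"public-guidelines"},
}

def models_affected_by_deletion(subject_datasets: set[str]) -> list[str]:
    """Models whose training data intersects the datasets containing the
    subject's records; each then needs a retraining-vs-unlearning decision."""
    return sorted(
        model for model, datasets in MODEL_REGISTRY.items()
        if datasets & subject_datasets
    )

affected = models_affected_by_deletion({"claims-2023"})
```

The output is the worklist for the impact analysis; whether each affected model is retrained, patched with machine unlearning techniques, or retired is a separate risk decision.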
Security, compliance, and assurance
Accessing EuroHPC supercomputers introduces new cybersecurity requirements. Security architects implement zero-trust connectivity, multi-factor authentication, and privileged access management for HPC credentials. They review the EuroHPC security policy, aligning incident response runbooks with Joint Undertaking notification procedures. Where data traverses cross-border links to HPC facilities located in Finland, Spain, Italy, or France, legal teams confirm compliance with GDPR international transfer rules and national data localisation laws.
Compliance teams map Commission funding conditions to internal controls. For example, GenAI4EU emphasises adherence to the EU Code of Practice on Disinformation; organisations embedding generative models in content workflows implement guardrails to prevent synthetic misinformation and maintain audit logs for regulator review. Environmental, social, and governance (ESG) reporting teams track energy consumption associated with HPC usage, responding to Commission encouragement to pursue energy-efficient AI practices.
Integration with AI governance frameworks
Many enterprises already operate AI governance councils or ethics boards. The innovation package necessitates updates to charter documents, adding responsibilities for evaluating public funding opportunities, overseeing AI Factory engagements, and monitoring compliance with AI Act obligations. Councils define go/no-go criteria for projects seeking Commission support, requiring evidence of bias testing, explainability, and DSAR readiness before endorsing applications.
To maintain traceability, organisations extend model lifecycle management platforms with metadata capturing EU funding sources, HPC usage logs, and regulatory commitments. This ensures that audits can trace each AI asset back to its supporting programme and confirm that obligations—such as reporting progress to the Commission or providing open access to certain research outputs—are fulfilled.
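The traceability metadata described above might look like the following record; the keys are assumptions for illustration, not a standard schema, and the obligation check shows how an audit can trace an asset directly to its outstanding programme commitments.

```python
# Illustrative metadata extending a model registry entry with EU programme
# traceability fields. Keys and values are invented for the example.
asset_metadata = {
    "model": "grid-forecaster-v2",
    "funding_sources": ["GenAI4EU", "Horizon Europe"],
    "hpc_usage": {"facility": "EuroHPC", "gpu_hours": 10_500},
    "obligations": ["progress report to Commission", "open-access weights"],
}

def open_obligations(record: dict, fulfilled: set[str]) -> list[str]:
    """Obligations recorded against the asset that remain unfulfilled,
    giving auditors a direct trace from asset to programme commitments."""
    return [o for o in record["obligations"] if o not in fulfilled]

pending = open_obligations(asset_metadata, fulfilled={"open-access weights"})
```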
Training and workforce development
The package complements the EU’s Digital Decade targets by highlighting the Deep Tech Talent Initiative, which aims to train one million people in advanced technologies by 2025. Human capital teams align corporate learning plans with these opportunities, encouraging staff to enrol in Commission-sponsored AI courses, EDIH workshops, and EuroHPC training academies. Privacy and compliance officers join the curriculum design process to ensure modules address DSAR obligations, algorithmic accountability, and responsible data use.
Change management programmes target business stakeholders who will consume AI insights generated via Commission-backed projects. Product owners learn to interpret AI model cards, understand limitations, and recognise when human override is required. Customer support teams prepare to answer data subject questions about AI-driven services, reinforcing transparency commitments.
Metrics and reporting
Governance dashboards evolve to capture innovation package participation: number of projects applying for AI Factory access, approval rates, compute hours consumed, and proportion of pilots reaching production. Compliance metrics track completion of DPIAs, DSAR response times for AI-driven services, and audit findings related to Commission funding conditions. Financial reporting monitors grants received, matched funding commitments, and capital expenditures tied to HPC connectivity.
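The dashboard funnel above (applications → approvals → production) reduces to a small aggregation. A toy sketch with invented project records:

```python
# Toy aggregation for the participation metrics described above: counts of
# AI Factory applications, the approval rate, and the share of approved
# pilots that reached production. Records are invented for illustration.
projects = [
    {"name": "p1", "applied": True,  "approved": True,  "in_production": True},
    {"name": "p2", "applied": True,  "approved": True,  "in_production": False},
    {"name": "p3", "applied": True,  "approved": False, "in_production": False},
]

def funnel(records: list[dict]) -> dict:
    applied = sum(r["applied"] for r in records)
    approved = sum(r["approved"] for r in records)
    live = sum(r["in_production"] for r in records)
    return {
        "applications": applied,
        "approval_rate": approved / applied if applied else 0.0,
        "production_rate": live / approved if approved else 0.0,
    }

snapshot = funnel(projects)
```

Compute hours consumed, DPIA completion, and DSAR response times would be further columns on the same records, aggregated the same way.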
Executives also monitor societal impact metrics promoted by the Commission, such as carbon footprint per training run and gender diversity within AI teams. These metrics feed ESG reports and help demonstrate alignment with EU strategic priorities.
Next steps
In the short term, organisations should watch the legislative process for the EuroHPC amendment, anticipated to conclude in 2024, and prepare to respond to calls for proposals as soon as they open. They must coordinate with national contact points for Horizon Europe and Digital Europe to stay informed about eligibility criteria. Simultaneously, privacy, legal, and security teams should draft reusable templates—DPIA addenda, DSAR response language, AI oversight checklists—that accelerate compliance when new AI projects launch under the Commission’s auspices.
By proactively aligning governance, implementation, and DSAR operations with the AI innovation package, enterprises can harness EU support for cutting-edge AI while safeguarding individual rights and maintaining regulator trust.