U.S.-EU Trade and Technology Council Advances AI Roadmap — May 16, 2022
The US-EU TTC’s May 2022 commitment to a joint AI roadmap sets shared priorities for trustworthy AI risk management, standards, and measurement, prompting organisations to align governance, documentation, and supplier oversight across both jurisdictions.
Executive briefing: On 16 May 2022, the US-EU Trade and Technology Council (TTC) announced a joint roadmap on evaluation and measurement tools for trustworthy artificial intelligence (AI) and risk management. The initiative sets out a shared vision for AI risk management, standardisation, and research collaboration, building on NIST’s draft AI Risk Management Framework and the EU’s proposed AI Act. Organisations developing or deploying AI systems across both jurisdictions should align governance, documentation, and assurance practices with the roadmap’s priorities to remain competitive and compliant.
Roadmap pillars
The roadmap focuses on four pillars: (1) advancing risk management approaches for trustworthy AI; (2) promoting the development and adoption of international standards; (3) supporting joint research on AI measurement science; and (4) fostering monitoring, evaluation, and risk assessment mechanisms. It emphasises characteristics such as transparency, fairness, accountability, robustness, and privacy. The TTC intends to produce shared tools, taxonomies, and metrics that can be integrated into regulatory and voluntary frameworks on both sides of the Atlantic.
Risk management alignment
The roadmap encourages harmonisation between NIST’s AI Risk Management Framework (AI RMF) and the EU’s AI Act, recommending consistent terminology and risk categorisation. Organisations should map AI system inventories against the AI RMF’s functions (Govern, Map, Measure, Manage) and the EU AI Act’s risk tiers (unacceptable, high-risk, limited, minimal). Develop crosswalks between US and EU documentation requirements, ensuring that conformity assessment artefacts—data sheets, algorithmic impact assessments, model cards—can serve both regimes.
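A machine-readable inventory makes such a crosswalk auditable. The Python sketch below pairs a system’s EU AI Act risk tier with evidence recorded against the four AI RMF functions; the class, field names, artefact paths, and example system are hypothetical illustrations, not schema from either framework.

```python
# Illustrative AI system inventory record cross-referencing the EU AI Act's
# risk tiers with evidence against the NIST AI RMF functions.
from dataclasses import dataclass, field
from enum import Enum


class EUAIActTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH_RISK = "high-risk"
    LIMITED = "limited"
    MINIMAL = "minimal"


@dataclass
class AISystemRecord:
    name: str
    owner: str
    intended_use: str
    eu_ai_act_tier: EUAIActTier
    # Evidence recorded against each AI RMF function; None marks a gap.
    rmf_evidence: dict = field(default_factory=lambda: {
        "Govern": None, "Map": None, "Measure": None, "Manage": None,
    })
    artefacts: list = field(default_factory=list)  # shared documentation


record = AISystemRecord(
    name="credit-scoring-v3",
    owner="risk-analytics",
    intended_use="consumer credit eligibility scoring with human review",
    eu_ai_act_tier=EUAIActTier.HIGH_RISK,
)
record.rmf_evidence["Map"] = "assessments/credit-scoring-impact-2022.pdf"
record.artefacts.append("model-cards/credit-scoring-v3.md")

# Gaps against the AI RMF surface directly from the inventory.
gaps = [fn for fn, ev in record.rmf_evidence.items() if ev is None]
print(f"{record.name}: tier={record.eu_ai_act_tier.value}, RMF gaps={gaps}")
```

One inventory entry can then feed both an AI RMF profile and an EU conformity assessment file, which is the point of the crosswalk.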
Implement governance structures that oversee AI lifecycle risk. Establish AI ethics committees or responsible AI councils with representation from legal, compliance, engineering, product, and affected stakeholders. Document policies covering data governance, model development, validation, monitoring, and incident response. Align these policies with sector-specific rules (financial services model risk management, healthcare safety standards) and with broader ESG commitments.
Standards and interoperability
The TTC roadmap calls for joint leadership in international standards bodies (ISO/IEC JTC 1/SC 42, IEEE, ETSI). Organisations should monitor emerging standards on AI risk management, bias mitigation, robustness testing, and explainability. Participate in standards development or industry consortia to influence requirements and anticipate adoption timelines. Align internal taxonomies with standards such as ISO/IEC TR 24028 (trustworthiness), ISO/IEC 23894 (risk management), and the IEEE 7000-series guidelines.
Interoperability extends to datasets and benchmarks. The roadmap envisions transatlantic collaboration on open datasets, synthetic data generation, and privacy-preserving techniques. Evaluate data governance practices, ensuring lawful bases for cross-border data sharing, anonymisation standards, and federated learning capabilities. Incorporate privacy-enhancing technologies (differential privacy, homomorphic encryption) into model pipelines when handling sensitive data.
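As one concrete example of a privacy-enhancing technique, the sketch below implements the Laplace mechanism for a differentially private counting query. The epsilon value and sample data are arbitrary; production pipelines should use a vetted differential privacy library and a formal privacy-budget analysis.

```python
# Sketch of the Laplace mechanism for a differentially private count.
# Epsilon and the data are illustrative only.
import math
import random


def laplace_noise(scale: float) -> float:
    # Inverse-CDF sampling of a Laplace(0, scale) distribution.
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))


def dp_count(values, predicate, epsilon: float) -> float:
    # A counting query has L1 sensitivity 1 (one record changes the count
    # by at most 1), so Laplace noise with scale 1/epsilon suffices.
    true_count = sum(1 for v in values if predicate(v))
    return true_count + laplace_noise(1.0 / epsilon)


ages = [34, 29, 41, 55, 38, 47, 62]
noisy_over_40 = dp_count(ages, lambda a: a > 40, epsilon=0.5)
print(f"Noisy count of records over 40: {noisy_over_40:.1f}")
```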
Measurement and assurance
The TTC commits to developing shared evaluation tools for AI system characteristics, including robustness, bias, and interpretability. Organisations should adopt multi-metric evaluation frameworks, combining statistical tests, adversarial robustness assessments, and human-in-the-loop reviews. Build model validation environments that simulate real-world conditions, measure drift, and detect unintended outcomes. Maintain traceability of training data, feature engineering, and model versions to support audits.
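One lightweight way to measure drift is the Population Stability Index (PSI), which compares training-time and live distributions. The sketch below is a minimal version, with illustrative data and the conventional 0.2 alert threshold rather than any roadmap-mandated values.

```python
# Minimal PSI drift check between a reference and a live distribution.
import math
from typing import Sequence


def psi(expected: Sequence[float], actual: Sequence[float], bins: int = 10) -> float:
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def proportions(values: Sequence[float]) -> list:
        counts = [0] * bins
        for v in values:
            counts[min(int((v - lo) / width), bins - 1)] += 1
        # Smooth empty bins so the log term stays finite.
        return [(c + 0.5) / (len(values) + 0.5 * bins) for c in counts]

    e, a = proportions(expected), proportions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))


training_scores = [0.10, 0.22, 0.25, 0.41, 0.50, 0.58, 0.66, 0.80]
live_scores = [0.31, 0.44, 0.52, 0.63, 0.70, 0.74, 0.86, 0.90]
if psi(training_scores, live_scores) > 0.2:  # common rule-of-thumb alert level
    print("Drift alert: route to model governance review")
```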
Implement continuous monitoring for deployed models. Track performance degradation, fairness metrics across protected attributes, and anomaly rates. Integrate monitoring outputs with incident response procedures to pause or roll back models when thresholds are breached. Document corrective actions and feed lessons learned into model governance committees.
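A minimal sketch of such a gate follows, assuming demographic parity difference as the fairness metric and an illustrative 0.1 tolerance; real thresholds belong in documented policy, and the pause branch would invoke the organisation’s actual incident-response runbook.

```python
# Fairness monitoring gate with an illustrative parity threshold.
def demographic_parity_gap(positive_rates: dict) -> float:
    # positive_rates maps each protected group to its positive-outcome rate.
    rates = list(positive_rates.values())
    return max(rates) - min(rates)


def monitoring_gate(positive_rates: dict, threshold: float = 0.1) -> str:
    gap = demographic_parity_gap(positive_rates)
    if gap > threshold:
        # In production this branch would trigger incident response:
        # pause or roll back the model and notify the governance committee.
        return f"PAUSE: parity gap {gap:.2f} exceeds threshold {threshold}"
    return f"OK: parity gap {gap:.2f} within threshold {threshold}"


print(monitoring_gate({"group_a": 0.62, "group_b": 0.48}))
```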
Documentation and transparency
The roadmap underscores the importance of documentation artefacts. Develop model cards, data sheets, and system impact assessments that describe intended use, performance, limitations, and risk mitigation measures. Ensure documentation is accessible to regulators, customers, and internal auditors. For high-risk AI systems under the EU AI Act, prepare conformity assessment files containing risk management processes, data governance controls, testing results, and post-market monitoring plans.
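Documentation stays current more easily when it is machine-readable. The sketch below renders a model card as a Python dataclass serialised to JSON; the schema and values are illustrative and do not represent a mandated format under either regime.

```python
# Illustrative machine-readable model card; field names are hypothetical.
from dataclasses import dataclass, asdict
import json


@dataclass
class ModelCard:
    model_name: str
    version: str
    intended_use: str
    out_of_scope_uses: list
    performance: dict          # metric name -> value on the evaluation set
    limitations: list
    risk_mitigations: list


card = ModelCard(
    model_name="credit-scoring",
    version="3.1.0",
    intended_use="consumer credit eligibility scoring with human review",
    out_of_scope_uses=["employment screening"],
    performance={"auc": 0.87, "demographic_parity_gap": 0.04},
    limitations=["trained on 2019-2021 applications; recalibrate annually"],
    risk_mitigations=["quarterly bias audit", "appeal and human-override path"],
)

# Serialise once, share with regulators, customers, and internal auditors.
print(json.dumps(asdict(card), indent=2))
```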
Transparency also extends to user communication. Provide clear disclosures when AI systems influence decisions impacting individuals (credit, employment, healthcare). Offer appeal mechanisms and human oversight to comply with EU AI Act requirements and US sectoral regulations.
Joint research and innovation
The roadmap identifies opportunities for collaborative research on measurement science, benchmarking, and cybersecurity for AI. Organisations can partner with academia and research labs participating in TTC initiatives to pilot new evaluation techniques. Engage with NIST and EU research calls, contributing datasets or participating in challenge programmes to stress-test AI systems.
Invest in R&D for trustworthy AI tooling—bias detection, explainable AI, formal verification, secure multi-party computation. Align investments with the roadmap’s focus areas to benefit from transatlantic knowledge sharing and potential funding opportunities.
Compliance and regulatory outlook
Although the roadmap is non-binding, it signals convergence of US and EU expectations. Monitor developments in the EU AI Act, including obligations for high-risk systems (risk management, data governance, technical documentation, human oversight) and potential bans on unacceptable practices. In the US, track the NIST AI RMF release, sectoral guidance (FTC, CFPB, EEOC, DOJ statements on algorithmic fairness), and state-level regulations. Organisations operating globally should design compliance programmes that satisfy the strictest requirements to streamline product deployment.
Prepare for audits by building repositories of evidence: training data provenance, validation results, bias mitigation steps, and monitoring dashboards. Establish escalation paths for regulatory inquiries and customer assurance requests, including responses to due diligence questionnaires and contractual obligations.
Implementation roadmap for organisations
Near term (0–6 months): Conduct an AI system inventory, classify models by risk, and benchmark current governance against NIST AI RMF and EU AI Act expectations. Identify gaps in documentation, monitoring, and human oversight. Launch training programmes for developers and product teams on trustworthy AI principles.
Medium term (6–18 months): Build or refine responsible AI frameworks, including standard operating procedures for data collection, model evaluation, and deployment approvals. Invest in tooling for bias testing, explainability, and model monitoring. Engage with standards bodies and industry groups aligned with the TTC roadmap to stay informed about emerging requirements.
Long term (18+ months): Embed trustworthy AI practices into enterprise risk management and audit cycles. Establish metrics for board reporting (number of high-risk AI systems, audit findings, corrective actions). Participate in TTC pilots or research collaborations, contributing to shared measurement tools and benefiting from harmonised best practices.
Sourcing and ecosystem considerations
Evaluate AI vendors and partners for alignment with the roadmap. Update procurement questionnaires to assess suppliers’ risk management processes, documentation artefacts, and monitoring capabilities. Include contractual clauses requiring transparency, audit access, and adherence to emerging standards. For cloud AI platforms, confirm availability of features supporting responsible AI (bias dashboards, explainability APIs, model governance workflows).
Coordinate with legal and privacy teams to ensure data processing agreements and cross-border transfer mechanisms (Standard Contractual Clauses, Binding Corporate Rules) support transatlantic AI projects. Implement data localisation strategies where necessary while leveraging federated analytics to minimise transfers.
Stakeholder engagement
Engage internal stakeholders—executives, developers, compliance, customer success—to align on trustworthy AI objectives. Provide external communications to customers and regulators highlighting adherence to TTC roadmap principles. Participate in public consultations and workshops hosted by the TTC, NIST, or the European Commission to shape future guidance.
The TTC AI roadmap signals a coordinated transatlantic approach to trustworthy AI. Organisations that integrate these priorities into governance, standards engagement, and product development will be better positioned to meet evolving regulatory expectations and market demands.