Regulatory landscape
Compliance teams cannot treat AI governance as a single jurisdictional project. The EU, U.S., and Singapore regimes establish overlapping—but distinct—expectations for risk classification, conformity assessments, documentation, and transparency. Understanding the baseline requirements, their enforcement timelines, and the supervisory authorities involved is the starting point for a durable governance roadmap.
European Union: AI Act obligations
The AI Act establishes four risk tiers (unacceptable, high, limited, and minimal) and enforces them through a mix of prohibitions, mandatory controls, and transparency requirements [Regulation (EU) 2024/1689]. Article 5 prohibits categories such as social scoring by public authorities, predictive policing based on profiling, and biometric categorisation that infers sensitive traits. Article 9 requires providers of high-risk AI systems (Annex III) to implement a risk management system that spans design, development, and post-deployment monitoring. Articles 10 through 15 mandate high-quality data governance, technical documentation, record-keeping, transparency, human oversight, robustness, accuracy, and cybersecurity safeguards. Article 53 adds general-purpose AI obligations, including technical documentation and energy-use disclosures, and Article 55 layers model evaluations, adversarial testing, and incident tracking onto models that pose systemic risk.
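Many teams codify this tiering as a first-pass triage that routes use cases to legal review. The sketch below is a minimal illustration in Python; the keyword screens and the UseCase fields are assumptions for the example, not the statutory test, which turns on the precise wording of Article 5 and Annex III.

```python
# Minimal sketch: triaging an AI use case against the EU AI Act's four risk
# tiers. The tier labels come from the Act; the keyword screens and the
# UseCase record are illustrative assumptions, not the legal test.
from dataclasses import dataclass
from enum import Enum


class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"   # Article 5 prohibited practices
    HIGH = "high"                   # Annex III / Article 6 high-risk systems
    LIMITED = "limited"             # transparency duties (e.g. chatbots)
    MINIMAL = "minimal"             # no mandatory controls beyond good practice


@dataclass
class UseCase:
    name: str
    purpose: str                    # free-text description used for screening
    interacts_with_humans: bool


# Hypothetical keyword screens; a real triage maps each use case to specific
# Article 5 prohibitions and Annex III categories with legal review.
PROHIBITED_MARKERS = ("social scoring", "predictive policing", "biometric categorisation")
ANNEX_III_MARKERS = ("credit scoring", "recruitment", "critical infrastructure", "education")


def triage(use_case: UseCase) -> RiskTier:
    """First-pass classification to route a use case to legal review."""
    text = use_case.purpose.lower()
    if any(marker in text for marker in PROHIBITED_MARKERS):
        return RiskTier.UNACCEPTABLE
    if any(marker in text for marker in ANNEX_III_MARKERS):
        return RiskTier.HIGH
    if use_case.interacts_with_humans:
        return RiskTier.LIMITED     # Article 50 transparency duties may apply
    return RiskTier.MINIMAL


print(triage(UseCase("loan-model", "credit scoring for consumer loans", False)))
# RiskTier.HIGH
```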
Implementation timing is staggered. Prohibited practices became enforceable on 2 February 2025, and general-purpose AI transparency and systemic-risk mitigation duties followed on 2 August 2025. Obligations for high-risk systems listed in Annex III apply from 2 August 2026, while high-risk systems embedded in products covered by Annex I harmonisation legislation have until 2 August 2027 [Regulation (EU) 2024/1689]. Providers must prepare for conformity assessments under Articles 43 and 44, using harmonised standards or common specifications once adopted. The European AI Office will coordinate cross-border enforcement, publish templates for transparency obligations, and maintain the EU database for high-risk AI systems. Supervisory authorities in member states retain audit powers and can request technical documentation, training data descriptions, and logs on demand.
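A compliance calendar can be derived mechanically from these milestones. The sketch below hardcodes the dates above into an illustrative lookup; the obligation labels are shorthand for this example, not the Act's own terminology.

```python
# Minimal sketch: checking which AI Act obligation sets are live at a given
# date, using the staggered milestones described above.
from datetime import date

AI_ACT_MILESTONES = {
    date(2025, 2, 2): "Article 5 prohibitions enforceable",
    date(2025, 8, 2): "GPAI transparency and systemic-risk duties",
    date(2026, 8, 2): "High-risk obligations for Annex III systems",
    date(2027, 8, 2): "High-risk obligations for Annex I product-embedded systems",
}


def live_obligations(as_of: date) -> list[str]:
    """Return every obligation set whose start date has passed."""
    return [label for start, label in sorted(AI_ACT_MILESTONES.items()) if start <= as_of]


for label in live_obligations(date(2026, 1, 1)):
    print(label)
# Article 5 prohibitions enforceable
# GPAI transparency and systemic-risk duties
```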
Organisations deploying AI in the EU must therefore identify whether they act as “providers”, “deployers”, “importers”, or “distributors” under Article 3 definitions [Regulation (EU) 2024/1689]. Providers bear the broadest obligations, including quality management systems, technical documentation, CE marking, and post-market monitoring. Deployers must ensure human oversight, operate systems according to the provider’s instructions, retain logs, and report malfunctions and incidents to the provider. Importers and distributors must verify that CE-marked systems remain compliant and that instructions are available in the appropriate language. Cross-functional mapping between AI systems and these roles ensures the right evidence is prepared for each supervisory request.
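One way to make that role mapping auditable is to encode it as data so evidence owners can be assigned per system. The sketch below summarises the duties above in a hypothetical Python mapping; the duty strings are shorthand, not statutory text.

```python
# Minimal sketch: mapping each Article 3 operator role to its headline duties.
# Duty strings are shorthand summaries of the paragraph above.
ROLE_OBLIGATIONS: dict[str, list[str]] = {
    "provider": [
        "quality management system",
        "technical documentation",
        "conformity assessment and CE marking",
        "post-market monitoring",
    ],
    "deployer": [
        "human oversight",
        "log retention",
        "operate per instructions and report incidents to the provider",
    ],
    "importer": ["verify CE marking and documentation before placing on market"],
    "distributor": ["verify CE marking and language-appropriate instructions"],
}


def evidence_checklist(roles: list[str]) -> dict[str, list[str]]:
    """Collect the duties owed for one AI system across every role held."""
    return {role: ROLE_OBLIGATIONS[role] for role in roles}


# An organisation that both builds and operates a hiring screener holds two roles.
print(evidence_checklist(["provider", "deployer"]))
```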
2025 general-purpose AI enforcement window
The first EU AI Act enforcement milestone for general-purpose AI (GPAI) providers arrives on 2 August 2025, when Article 53 transparency, systemic risk mitigation, and technical documentation duties become mandatory. Providers must publish system cards that describe model capabilities, limitations, energy usage, and foreseeable misuses while supplying regulators with training and evaluation documentation [Regulation (EU) 2024/1689; European Commission GPAI system card guidance].
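Because the Commission’s final templates are still pending, some providers stage system cards as structured data that can be re-rendered once the format lands. The sketch below uses assumed field names drawn from the disclosure items listed above, not the official template schema.

```python
# Minimal sketch: a GPAI system card as structured data so one record can
# feed both the EU transparency disclosure and internal documentation.
# Field names are assumptions pending the Commission's final templates.
import json
from dataclasses import asdict, dataclass, field


@dataclass
class SystemCard:
    model_name: str
    capabilities: list[str]
    limitations: list[str]
    foreseeable_misuses: list[str]
    energy_kwh_training: float          # estimated training energy use
    evaluation_summaries: list[str] = field(default_factory=list)


card = SystemCard(
    model_name="acme-gpai-1",
    capabilities=["text summarisation", "code completion"],
    limitations=["no knowledge of post-training events"],
    foreseeable_misuses=["automated disinformation at scale"],
    energy_kwh_training=1.2e6,
)
print(json.dumps(asdict(card), indent=2))  # disclosure-ready JSON
```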
European AI Office implementation notices emphasise that GPAI providers should operationalise Article 73 serious-incident reporting alongside transparency measures. The Commission’s consultation closing 7 November 2025 introduced draft reporting templates, 24-hour notification expectations, and post-incident remediation evidence requirements that will apply to both high-risk AI deployers and GPAI providers once the regulation is fully in force [Article 73 consultation; European AI Office implementing notice].
- Publish model system cards. Track the Commission’s GPAI templates, align disclosures with ISO/IEC 42001 documentation, and stage collateral for harmonised standards adoption once CEN-CENELEC publishes references [European Commission GPAI system card guidance; ISO/IEC 42001:2023].
- Instrument serious-incident response. Wire monitoring, legal, and policy teams to the Article 73 templates so that 24-hour notifications and the subsequent seven- and thirty-day reports can be filed without delay; the sketch after this list turns that cadence into a deadline calendar [Policy Briefing, November 7, 2025; Article 73 consultation].
- Reconcile global assurance. Map GPAI transparency artefacts to U.S. procurement requirements under OMB M-24-10 and Singapore’s Veritas Toolkit so leadership sees a single evidence chain across jurisdictions [OMB M-24-10; MAS Veritas Toolkit 2.0].
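The incident-response item above translates directly into a deadline calendar. The sketch below assumes the 24-hour/7-day/30-day cadence described in the draft Article 73 templates; final deadlines may differ once the templates are adopted.

```python
# Minimal sketch: deriving the notification calendar for a serious incident
# under the draft Article 73 cadence (24-hour notification, then seven- and
# thirty-day follow-up reports). Cadence is from the draft templates and may
# change in the adopted versions.
from datetime import datetime, timedelta

REPORTING_CADENCE = {
    "initial notification": timedelta(hours=24),
    "interim report": timedelta(days=7),
    "final report": timedelta(days=30),
}


def reporting_deadlines(detected_at: datetime) -> dict[str, datetime]:
    """Map each required filing to its due time from the detection timestamp."""
    return {filing: detected_at + delta for filing, delta in REPORTING_CADENCE.items()}


for filing, due in reporting_deadlines(datetime(2025, 9, 1, 14, 30)).items():
    print(f"{filing}: due {due:%Y-%m-%d %H:%M}")
```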
United States: OMB M-24-10
OMB M-24-10 operationalises Section 4 of Executive Order 14110 by mandating that federal agencies inventory safety-impacting and rights-impacting AI use cases, manage associated risks, and report serious incidents promptly [Executive Order 14110; OMB M-24-10]. Section 4.1 requires every agency to designate a Chief AI Officer (CAIO) with authority over AI governance, coordinate with the agency Chief Information Officer, Chief Data Officer, and Chief Information Security Officer, and submit implementation plans to OMB. Section 5 directs agencies to log AI use cases in the government-wide inventory managed by the General Services Administration, including descriptions of intended purpose, data inputs, model ownership, safeguards, and impact assessments. Section 6 instructs agencies to conduct risk assessments, focusing on safety, civil rights, civil liberties, and privacy, before deploying AI. Section 7 sets incident response expectations: serious AI incidents must be reported to OMB and the National AI Initiative Office within 24 hours, followed by seven- and thirty-day reports [OMB M-24-10].
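Agencies, and vendors mirroring the federal approach, can stage inventory entries as structured records covering the fields the memorandum names. The field names in the sketch below are illustrative assumptions, not the official inventory schema.

```python
# Minimal sketch: an AI use-case inventory entry covering the fields the
# memorandum calls out (purpose, data inputs, ownership, safeguards, impact
# assessment). Field names are illustrative, not the official schema.
from dataclasses import dataclass


@dataclass
class InventoryEntry:
    use_case_id: str
    intended_purpose: str
    data_inputs: list[str]
    model_owner: str                 # accountable office or vendor
    safeguards: list[str]
    rights_impacting: bool           # drives the heightened risk-management duties
    safety_impacting: bool
    impact_assessment_ref: str       # link to the completed assessment


entry = InventoryEntry(
    use_case_id="benefits-triage-001",
    intended_purpose="prioritise benefits claims for human review",
    data_inputs=["claim forms", "eligibility records"],
    model_owner="Office of the CAIO",
    safeguards=["human review of all denials", "quarterly bias testing"],
    rights_impacting=True,
    safety_impacting=False,
    impact_assessment_ref="IA-2025-014",
)
print(entry)
```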
OMB’s memorandum also highlights procurement guardrails. Section 8 requires that contracts for AI systems include performance guarantees, transparency provisions, and access for evaluation. Section 9 emphasises public transparency by directing agencies to publish annual reports describing AI use cases, risk mitigations, and waiver requests. Agencies that cannot fully comply may seek alternative measures via Section 10, but they must demonstrate equal or greater protections [OMB M-24-10].
Private-sector organisations that sell to the U.S. government or align voluntarily with federal guidance should mirror these controls. Maintaining a CAIO-equivalent role, using the government inventory schema, and preparing 24-hour incident reporting workflows will improve procurement readiness and customer confidence [OMB M-24-10]. Zeph Tech’s OMB M-24-10 briefing includes templates for inventory submissions, risk assessment checklists, and contracting clauses.
Singapore: Veritas Toolkit 2.0
The Monetary Authority of Singapore (MAS) launched the Veritas Initiative to translate the nation’s AI governance principles into actionable assessments for the financial sector. Version 2.0 of the Veritas Toolkit, open-sourced in June 2023, expands beyond credit risk to cover wealth management, insurance, and fraud detection scenarios [MAS Veritas Toolkit 2.0]. The toolkit provides quantitative and qualitative fairness metrics, data quality diagnostics, explainability testing, and human oversight controls aligned with Singapore’s Model AI Governance Framework. Institutions are expected to apply these assessments at the model development stage, during pre-deployment reviews, and as part of ongoing monitoring.
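As one concrete example of the quantitative fairness checks the toolkit operationalises, the sketch below computes a demographic parity ratio over model approvals. It illustrates the metric class only; it is not the toolkit’s own API.

```python
# Minimal sketch: a demographic parity ratio over binary approval decisions,
# one of the simplest quantitative fairness metrics used in such assessments.
def demographic_parity_ratio(approved: list[bool], group: list[str],
                             protected: str, reference: str) -> float:
    """Approval rate of the protected group divided by the reference group's."""
    def rate(g: str) -> float:
        decisions = [a for a, grp in zip(approved, group) if grp == g]
        return sum(decisions) / len(decisions)
    return rate(protected) / rate(reference)


# Toy decisions: a ratio below ~0.8 is the classic four-fifths warning level.
approved = [True, False, True, True, False, True, False, False]
group = ["A", "A", "A", "B", "B", "B", "B", "B"]
print(round(demographic_parity_ratio(approved, group, "B", "A"), 2))  # 0.6
```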
MAS supervisors have signalled that regulated entities should document governance arrangements, board oversight, and accountability structures for AI and data analytics solutions. The Veritas Toolkit includes governance playbooks, roles and responsibilities matrices, and incident escalation workflows that align with Singapore’s Model AI Governance Framework and related supervisory guidance [Model AI Governance Framework]. Adopting these templates helps institutions demonstrate that they manage model bias, robustness, and explainability risks proactively. Zeph Tech’s Singapore GenAI governance update summarises the supervisory expectations communicated during 2024 industry briefings.
Multinational organisations should integrate the Veritas assessments into their global governance programs, especially if they operate in regulated financial markets. The toolkit’s scenario-based fairness tests complement the EU AI Act’s Annex III financial services risk categories and provide evidence for U.S. fair lending compliance. Harmonising outputs across jurisdictions reduces duplicative testing and positions teams to respond to MAS, EU, and U.S. regulators with a consistent narrative.
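A simple way to operationalise that harmonisation is a control-to-jurisdiction matrix recording which regimes each evidence artefact satisfies. The sketch below restates this section at a coarse level as an assumed mapping; real mappings would need counsel sign-off before reuse in filings.

```python
# Minimal sketch: a control-to-jurisdiction matrix so one evidence artefact
# can be reused across MAS, EU, and U.S. reviews. Mappings are coarse
# restatements of this section, not vetted legal crosswalks.
EVIDENCE_MAP: dict[str, list[str]] = {
    "fairness testing (Veritas scenarios)": [
        "MAS Veritas Toolkit 2.0",
        "EU AI Act Annex III point 5 (essential services / credit)",
        "U.S. fair lending compliance",
    ],
    "technical documentation": [
        "EU AI Act Articles 11 and 53",
        "OMB M-24-10 inventory submission",
    ],
    "incident reporting": [
        "EU AI Act Article 73",
        "OMB M-24-10 incident reports",
    ],
}


def jurisdictions_for(control: str) -> list[str]:
    """List every regime a single control's evidence can satisfy."""
    return EVIDENCE_MAP.get(control, [])


print(jurisdictions_for("fairness testing (Veritas scenarios)"))
```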