
AI Act compliance update

The AI Act sets comprehensive, risk-based obligations for high‑risk AI systems and GPAI providers; this update summarises recent guidelines, proposed simplifications and key deadlines, with practical recommendations for compliance.


Context and recent developments

The EU’s Artificial Intelligence (AI) Act is the world’s first comprehensive law regulating AI. It entered into force on 1 August 2024 and will be fully applicable on 2 August 2026, but certain obligations come into effect earlier: prohibited AI practices and AI literacy requirements apply from 2 February 2025, and general-purpose AI (GPAI) transparency obligations and governance rules apply from 2 August 2025. The law adopts a risk‑based approach, imposing stringent requirements on providers of high‑risk AI systems and minimal obligations for low‑risk uses. In July 2025 the Commission released guidelines and a voluntary code of practice for GPAI providers, along with a template for disclosing training‑data characteristics, to clarify obligations and reduce administrative burden. In November 2025 the Commission proposed a Digital Package on Simplification to adjust and streamline implementation; this package centralises oversight through an AI Office, simplifies obligations for small and mid‑sized companies, and introduces regulatory sandboxes and real‑world testing to support innovation.

Rights and obligations

The AI Act sets out different obligations depending on risk levels. High‑risk AI systems (e.g., biometric identification, critical infrastructure management, employment and education decisions) must comply with strict requirements: providers must conduct risk management and impact assessments, ensure the quality and representativeness of data sets, implement logging and record‑keeping, prepare detailed technical documentation, ensure human oversight, and design systems to be robust, secure and resilient. Users of high‑risk systems must adopt appropriate risk management measures and monitor system performance. Some sectors (regulated products under Annex I) are given longer to comply: high‑risk AI embedded in products subject to existing EU safety legislation has until 2 August 2027. Under the July 2025 GPAI guidance, providers of general‑purpose AI models must publish comprehensive summaries of the content used to train their models, implement policies for detecting and mitigating system misuse, and offer technical documentation for downstream deployers.
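The logging and record‑keeping duty described above can be sketched as a minimal append‑only audit trail. The schema below (field names, the `log_decision` helper, the log path) is entirely illustrative; the Act requires logging and human oversight but does not prescribe this format:

```python
import json
import time
import uuid

def log_decision(system_id, input_summary, output, overseer=None,
                 path="ai_audit.log"):
    """Append one structured audit record for a high-risk AI decision.

    Hypothetical schema: the Act mandates record-keeping, not these
    exact fields."""
    record = {
        "event_id": str(uuid.uuid4()),
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "system_id": system_id,
        # Store a redacted summary, not raw personal data.
        "input_summary": input_summary,
        "output": output,
        # Naming an overseer supports the human-oversight requirement.
        "human_overseer": overseer,
    }
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
    return record
```

An append‑only, timestamped line format like this makes it straightforward to hand auditors a tamper‑evident chronology without exposing the underlying data sets.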

Implementation challenges and proposed amendments

The Commission’s Digital Package on Simplification proposes readiness‑based application of the high‑risk rules: obligations for Annex III systems would apply six months after the Commission confirms that necessary standards and guidelines are in place or, at the latest, by 2 December 2027; for Annex I systems, the deadline becomes 12 months after the decision or 2 August 2028. Generative AI providers already on the market before the AI Act’s full application have a six‑month grace period (until 2 February 2027) to comply with detectability and watermarking obligations. The proposal also expands privileges for small and mid‑cap companies, eliminates mandatory AI literacy obligations for all operators, and reduces some registration requirements. Centralised oversight by the AI Office and enhanced cooperation with fundamental rights authorities are introduced; the Office is responsible for supervising certain GPAI systems and very large online platforms. To streamline conformity assessment and encourage innovation, new provisions allow for regulatory sandboxes, real‑world testing, and centralised application processes for notified bodies.
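The readiness‑based deadline for Annex III systems reduces to simple date arithmetic: the earlier of six months after the Commission’s confirmation and the 2 December 2027 backstop. A sketch under that reading (the function name and month‑clamping helper are our own):

```python
import calendar
from datetime import date

def add_months(d, months):
    """Shift a date forward by whole months, clamping the day
    to the target month's length (e.g. 31 Aug + 6 -> 28/29 Feb)."""
    month = d.month - 1 + months
    year = d.year + month // 12
    month = month % 12 + 1
    day = min(d.day, calendar.monthrange(year, month)[1])
    return date(year, month, day)

def annex_iii_application_date(standards_confirmed):
    """Earlier of (confirmation + 6 months) and the 2 Dec 2027 backstop.

    Pass None while no confirmation decision exists: the backstop
    then governs."""
    backstop = date(2027, 12, 2)
    if standards_confirmed is None:
        return backstop
    return min(add_months(standards_confirmed, 6), backstop)
```

For example, a confirmation on 1 September 2026 would put the Annex III obligations into effect on 1 March 2027, well before the backstop; a confirmation after June 2027 would leave the 2 December 2027 backstop controlling.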

Implications and recommended actions

For organisations developing or deploying AI, the AI Act and its amendments mean that compliance planning must start immediately. Providers should map their AI systems to the Act’s risk categories and identify which obligations apply, focusing on high‑risk areas such as biometric identification, recruitment, healthcare and education. Developers of general‑purpose AI models must prepare comprehensive training‑data summaries, implement risk management policies, and collaborate with users to ensure downstream compliance. The proposed Digital Package introduces flexibility by tying deadlines to the availability of standards; however, companies should monitor Commission decisions and national transpositions closely, because obligations may commence as early as six months after standards are ready. Prohibited practices—such as social scoring, subliminal manipulation and exploitation of vulnerabilities—must be eliminated immediately. Organisations should invest in AI literacy training for staff, even if it becomes non‑mandatory, to support responsible deployment. Under the amendments, SMEs may benefit from simplified obligations and increased access to regulatory sandboxes and real‑world testing programmes.
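The mapping exercise above can start as a simple inventory triage. The category sets and the `classify` helper below are illustrative shorthand for Annex III areas and Article 5 prohibitions; real classification requires legal analysis, not a lookup table:

```python
# Abbreviated, hypothetical labels for illustration only.
HIGH_RISK_AREAS = {
    "biometric_identification",
    "critical_infrastructure",
    "employment",
    "education",
    "healthcare",
}

PROHIBITED_PRACTICES = {
    "social_scoring",
    "subliminal_manipulation",
    "exploitation_of_vulnerabilities",
}

def classify(system):
    """Return a coarse risk bucket for an inventoried AI system.

    `system` is a dict like {"name": ..., "use_case": ...};
    prohibited uses must be flagged before high-risk checks."""
    use_case = system["use_case"]
    if use_case in PROHIBITED_PRACTICES:
        return "prohibited"
    if use_case in HIGH_RISK_AREAS:
        return "high-risk"
    return "minimal-risk"
```

Even a crude first pass like this surfaces the systems needing immediate attention: anything landing in "prohibited" must stop, and "high-risk" entries drive the documentation and oversight workstreams.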

Zeph Tech analysis

The AI Act sets a global precedent for regulating AI technologies. While the risk‑based approach provides clarity, the evolving implementation timelines and amendments create uncertainty. Providers and deployers should not wait for final standards; early adoption of robust risk management frameworks, documentation practices and transparency measures will position organisations for compliance and reduce business disruption. Zeph Tech can play a crucial role by producing briefs and guides that translate complex regulatory requirements into practical technical checklists, helping engineers implement data‑quality controls, robust logging and human‑in‑the‑loop mechanisms. We should monitor the Commission’s AI Office for guidance and prepare for upcoming codes of practice on AI‑generated content and transparency. By engaging with regulatory sandboxes and real‑world testing programmes early, companies can influence standards, demonstrate compliance and gain competitive advantage.
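One of the mechanisms mentioned above, a human‑in‑the‑loop gate, reduces to a confidence check that routes uncertain outputs to a reviewer. The threshold, names and return convention here are our own illustration, not a mandated design:

```python
def review_if_needed(prediction, confidence, threshold=0.8):
    """Route low-confidence AI outputs to a human reviewer.

    A minimal human-in-the-loop gate; the 0.8 threshold is an
    assumed operating point, to be tuned per system."""
    if confidence >= threshold:
        return prediction, "automated"
    # In a real deployment this branch would enqueue the case
    # for a named reviewer and log the hand-off.
    return prediction, "needs_human_review"
```

Pairing a gate like this with the audit logging discussed earlier gives engineers concrete evidence that human oversight is exercised, not merely documented.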

Implementation timeline

Organizations should establish clear milestones for addressing the requirements introduced by this development. Key phases typically include:

  • Immediate (0-30 days): Conduct gap analysis comparing current capabilities against new requirements. Brief executive leadership and board members on obligations and potential compliance paths. Identify internal stakeholders who will own implementation workstreams.
  • Near-term (1-3 months): Update policies, procedures, and technical controls to align with new standards. Designate accountable roles and begin staff training. Engage external advisors where specialized expertise is required.
  • Medium-term (3-12 months): Complete implementation of required changes, conduct internal audits, and establish ongoing monitoring mechanisms. Document lessons learned and refine processes based on initial operational experience.
  • Long-term (12+ months): Integrate requirements into regular compliance cycles, update vendor contracts, and participate in industry working groups to track evolving interpretations. Plan for periodic reassessments as regulatory guidance matures.
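The phase windows above can be turned into concrete calendar deadlines once a programme start date is fixed. The day boundaries mirror the list; the structure and function name are illustrative:

```python
from datetime import date, timedelta

# (phase, start_day, end_day) windows mirroring the milestones above;
# None marks the open-ended long-term phase.
PHASES = [
    ("immediate", 0, 30),
    ("near-term", 30, 90),
    ("medium-term", 90, 365),
    ("long-term", 365, None),
]

def phase_deadlines(start):
    """Map each phase to its completion deadline from a start date."""
    return {
        name: start + timedelta(days=end) if end is not None else None
        for name, _, end in PHASES
    }
```

Anchoring these windows to the relevant regulatory date (e.g. a Commission readiness decision) rather than an arbitrary kickoff keeps internal milestones aligned with external deadlines.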

Organizations with mature governance programs may accelerate these timelines by leveraging existing control frameworks and cross-functional teams. Those building capabilities from scratch should budget additional time for foundational work and stakeholder alignment.

Compliance considerations

Legal and compliance teams should assess how this development interacts with other regulatory obligations. Key areas to evaluate include:

  • Regulatory overlap: Identify where requirements overlap with existing frameworks (e.g., data protection laws, sector-specific regulations) and establish unified control implementations. Map common controls to reduce duplication and streamline audit evidence collection.
  • Documentation requirements: Determine what evidence will satisfy auditors and regulators. Develop templates for required documentation and establish retention policies. Implement version control and change management procedures for compliance artifacts.
  • Third-party assurance: Evaluate whether external certifications or attestations will strengthen compliance posture and facilitate customer trust. Consider industry-recognized frameworks that provide portable evidence across multiple regulatory contexts.
  • Cross-border implications: For multinational organizations, assess how requirements apply across different jurisdictions and whether harmonized or jurisdiction-specific approaches are necessary. Monitor regulatory cooperation agreements that may affect enforcement coordination.

Regular consultation with external counsel may be warranted as enforcement practices and regulatory guidance evolve. Organizations should establish clear escalation paths for novel compliance questions that arise during implementation.

Future outlook and considerations

Organizations should monitor developments in this area and prepare for potential evolution of requirements, practices, or technologies. Understanding the broader trajectory helps inform strategic planning and investment decisions.

Industry engagement through working groups, standards bodies, and peer networks provides early insight into emerging expectations and best practices. Active participation can influence outcomes and ensure organizational interests are considered in future developments.

