AI Briefing — EU proposes AI Liability Directive
The European Commission proposed an AI Liability Directive on 28 September 2022 to make it easier for people harmed by AI systems to seek compensation, introducing rebuttable presumptions of causality and disclosure obligations for high-risk systems.
Published alongside a revision of the EU's product liability rules, the proposal lowers evidentiary barriers for claimants harmed by AI-enabled products or services. It creates rebuttable presumptions of causality where providers withhold technical evidence or fail to meet transparency obligations.
Member States would need to transpose the directive into national law, aligning fault-based civil liability with the AI Act's risk tiers. Providers of high-risk systems could be compelled to disclose technical documentation during litigation, increasing the compliance burden on organizations deploying automated decision-making in critical domains.
Product, legal, and data science teams should review logging, model documentation, and human oversight controls to prepare for discovery requests. Companies operating in the EU will need to map AI use cases against the AI Act and anticipate liability exposure if safety, governance, or transparency duties lapse.
- European Commission press release summarizes the rationale and expected impact of the directive.
- Official proposal COM(2022) 496 details scope, burden-of-proof changes, and interactions with the AI Act.