
EU proposes AI Liability Directive

The European Commission proposed an AI Liability Directive on 28 September 2022 to make it easier for people harmed by AI systems to seek compensation, introducing rebuttable presumptions of causality and disclosure obligations for high-risk systems.



Proposal Context and Objectives

The European Commission published its proposal for an AI Liability Directive on 28 September 2022, addressing civil liability rules for damages caused by artificial intelligence systems. The proposal complements the AI Act's regulatory requirements with liability mechanisms that ensure injured parties can obtain compensation when AI systems cause harm.

Recognizing that AI characteristics such as opacity, complexity, and autonomy create evidentiary challenges under existing fault-based liability frameworks, the directive introduces procedural innovations that rebalance burdens between AI deployers and injured parties. The proposal forms part of the Commission's comprehensive approach to AI governance, combining ex ante regulation with ex post liability mechanisms.

Rebuttable Presumption of Causality

A central innovation establishes a rebuttable presumption of a causal link between the defendant's fault and the AI system's output when certain conditions are met. Courts can presume causality if the plaintiff shows that the defendant failed to comply with a relevant duty of care, such as an AI Act requirement, and that this non-compliance made the harm more likely. Defendants can rebut the presumption by demonstrating alternative causes for the harm. This mechanism addresses evidentiary difficulties that often defeat AI-related claims under general fault-based liability rules, where plaintiffs struggle to show how specific AI system behaviors caused their injuries.
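The burden-shifting structure of the presumption can be made explicit as a small conditional model. The sketch below is purely illustrative: the class and field names are hypothetical, and real courts weigh evidence rather than booleans, but it captures the trigger-then-rebut logic described above.

```python
from dataclasses import dataclass


@dataclass
class Claim:
    """Hypothetical summary of the facts a court would weigh (illustrative only)."""
    breached_duty_of_care: bool         # e.g. non-compliance with an AI Act obligation
    breach_made_harm_more_likely: bool  # the breach plausibly influenced the harmful output
    alternative_cause_shown: bool       # rebuttal evidence offered by the defendant


def causality_presumed(claim: Claim) -> bool:
    """Return True when a rebuttable presumption of causality would stand.

    The presumption is triggered by fault plus a plausible link to the harm,
    and it falls away if the defendant demonstrates an alternative cause.
    """
    triggered = claim.breached_duty_of_care and claim.breach_made_harm_more_likely
    return triggered and not claim.alternative_cause_shown
```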

Disclosure of Evidence Obligations

The directive enables courts to order AI providers and deployers to disclose relevant evidence about their AI systems when plaintiffs present facts sufficient to support the plausibility of a claim for damages. This addresses information asymmetries in which injured parties cannot access the technical details about AI system functioning needed to substantiate claims. Disclosure orders remain subject to proportionality assessments and to protections for confidential business information, trade secrets, and legal privilege. The mechanism gives courts tools for obtaining evidence while maintaining safeguards against fishing expeditions and the inappropriate disclosure of competitive information.
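Read procedurally, the disclosure mechanism amounts to two sequential gates, plausibility and proportionality, with confidentiality carve-outs applied to whatever survives. The sketch below is an assumption-laden simplification: all names are invented, and it flattens judicial discretion (including protective measures for trade secrets) into boolean checks and a filter.

```python
from dataclasses import dataclass, field


@dataclass
class DisclosureRequest:
    """Hypothetical model of a request for evidence about an AI system."""
    claim_is_plausible: bool      # plaintiff presented facts supporting the claim
    scope_is_proportionate: bool  # request limited to what is necessary
    requested_items: list[str] = field(default_factory=list)
    protected_items: set[str] = field(default_factory=set)  # trade secrets, privileged material


def order_disclosure(req: DisclosureRequest) -> list[str]:
    """Issue an order only if both gates pass; protected items are held back.

    Simplification: a real court might disclose protected material under
    protective measures rather than withhold it entirely.
    """
    if not (req.claim_is_plausible and req.scope_is_proportionate):
        return []  # no order issues
    return [item for item in req.requested_items if item not in req.protected_items]
```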

Relationship with AI Act Requirements

The liability directive is closely coupled to AI Act compliance, effectively treating regulatory violations as evidence of fault for liability purposes. High-risk AI system operators who fail to meet AI Act obligations face heightened liability exposure under the presumption mechanisms. This linkage incentivizes AI Act compliance beyond regulatory enforcement, since deployers also face civil liability consequences for failing to implement required risk management, transparency, human oversight, and quality management measures. Legal teams should coordinate AI Act compliance and liability risk management given these interdependencies.

Scope and Application

The directive applies to civil law claims against persons deploying AI systems in the course of their professional activities, covering both physical injuries and damage to property. It applies regardless of an AI system's risk classification under the AI Act, though high-risk systems are subject to stronger presumption mechanisms. The directive complements rather than replaces existing EU product liability rules, which apply to defective products including AI components. Under this dual framework, AI-related damage may give rise to claims under both strict product liability and fault-based rules depending on the circumstances, so affected organizations should understand both pathways and their distinct requirements.

National Transposition and Harmonization

As a directive rather than a regulation, the AI Liability Directive requires national transposition by member states, potentially creating variation in implementation across jurisdictions. The Commission chose the directive form to allow integration with existing national fault-based liability frameworks, which differ across member states. However, this approach may result in divergent national implementations that complicate compliance for organizations operating across multiple EU jurisdictions. Following transposition deadlines, legal teams should monitor national implementing legislation to understand jurisdiction-specific requirements and procedural variations.

Insurance and Risk Management Implications

The directive's liability framework has significant implications for AI-related insurance products and enterprise risk management practices. Insurance carriers may develop AI-specific coverage addressing liability risks from AI system deployment, potentially incorporating AI Act compliance verification as an underwriting criterion.

Organizations deploying AI systems should evaluate liability exposure, consider insurance coverage needs, and implement risk management practices that demonstrate appropriate duty of care. Documentation of AI governance frameworks, testing procedures, and compliance measures becomes important evidence for defending against liability claims or rebutting causation presumptions.
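One practical consequence is that liability-relevant evidence is worth keeping in a structured, auditable form rather than scattered across documents. The record layout below is one possible sketch, not a prescribed schema; every field name is an assumption chosen for illustration.

```python
from dataclasses import dataclass
from datetime import date


@dataclass
class ComplianceEvidence:
    """Illustrative record of duty-of-care evidence for one deployed AI system."""
    system_id: str
    risk_classification: str       # e.g. "high-risk" under the AI Act
    governance_framework_ref: str  # pointer to the applicable governance policy
    last_risk_assessment: date
    test_reports: list[str]        # identifiers of testing and evaluation reports
    oversight_measures: list[str]  # documented human oversight arrangements


def evidence_gaps(record: ComplianceEvidence) -> list[str]:
    """Flag missing documentation that would weaken a rebuttal of the causation presumption."""
    gaps = []
    if not record.test_reports:
        gaps.append("no testing evidence on file")
    if not record.oversight_measures:
        gaps.append("no documented human oversight measures")
    return gaps
```

A periodic gap check of this kind keeps the evidentiary record current before a claim arises, when remediation is cheapest.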
