AI · 5 min read · Credibility 85/100

EU AI Liability Directive (AILD) overview and current status

Explains the European Commission’s proposal for an AI Liability Directive, covering its purpose, scope, key provisions (fault‑based liability, evidence disclosure and a rebuttable presumption of causality), criticisms and proposals for improvement, the legislative process and eventual withdrawal in 2025.


Background and Purpose

The European Commission tabled the AI Liability Directive (AILD) on 28 September 2022 to adapt non‑contractual civil liability rules to artificial intelligence. In its 2020 report on the safety and liability implications of AI, the Commission highlighted how existing tort rules struggle with the complexity, autonomy and opacity of AI systems. A 2022 DLA Piper analysis notes that an Ipsos survey found liability rules to be one of the top three barriers to AI adoption for European companies: 43% of companies that had not yet adopted AI identified liability concerns as a primary obstacle. To build trust and encourage investment, the Commission proposed two complementary instruments: a revision of the Product Liability Directive (for strict producer liability) and the AI Liability Directive for fault‑based tort claims. While the AI Act focuses on risk management and prevention, the AILD aims to harmonize national liability regimes and ensure that victims of AI‑caused harm can obtain compensation.

Scope and Approach

The proposal covers non‑contractual, fault‑based civil liability claims for damage caused by an output or failure to produce an output from any AI system. It adopts a two‑step approach: first, Member States would adapt and coordinate their tort law to ease the burden of proof for victims; second, the Commission would evaluate the directive’s effectiveness after five years and consider additional measures such as strict liability regimes or compulsory insurance. By choosing a directive rather than a regulation, the Commission sought to leave flexibility for national legal traditions while requiring minimum harmonization across the EU.

Key Provisions

  • Fault‑based liability and procedural support. Under the AILD, claimants must still prove damage, the liable person’s fault and a causal link. However, the directive aims to help claimants by requiring disclosure of relevant evidence and mandating access to information about high‑risk AI systems. National courts could order providers or users of high‑risk AI systems to disclose evidence when victims have made reasonable efforts to obtain it. Courts would also have the power to preserve evidence and must balance disclosure with trade secret protections. If a defendant refuses to comply, courts may presume non‑compliance with a relevant duty of care.
  • Rebuttable presumption of causality. To ease the claimant’s burden of proving causation, the directive introduces a rebuttable presumption that an AI system’s output (or failure to produce an output) caused the damage. This presumption applies only if the claimant has shown the defendant’s fault (for example, breach of a duty of care under the AI Act), that it is reasonably likely the fault affected the AI system’s output, and that the output caused the damage. The presumption does not reverse the burden of proof outright: the defendant may rebut it, and in the case of high‑risk AI systems it does not apply where the defendant demonstrates that sufficient evidence and expertise are reasonably accessible for the claimant to prove the causal link. Different regimes apply to high‑risk AI systems, low‑risk systems and personal‑use situations.
  • Relationship with high‑risk AI systems. The presumption of causality is limited to cases where a provider or user of a high‑risk AI system fails to comply with obligations under the AI Act. For personal, non‑professional use, the presumption applies only if the user materially interfered with the AI system or controlled its operating conditions.
  • Protection of trade secrets. When courts order disclosure, they must consider the legitimate interests of all parties and may adopt measures—such as limiting access to documents or redacting sensitive parts of rulings—to preserve trade secrets.
  • Evaluation and future strict liability. After transposition, the Commission would assess the directive’s results and could propose stricter liability regimes (such as strict liability for operators of certain AI systems) or mandatory insurance.



Further reading

  1. European Commission: AI Liability Directive proposal
  2. EUR-Lex: Proposal for an AI Liability Directive
  3. European Commission: AI Liability
