EU AI Liability Directive (AILD) overview and current status

Explains the European Commission’s proposal for an AI Liability Directive, covering its purpose, scope, key provisions (fault‑based liability, evidence disclosure and a rebuttable presumption of causality), criticisms and proposals for improvement, the legislative process and eventual withdrawal in 2025.

Background and purpose

The European Commission tabled the AI Liability Directive (AILD) on 28 September 2022 to adapt non‑contractual civil liability rules to artificial intelligence. In its 2020 report on the safety and liability implications of AI, the Commission highlighted how existing tort rules struggle with the complexity, autonomy and opacity of AI systems. A 2022 DLA Piper analysis notes that an Ipsos survey found liability rules to be one of the top three barriers to AI adoption for European companies; 43% of companies that had not yet adopted AI identified liability concerns as a primary obstacle. To build trust and encourage investment, the Commission proposed two complementary instruments: a revision of the Product Liability Directive (for strict producer liability) and the AI Liability Directive for fault‑based tort claims. While the AI Act focuses on risk management and prevention, the AILD aims to harmonise national liability regimes and ensure that victims of AI‑caused harm can obtain compensation.

Scope and approach

The proposal covers non‑contractual, fault‑based civil liability claims for damage caused by an output, or a failure to produce an output, from any AI system. It adopts a two‑step approach: first, Member States would adapt and coordinate their tort law to ease the burden of proof for victims; second, the Commission would evaluate the directive’s effectiveness after five years and consider additional measures such as strict liability regimes or compulsory insurance. By choosing a directive rather than a regulation, the Commission sought to leave flexibility for national legal traditions while requiring minimum harmonisation across the EU.

Key provisions

  • Fault‑based liability and procedural support. Under the AILD, claimants must still prove damage, the liable person’s fault and a causal link. However, the directive aims to help claimants by requiring disclosure of relevant evidence and mandating access to information about high‑risk AI systems. National courts could order providers or users of high‑risk AI systems to disclose evidence when victims have made reasonable efforts to obtain it. Courts would also have the power to preserve evidence and must balance disclosure with trade‑secret protections. If a defendant refuses to comply with a disclosure order, courts may presume non‑compliance with a relevant duty of care.
  • Rebuttable presumption of causality. To ease the claimant’s burden of proving causation, the directive introduces a rebuttable presumption of a causal link between the defendant’s fault and the AI system’s output (or failure to produce an output) that gave rise to the damage. This presumption applies only if the claimant has shown the defendant’s fault (e.g., breach of a duty of care under the AI Act), that it is reasonably likely the fault affected the AI system’s output, and that the output (or its absence) caused the damage. The presumption does not reverse the burden of proof outright; the defendant may rebut it, for instance by demonstrating that sufficient evidence and expertise were reasonably accessible for the claimant to prove the causal link. Different regimes apply to high‑risk AI systems, low‑risk systems and personal‑use situations (a simplified decision sketch follows this list).
  • Relationship with high‑risk AI systems. For high‑risk AI systems, the presumption of causality is tied to a provider’s or user’s failure to comply with obligations under the AI Act. For personal, non‑professional use, the presumption applies only if the user materially interfered with the AI system or controlled its operating conditions.
  • Protection of trade secrets. When courts order disclosure, they must consider the legitimate interests of all parties and may adopt measures, such as limiting access to documents or redacting sensitive parts of rulings, to preserve trade secrets.
  • Evaluation and future strict liability. After transposition, the Commission would assess the directive’s results and could propose stricter liability regimes (such as strict liability for operators of certain AI systems) or mandatory insurance.
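
The conditions above combine in a specific way: the core elements are cumulative, personal‑use cases add an extra gate, and the defendant retains a rebuttal ground. Purely as an illustration (and not as part of the directive's text), the Python sketch below encodes that decision logic; the class, its field names and the simplified rebuttal condition are assumptions made for this example, and a court would weigh evidence rather than evaluate booleans.

```python
from dataclasses import dataclass


@dataclass
class Claim:
    """Hypothetical summary of the facts a national court would weigh (illustration only)."""
    fault_shown: bool                     # claimant proved breach of a relevant duty of care (e.g. an AI Act obligation)
    fault_likely_affected_output: bool    # reasonably likely that the fault influenced the output (or missing output)
    output_caused_damage: bool            # claimant showed the output (or its absence) gave rise to the damage
    personal_non_professional_use: bool   # system was used in a personal, non-professional capacity
    user_materially_interfered: bool      # for personal use: user interfered with the system or set its operating conditions
    evidence_reasonably_accessible: bool  # defendant shows sufficient evidence/expertise was reasonably accessible to the claimant


def presumption_of_causality_applies(claim: Claim) -> bool:
    """Simplified sketch of how the cumulative conditions described above combine."""
    # Core cumulative conditions: fault, a plausible link between the fault
    # and the output, and damage flowing from that output.
    if not (claim.fault_shown
            and claim.fault_likely_affected_output
            and claim.output_caused_damage):
        return False
    # Personal, non-professional use: the presumption applies only if the user
    # materially interfered with the system or controlled its operating conditions.
    if claim.personal_non_professional_use and not claim.user_materially_interfered:
        return False
    # Rebuttal ground: the defendant defeats the presumption by showing that
    # sufficient evidence and expertise were reasonably accessible to the claimant.
    if claim.evidence_reasonably_accessible:
        return False
    return True
```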

Critiques and proposals for improvement

The Future of Life Institute (FLI) welcomed the AILD but argued that it does not go far enough to address the “black‑box” nature of AI. FLI notes that the directive establishes a fault‑based framework for all AI systems and introduces procedural mechanisms but still requires victims to navigate complex and opaque AI models. It warns that claimants may struggle to gather explainable evidence, particularly for advanced general‑purpose AI systems (GPAIS) such as foundation models. To address these shortcomings, FLI recommends: (1) adopting a strict liability regime for general‑purpose AI systems to account for informational asymmetries and non‑reciprocal risks; (2) expanding the directive’s scope to include GPAIS explicitly in the definitions and subjecting them to strict liability; (3) introducing joint liability between upstream developers and downstream deployers and requiring model cards to ensure transparency; and (4) recognising systemic and immaterial harms that may not be captured by fault‑based claims.
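
FLI’s third recommendation is, at its core, a documentation requirement: upstream developers would hand downstream deployers (and, ultimately, courts) a structured record of how a general‑purpose model was built and evaluated. As a rough sketch of what such a record might contain, the hypothetical model‑card structure below is illustrative only; the field names and every value are assumptions for this example, not a format prescribed by FLI, the AILD or the AI Act.

```python
from dataclasses import dataclass, field


@dataclass
class ModelCard:
    """Hypothetical minimal model card for a general-purpose AI system (illustration only)."""
    model_name: str
    developer: str
    intended_uses: list[str]
    out_of_scope_uses: list[str]
    training_data_summary: str
    known_limitations: list[str]
    evaluation_results: dict[str, float] = field(default_factory=dict)
    risk_mitigations: list[str] = field(default_factory=list)


# Illustrative instance; every value below is a placeholder, not real data.
card = ModelCard(
    model_name="example-foundation-model",
    developer="Upstream Labs (hypothetical)",
    intended_uses=["document summarisation", "drafting assistance"],
    out_of_scope_uses=["medical diagnosis", "credit scoring"],
    training_data_summary="Public web text and licensed corpora (illustrative).",
    known_limitations=["may produce inaccurate output", "limited coverage of low-resource languages"],
    evaluation_results={"factual_accuracy_benchmark": 0.81},
    risk_mitigations=["content filtering", "usage policy enforcement"],
)
```

A record of this kind would also support the joint upstream/downstream accountability FLI describes, since each party’s contribution and stated limitations would be documented.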

Legislative process and status

After the proposal’s publication in 2022, debates continued in the European Parliament. Discussions were suspended until the AI Act was adopted, and the Commission sent an updated version aligned with the AI Act in July 2024. A study by the European Parliamentary Research Service in September 2024 recommended extending the directive’s scope to general‑purpose and other high‑impact AI systems and even transforming it into a software liability regulation. The Committee on Internal Market and Consumer Protection (IMCO) considered the directive premature and unnecessary and urged the responsible committee to reject it. In its 2025 work programme, the Commission announced plans to withdraw the proposal; the AI Liability Directive was officially withdrawn in October 2025. The withdrawal underscores ongoing debate about whether separate AI‑specific liability rules are needed or whether future legislation should focus on broader software liability frameworks.

Significance and outlook

The AI Liability Directive sought to harmonise EU tort law, close gaps created by autonomous and opaque AI systems, and foster trust in AI technologies. Its key innovations (evidence disclosure obligations and a rebuttable presumption of causality) would have made it easier for victims to bring claims. However, critics argue that a fault‑based framework remains insufficient for highly complex AI and that only strict liability can properly allocate risks. The directive’s withdrawal indicates that EU policymakers may pursue alternative approaches, such as a general software liability regulation or updates to the Product Liability Directive. Organisations developing or deploying AI should continue to monitor EU legislative developments, maintain robust documentation of their AI systems, and ensure compliance with the AI Act’s risk‑management obligations. Even without the AILD, courts and regulators are likely to expect transparent evidence trails, proactive risk mitigation and accountability mechanisms from AI providers and users.
