New Directive on Product Liability Brings AI Under Its Scope: A Game-Changer for Innovation and Accountability

The Product Liability Directive, published in the Official Journal on November 18, 2024, marks a transformative milestone for businesses navigating the complex landscape of Artificial Intelligence (AI). By extending its scope to include software, the Directive enables injured parties to seek compensation for damages caused by AI systems—a crucial step in aligning liability frameworks with technological advancements.

Below is an article, drafted with my DLA Piper teammate Federico Toscani, that digs into the main contents of the Directive and its impact on artificial intelligence. I hope you'll enjoy it!

A Long-Awaited Reform: Addressing AI and Software Gaps

For decades, it was unclear whether the original 1985 Product Liability Directive encompassed intangible products like software. The rapid evolution of AI and its associated risks prompted legal experts to advocate for extending liability coverage to software, given the inequity of distinguishing between tangible and intangible products.

The new Directive resolves this ambiguity, explicitly including AI software within its remit. Effective for products placed on the market after December 9, 2026, it fills a critical gap, ensuring robust protections against damages caused by intangible innovations.

Expanding the Scope of Liability: What Businesses Need to Know

The Directive establishes a framework of joint and several liability across economic operators involved in an AI system's lifecycle. Key liable parties include:

  • Manufacturers of Defective Products: defined as (i) any party that develops, produces, or manufactures a product, including, where the product is composed of multiple components, the party responsible for the final assembly. This is relevant where an AI system is embedded in a physical product, as the final assembler is liable even if it did not directly develop the AI system; (ii) any party that markets a product under its own name or brand, even if it did not manufacture it. In the context of the AI Act, this would be the provider of the AI system; and (iii) any party that develops, produces, or manufactures a product for its own use.
  • Component Manufacturers: Liability applies if a defective component, such as embedded AI software, causes a product malfunction.
  • Modifiers: Entities significantly altering a product may also face liability, aligning with the AI Act's reclassification of such entities as providers.
  • Importers and Representatives: where the manufacturer is established outside the EU, liability extends to the importer or the manufacturer's authorised representative and, where neither is available, to the fulfilment (logistics) service provider.

AI in Focus: Broader Definitions and Responsibilities

Under Article 4, the Directive broadens the definition of "product" to include not only tangible goods but also electricity, digital files, and software—including AI. A notable exception applies to open-source software, provided it remains a non-commercial endeavor.

The Directive also tightens liability in AI-related scenarios: Article 7 sets explicit criteria for assessing product safety, and defectiveness is presumed in cases of:

  1. Non-compliance with EU safety requirements.
  2. Obvious product malfunctions.

New Protections for Psychological and Data-Driven Damages

Article 6 introduces expanded compensation rights, covering:

  • Death or personal injury, including psychological harm potentially caused by harmful chatbot interactions.
  • Damage to property and to data not used for professional purposes, such as the corruption or loss of personal files caused by AI system errors.

Additionally, Article 5 extends the right to claim compensation beyond consumers, allowing any harmed individual to seek redress.

Tackling the "Black Box" Challenge: Proving AI Defectiveness

The Directive addresses the complexity of proving defectiveness and causation in AI systems, often opaque due to their reliance on advanced algorithms. To mitigate this:

  • Disclosure mechanisms compel defendants to share relevant evidence if a claimant presents a plausible case.
  • Refusal to disclose evidence triggers a presumption of defectiveness.

These provisions aim to balance claimant rights with technological realities while stopping short of reversing the burden of proof entirely.

Exemptions and Manufacturer Obligations

The Directive introduces exemptions tailored to the AI landscape, including:

  • Proof that the defect did not exist when the product was placed on the market, except where the defect results from software or updates within the manufacturer's control.
  • Demonstration that the defect is due to compliance with mandatory legal requirements, or that the state of scientific and technical knowledge at the time did not allow the defect to be discovered.

Implications for AI Governance: A Strategic Layer of Compliance

By integrating seamlessly with the AI Act, the Directive creates a cohesive regulatory ecosystem for AI development and deployment. Companies now face heightened accountability, emphasizing the importance of adhering to sectoral standards, including cybersecurity, to minimize liability risks.

Conclusion: Pioneering Liability Standards for an AI-Driven Era

The Directive represents a pivotal step in modernizing liability frameworks to match the capabilities and risks of AI technology. Businesses must prepare for the operational and legal implications, ensuring compliance with both the Directive and the AI Act. Together, these regulations shape a future where innovation thrives alongside accountability, fostering trust in AI-driven solutions. You can read other articles on the main legal challenges of artificial intelligence HERE.


Below are the other articles of the week from my team at #DLAPiper

Artificial Intelligence

First draft of the General-Purpose AI Code of Practice published

A group of independent experts has presented the first draft of the General-Purpose AI Code of Practice. It aims to detail the AI Act rules for providers of general-purpose AI models and general-purpose AI models with systemic risks. Read more

Intellectual Property

New opportunities for designs: EU reform that changes the rules of the game

2025 will mark a turning point in design protection in the EU with the entry into force of the EU Designs Regulation 2024/2822 and the EU Directive 2024/2823 on the legal protection of designs. Companies and designers will benefit from new opportunities, but will have to rethink their IP strategies to maximise the benefits of this reform. Read more

Technology Media and Telecommunication

Infratel publishes report on the progress of the National Ultra-Broadband Plan

In a press release dated 24 October 2024, Infratel announced the publication of a report on the progress of the National Ultra-Broadband Plan, updated to 30 September 2024. Read more

Life Sciences

Combatting medicine counterfeiting: New measures in Italy by 2025

By 9 February 2025, Italy must align national provisions with the Delegated Regulation (EU) 2016/161 (Regulation), which establishes specific rules to combat the counterfeiting of medicines. Until now, Italy has benefited from a derogation, as the Regulation became applicable almost six years ago, on 9 February 2019. Read more
