🎲 Unpredictable AI: Legal Implications

AI Governance Must-Reads | Edition #156

🙏 A quick request: You’re currently subscribed to this newsletter on LinkedIn, and I'd like to invite you to switch your subscription to my personal domain here (free and paid options available). It takes only a few seconds, and by subscribing directly, you’ll receive new editions earlier, gain access to additional features, and support my work. Once you receive the welcome email, you can unsubscribe from this LinkedIn version. Thank you!


👋 Hi, Luiza Jarovsky here. Welcome to the 156th edition of this newsletter on the latest developments in AI policy, compliance & regulation. It's read by 42,500+ subscribers across 160+ countries. I hope you enjoy reading it as much as I enjoy writing it.

💎 In this week's AI Governance Professional Edition, I’ll present some of the existing characterizations of Artificial General Intelligence (AGI) and examine the potential legal challenges that could arise if AGI is ever successfully achieved. Paid subscribers will receive it on Thursday. Not a paid subscriber yet? Upgrade your subscription to receive two weekly editions (this free newsletter + the exclusive paid edition), gain full access to all previous and upcoming analyses, and stay ahead in the rapidly evolving field of AI governance.

🛣️ Step into 2025 with a new career path! In January, join me for a 3-week intensive AI Governance Training (8 live lessons; 12 hours total), already in its 16th cohort. Join over 1,000 professionals who have benefited from our programs: don't miss it! Students, NGO members, and professionals in career transition can request a discount.


🎲 Unpredictable AI: Legal Implications

Three days ago, Ilya Sutskever, co-founder of OpenAI, stated at a conference that "superintelligent AI" will be unpredictable. What most people haven't realized yet is that this assumption could have major legal consequences. Here are some of them:

➡️ When an AI system is unpredictable by design, essential liability issues emerge. Why?

➡️ Product liability laws often hinge on distinguishing a product's "normal" functioning from its "defective" functioning. Take the new EU Product Liability Directive (PLD), which entered into force last week and applies to AI. Here's what it says in Art. 7(2):

"In assessing the defectiveness of a product, all circumstances shall be taken into account, including:

(a) the presentation and the characteristics of the product, including its labelling, design, technical features, composition and packaging and the instructions for its assembly, installation, use and maintenance;

(b) reasonably foreseeable use of the product;

(c) the effect on the product of any ability to continue to learn or acquire new features after it is placed on the market or put into service;

(d) the reasonably foreseeable effect on the product of other products that can be expected to be used together with the product, including by means of inter-connection;

(e) the moment in time when the product was placed on the market or put into service or, where the manufacturer retains control over the product after that moment, the moment in time when the product left the control of the manufacturer;

(f) relevant product safety requirements, including safety-relevant cybersecurity requirements;

(g) any recall of the product or any other relevant intervention relating to product safety by a competent authority or by an economic operator as referred to in Art. 8;

(h) the specific needs of the group of users for whose use the product is intended;

(i) in the case of a product whose very purpose is to prevent damage, any failure of the product to fulfil that purpose."

➡️ If an AI system is expected to be unpredictable, and that unpredictability leads to harm, it will be difficult for the victim to prove that the harm stemmed from a defect.

➡️ Yes, there is a potential presumption of defectiveness when proving a defect would be excessively difficult (Art. 10). However, a court must declare it, and the AI company retains the right to rebut the presumption.

➡️ We are already witnessing legal challenges arising from unpredictability: plaintiffs suing over AI companions/characters will often have to prove the existence of a defect and sometimes also negligence by the AI company. When unpredictability is the rule for these AI chatbots, liability claims become harder to establish.

➡️ There are, of course, other legal grounds to hold AI companies accountable. However, liability is an important tool, and it would be problematic to have it undermined from the start.


⚖️ Another Lawsuit Against Character AI

Do you think AI chatbots are harmless? Think again. Everyone should read pages 1-2 of the lawsuit against CharacterAI to understand why these systems should be heavily regulated, especially when children are involved.

➡️ A brief comment about liability. In the lawsuit, the Plaintiffs state:

"Plaintiffs bring claims of strict liability based on Defendants’ defective design of the CharacterAI product, which renders CharacterAI not reasonably safe for ordinary consumers, particularly youth. It is manifestly feasible to design generative AI products that substantially decrease both incidence and amount of harm to minors arising from the foreseeable use of such products with negligible, if any, increase in production and distribution cost."

➡️ Liability laws might indeed play a key role in ensuring that these AI systems are designed more safely, especially when they follow a strict liability approach or allow a judge to presume defectiveness when it is too difficult for the victim to prove that the AI system was "defective."

➡️ The new EU Product Liability Directive has recently entered into force in the EU, and we're still waiting for the AI Liability Directive to be approved. From an EU perspective, these two instruments are essential additions to the EU AI Act and will help ensure that AI developers are held accountable when their products cause harm.

👉 If you know anyone who uses these AI systems (or have kids who do), share the lawsuit with them and encourage them to read the main allegations. Many people don't realize these AI systems are not as harmless as they are often marketed to be.


🛣️ Step Into 2025 with a New Career Path

If you are dealing with AI-related challenges at work, don't miss our acclaimed live online AI Governance Training—now in its 16th cohort—and start 2025 ready to excel.

This January, we’re offering a special intensive format for participants in Europe and Asia-Pacific: all 8 lessons (12 hours of live learning) condensed into 3 weeks, helping participants catch up with recent developments and upskill. Another cohort will be available in February for the Americas and Europe.

→ Our unique curriculum, carefully curated over months and constantly updated, focuses on AI governance's legal and ethical topics, helping you elevate your career and stay competitive in this emerging field.

→ Over 1,000 professionals from 50+ countries have benefited from our programs, and alumni consistently praise their experience—check out their testimonials. Students, NGO members, and people in career transition can request a discount.

→ Are you ready? Register now to secure your spot before the cohort fills up.

*If this is not the right time, join our Learning Center to receive AI governance professional resources and updates on training programs and live sessions.


🎬 Watch the Recording: AI in the Workplace

If you missed my conversation with Prof. Ifeoma Ajunwa about AI in the workplace, the recording is here! Why you should watch it:

➡️ Prof. Ajunwa is an award-winning law professor, the author of the groundbreaking book The Quantified Worker, and a global expert on the ethical governance of workplace technologies.

➡️ We spoke about some of the most pressing issues surrounding AI in the workplace, including:

- Worker surveillance, quantification, and exploitation;

- How existing AI tools in the workplace are making things worse;

- Existing policies and laws on AI in the workplace;

- How the EU AI Act approaches the topic;

…and more.

➡️ Why it’s essential:

AI is everywhere, including the workplace, yet workers remain vulnerable, and current laws may not go far enough. While the AI regulation debate has gained traction worldwide, discussions rarely focus specifically on the workplace, where power asymmetry makes it especially difficult to ensure that workers' autonomy, dignity, and well-being are respected.

➡️ Applying for a job? Pay attention:

We spoke about new challenges in the context of applying for a job in the age of AI. For example, when you draft your CV, you might have to consider not only the human recruiter who will potentially read it but also AI systems that might perform the initial screening and disqualify you due to a glitch during the training phase or a discriminatory bias. You should pay attention to the case Prof. Ajunwa shared in this context.

➡️ Are you an employer? Take note:

Rushing to adopt new AI-powered workplace technologies might lead to a lack of trust and lower employee performance. Prof. Ajunwa shared interesting insights in this context, too.

👉 Watch the recording, share it with friends & family, and help raise awareness about AI-powered workplace surveillance.


🏛️ The New Product Liability Directive Is Here

The new EU Product Liability Directive entered into force on 8 December 2024, and it applies to AI systems as well. Here are 10 quick facts that everyone in AI should know:

1️⃣ The Directive applies to products placed on the market after 9 December 2026 (Article 2)

2️⃣ The new directive expressly acknowledges that AI—and the need to compensate victims of AI-related harm—was one of the factors that made it necessary to update the old product liability directive (Recital 3)

3️⃣ AI providers—defined as such according to the EU AI Act—should be treated as manufacturers (Recital 13)

4️⃣ "Where a substantial modification is made (...) due to the continuous learning of an AI system, the substantially modified product should be considered to be made available on the market or put into service at the time that modification is actually made." (Recital 40)

5️⃣ "in a claim concerning an AI system, the claimant should, for the court to decide that excessive difficulties exist, neither be required to explain the AI system’s specific characteristics nor how those characteristics make it harder to establish the causal link [between the damage and the defectiveness](...)" (Recital 48)

6️⃣ Failing to comply with the EU AI Act might create a presumption of defectiveness against the company under the new Product Liability Directive (Article 10)

7️⃣ "People will be able to bring a claim for damages against the manufacturer if the defective product has caused death, personal injury, including medically recognised psychological harm, damage to property or data loss." Source: EU Commission

8️⃣ We still need the EU AI Liability Directive, which has not been approved yet, to cover various other types of AI-related harm

9️⃣ It's a Directive (not a Regulation like the EU AI Act), so EU Member States must enact laws, regulations, and administrative provisions necessary to comply with the new Directive. They must do that by 9 December 2026 (Article 22)

🔟 Learn more about the new EU Product Liability Directive in my article about the topic.


🏛️ The EU Cyber Resilience Act Is Also Here

The EU Cyber Resilience Act has entered into force, and it also applies to AI. Here are 10 highlights that everyone in Information Security and AI should know:

1️⃣ The Cyber Resilience Act builds on the 2020 EU Cybersecurity Strategy and the EU Security Union Strategy. It complements other legislation in the field, such as the NIS2 Directive.

2️⃣ The Cyber Resilience Act is an EU regulation (like the GDPR and the AI Act), and as such, it's directly applicable to all EU Member States.

3️⃣ It's focused on ensuring cybersecurity throughout the product's lifecycle. According to the EU Commission: "The Cyber Resilience Act introduces mandatory cybersecurity requirements for manufacturers and retailers, governing the planning, design, development, and maintenance of such products. These obligations must be met at every stage of the value chain. The act also requires manufacturers to provide care during the lifecycle of their products. Some critical products of particular relevance for cybersecurity will also need to undergo a third-party assessment by an authorised body before they are sold in the EU market."

4️⃣ Similarly to the EU AI Act, the Cyber Resilience Act relies on conformity assessments and the EU declaration of conformity.

5️⃣ Products will bear the CE marking to indicate compliance with the Cyber Resilience Act's requirements.

6️⃣ How do maximum penalties compare to other EU laws, such as the EU AI Act? "Non-compliance with the essential cybersecurity requirements set out in Annex I and the obligations set out in Articles 13 and 14 shall be subject to administrative fines of up to 15 million euros or, if the offender is an undertaking, up to 2,5 % of its total worldwide annual turnover for the preceding financial year, whichever is higher" (Article 64).

7️⃣ One of the interesting intersections with the EU AI Act is covered here: "For the products with digital elements and cybersecurity requirements referred to in paragraph 1 of this Article, the relevant conformity assessment procedure provided for in Article 43 of [the EU AI Act] shall apply. For the purposes of that assessment, notified bodies which are competent to control the conformity of the high-risk AI systems under Regulation [the EU AI Act] shall also be competent to control the conformity of high-risk AI systems which fall within the scope of this Regulation with the requirements set out in Annex I to this Regulation, (...)" (Article 12, second paragraph).

8️⃣ According to Article 71, the Cyber Resilience Act will apply from 11 December 2027 (except for Article 14 and Chapter IV).

9️⃣ The EU Commission is setting up the Cyber Resilience Act Expert Group (CRA Expert Group), which will advise it on the implementation of the Cyber Resilience Act.

🔟 To read my in-depth analyses on AI compliance & regulation topics, check out the AI Governance Professional Edition of this newsletter (paid subscriber-only).
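The "whichever is higher" penalty rule quoted from Article 64 above can be expressed as a simple calculation. The sketch below is my own illustration (the function name is hypothetical), based only on the two figures in the quoted provision: a EUR 15 million fixed cap or 2.5% of total worldwide annual turnover, whichever is higher.

```python
def cra_max_fine(worldwide_annual_turnover_eur: float) -> float:
    """Maximum administrative fine under Article 64 of the Cyber
    Resilience Act for breaches of the essential cybersecurity
    requirements: EUR 15 million or 2.5% of total worldwide annual
    turnover for the preceding financial year, whichever is higher."""
    fixed_cap = 15_000_000.0
    turnover_cap = 0.025 * worldwide_annual_turnover_eur
    return max(fixed_cap, turnover_cap)

# EUR 1 billion turnover: 2.5% = EUR 25 million, above the fixed cap
print(cra_max_fine(1_000_000_000))  # 25000000.0

# EUR 200 million turnover: 2.5% = EUR 5 million, so the fixed
# EUR 15 million cap applies instead
print(cra_max_fine(200_000_000))    # 15000000.0
```

In other words, the turnover-based cap only bites for undertakings with worldwide annual turnover above EUR 600 million; below that threshold, the EUR 15 million figure governs.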


🏭 AI Factories Have Arrived in the EU

╰┈➤ These will be the first 7 AI factories:

🇪🇸 Barcelona, Spain: “BSC AIF” at the Barcelona Supercomputing Centre

🇮🇹 Bologna, Italy: “IT4LIA” at CINECA - Bologna Tecnopolo

🇫🇮 Kajaani, Finland: “LUMI AIF” at CSC

🇱🇺 Bissen, Luxembourg: “Meluxina-AI” at LuxProvide

🇸🇪 Linköping, Sweden: “MIMER” at Linköping University

🇩🇪 Stuttgart, Germany: “HammerHAI” at the University of Stuttgart

🇬🇷 Athens, Greece: “Pharos” at GRNET

╰┈➤ Here's what the EU Commission announced:

"Funded with €1.5 billion of national & EU funding, the AI Factories will significantly boost the AI computing power of Europe’s world-class network of EuroHPC supercomputers and will offer access to the computing power that start-ups, researchers and industry need for training their AI models and systems."

╰┈➤ Why AI factories?

The main goal is to boost the AI ecosystem in the EU, with a focus on startups and SMEs. According to the EU Commission, these are the key strategic sectors:

→ health and life sciences

→ manufacturing

→ climate and environment

→ finance

→ automotive and autonomous systems

→ cybersecurity

→ agri-tech and agrifood

→ education

→ arts and culture

→ green economy

→ space


📚 AI Book Club: What Are You Reading?

📖 More than 1,900 people have already joined our AI Book Club and receive our bi-weekly book recommendations.

📖 The 15th recommended book was “AI Snake Oil: What AI Can Do, What It Can’t, and How to Tell the Difference” by Arvind Narayanan and Sayash Kapoor.

📖 Ready to discover your next favorite read? See our previous reads and join the book club here.


🇧🇷 Principle-Based AI Governance

If you are interested in AI, you can't miss the latest AI Governance Professional Edition, where I discuss interesting aspects of Brazil's proposed AI law, and how its principle-based approach could offer important insights to lawmakers, policymakers, and AI governance professionals worldwide.

👉 Read the preview here. If you're not a paid subscriber, upgrade your subscription to access all previous and future analyses in full.


🔥 Job Opportunities in AI Governance

Below are 10 new AI Governance positions posted in the last few days. This is a competitive field: if it's a relevant opportunity, apply today:

  1. 🇬🇧 JLR: Head of Data and AI Governance and Trust - apply
  2. 🇧🇪 Planet Pharma: R&D Data and AI Governance Lead - apply
  3. 🇺🇸 Stryker: AI Governance Engineer - apply
  4. 🇮🇹 BIP: AI Governance Specialist - apply
  5. 🇺🇸 Nike: Senior Principal, AI Governance - apply
  6. 🇩🇪 Redcare Pharmacy: AI Governance Lead - apply
  7. 🇨🇭 EY: Consultant, Financial Services Risk, AI Governance - apply
  8. 🇨🇦 Dropbox: Senior AI Governance Program Manager - apply
  9. 🇬🇧 Deliveroo: AI Governance Lead - apply
  10. 🇳🇱 Fugro: Data & AI Governance Manager - apply

🔔 More job openings: subscribe to our AI governance and privacy job boards to receive weekly job opportunities. Good luck!


🙏 Thank you for reading!

AI is more than just hype—it must be properly governed. If you found this edition valuable, consider sharing it with friends and colleagues to help spread awareness about AI policy, compliance, and regulation. Thank you!

Have a great day.

Luiza


