AI, Accountability, and Accidents: Who’s to Blame When Machines Make Mistakes?

This article was inspired by the seminar "Safe and Healthy Work in the Digital Age," held on November 5th at the AX Palace Hotel in Sliema, Malta. With keynote speeches from OHSA CEO Dr. Josianne Cutajar and insightful presentations by Dr. Luke Fiorini of the Centre for Labour Studies at the University of Malta and Mr. Silvio Farrugia, a Senior Manager at the OHSA, the event highlighted how digitalisation and AI are transforming workplace safety and underscored the pressing need for updated legislation to keep pace. This article addresses the accountability challenges introduced by AI-driven systems in Occupational Health & Safety (OHS).

The Foundation: Employer Responsibility in OHS Law

Under the European Framework Directive 89/391/EEC, which EU member states including Malta have transposed into national law, the employer is ultimately responsible for ensuring the health and safety of employees and anyone affected by their work. Malta's current OHSA Act 27 of 2000 and the upcoming Health and Safety at Work Act, Cap. 646, further reinforce the employer's duty to prevent workplace hazards, provide adequate training, and maintain safe working conditions. This responsibility includes ensuring that machinery, including any operated by employees, conforms to safety standards, is CE-marked, undergoes regular preventive maintenance, and receives the necessary inspections and periodic certifications as stipulated by the Machinery Directive and the Work Equipment Directive.

In scenarios where a human operator makes a mistake despite adequate training and maintenance, the law is generally clear: if the employer has fulfilled all OHS obligations, liability may shift to the employee in cases of proven negligence. But, in my humble opinion, the introduction of autonomous AI machinery raises a new challenge.

This brings us to the question: how do we assign accountability when an autonomous AI machine, a forklift truck, for instance, makes a mistake? Even if the employer ensures all protocols are followed, maintains CE-certified equipment, performs preventive maintenance, conducts regular inspections, and secures the necessary certifications, what happens when the AI system makes an independent error? Is this far-fetched? Recent history shows that it is entirely possible. Unlike human operators, machines cannot be held directly accountable. So, who should bear the liability in these cases?

Case Studies in AI-Driven Accidents

Since 2015, there have been several notable incidents where AI-operated systems made critical errors, leading to serious accidents. These cases illustrate the complexity of assigning accountability when machines act autonomously.

  1. Tesla Autopilot Crashes (2021-2022): The National Highway Traffic Safety Administration (NHTSA) reported that Tesla’s Autopilot was involved in 273 crashes between July 2021 and May 2022. In one fatal case in 2022, a Tesla Model S in Full Self-Driving mode struck and killed a motorcyclist in Seattle. Even though the vehicles met certification and safety standards, the system’s misjudgment contributed to the tragic outcome. This raises the question: when autonomous technology makes critical errors, is the manufacturer, user, or another party accountable?
  2. Volkswagen Factory Robot Accident (2015): In a Volkswagen plant in Germany, a certified industrial robot malfunctioned, grabbing a worker and crushing him against a metal plate. Despite meeting all safety requirements, the robot’s programming led to a fatal error, highlighting the risks associated with AI-operated machinery.
  3. Waymo and Cruise Autonomous Vehicle Accidents (2021-2022): Between 2021 and 2022, Waymo and Cruise vehicles were involved in low-speed collisions, often due to the AI’s misinterpretation of human drivers’ behaviour. Although minor, these incidents illustrate the unpredictability of AI-driven machines, even when they operate within expected parameters.
  4. Kawasaki Industrial Robot Incident (2015): In Japan, a maintenance engineer was killed by a robot that mistook him for an object in its operating area. This incident demonstrates how slight programming or sensor errors can result in life-threatening situations.

Current Legal Gaps: A Challenge for OHS Law

EU laws such as the Machinery Directive (2006/42/EC) and the Work Equipment Directive (2009/104/EC) aim to mitigate risks by requiring that machinery is safe, regularly certified, and well-maintained. However, these laws assume human operators, not AI systems. The Artificial Intelligence Act proposed by the European Commission seeks to establish compliance requirements for AI, but this legislation is still in progress and lacks specific provisions for assigning liability in autonomous machinery errors.

In Malta, Cap. 646 obligates employers to create a safe work environment but, in my opinion, like the EU's Framework Directive, it falls short of addressing accountability when autonomous AI machines make unpredictable decisions. Simply asserting that the employer is responsible may not hold in a Court of Law if the employer can prove that everything within his control was done to avoid the accident. Academic research, such as Pagallo and Durante (2016) in the European Journal of Risk Regulation, highlights the need for shared accountability frameworks that distribute responsibility between employers and AI developers in such cases.

Similarly, Veale and Binns (2017) argue in Nature Machine Intelligence for a “regulatory sandbox” approach, allowing AI to be tested under controlled conditions with strict oversight. This model could aid in preventing incidents by highlighting flaws before full deployment, but it is not yet widely adopted.

Practical Implications for Employers

In the digital age, employers must take additional steps to fulfil their OHS obligations when implementing autonomous AI systems:

  1. Continuous Monitoring: Employers should go beyond traditional inspections and apply continuous monitoring for autonomous systems, including software updates, error-checking, and regular recalibrations.
  2. Documentation and Transparency: Keeping thorough records of all AI system updates, maintenance, and any anomalies is essential to demonstrating compliance and enabling accountability in the event of an accident (a minimal record-keeping sketch follows this list).
  3. Implementing Fail-Safes: Employers should consider adding manual override capabilities or human supervision, particularly in high-risk environments where an AI decision error could have severe consequences.
  4. Contracts with AI Developers: Employers can protect themselves by specifying in contracts that liability may be shared with AI suppliers in cases where the machine operates beyond the employer’s control. This approach, however, is more likely to hold in a civil case than in a criminal one, given the current legislation. Criminal liability generally requires proof of negligence or misconduct beyond what can be contractually assigned, and under current OHS law, criminal responsibility for workplace safety typically remains with the employer.
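
To make the monitoring and documentation points concrete, here is a minimal, illustrative Python sketch of an append-only audit trail for autonomous equipment events (software updates, recalibrations, anomalies, manual overrides). The file name, field names, and the log_event helper are hypothetical examples, not taken from any standard, vendor API, or the legislation discussed above.

```python
import json
import time
from pathlib import Path

# Hypothetical append-only audit log for autonomous equipment events.
# All field names below are illustrative; adapt them to the employer's
# own record-keeping scheme and retention policy.
AUDIT_LOG = Path("ai_equipment_audit.jsonl")

def log_event(machine_id: str, event_type: str, software_version: str, details: str) -> dict:
    """Append one timestamped event (update, recalibration, anomaly, override) to the audit trail."""
    record = {
        "timestamp_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        "machine_id": machine_id,
        "event_type": event_type,          # e.g. "software_update", "anomaly", "manual_override"
        "software_version": software_version,
        "details": details,
    }
    with AUDIT_LOG.open("a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")  # one JSON object per line, never overwritten
    return record

if __name__ == "__main__":
    # Example entries an employer might record for an autonomous forklift.
    log_event("FORKLIFT-07", "software_update", "4.2.1", "Vendor patch applied; post-update test run passed")
    log_event("FORKLIFT-07", "anomaly", "4.2.1", "Unexpected stop in aisle 3; sensor recalibration scheduled")
```

Because each event is written as a new line and nothing is overwritten, such a log makes it easier to show, after an incident, exactly what the employer knew about the system and when.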

Legislative Reforms: Essential for the Future of OHS

To address these challenges, it is imperative that OHS law evolves in line with technological advancements. Possible reforms could include:

  1. Joint Liability Models: Legislative bodies could introduce frameworks that assign shared accountability between employers and AI developers, helping clarify responsibility in cases where AI acts autonomously.
  2. AI-Specific OHS Regulations: The EU and member states might consider expanding existing OHS laws to address autonomous systems, setting specific standards for machine behaviour and employer responsibilities in maintaining and monitoring AI safety.
  3. Mandatory AI Transparency Reports: AI manufacturers could be required to produce transparency reports detailing machine decision-making processes, making investigations clearer and ensuring that the AI acted within programmed limits.
  4. Establishing AI Oversight Bodies: Specialised regulatory agencies focused on high-risk AI applications could ensure laws remain relevant, giving legislators a way to rapidly adapt to new technological developments.

Conclusion: Future-Proofing OHS in the Digital Age

As underscored by Mr. Silvio Farrugia at the seminar, the discussion on the digital age and the integration of AI in the workplace may raise more questions than it answers. While employers retain a significant share of responsibility, AI’s potential unpredictability introduces a level of complexity that traditional OHS laws may not adequately address. Developing a shared accountability model will be essential to maintaining a safe and compliant workplace, even when autonomous technology makes independent decisions.

As AI becomes an integral part of workplaces worldwide, lawmakers, industry leaders, and OHS professionals must collaborate to ensure our legal frameworks keep pace. In doing so, we can help guarantee that workplace safety is as adaptable to machine learning as it is to human knowledge, making safety both human-proof and future-proof.
