AI, Accountability, and Accidents: Who’s to Blame When Machines Make Mistakes?
This article was inspired by the seminar "Safe and Healthy Work in the Digital Age," held on November 5th at the AX Palace Hotel in Sliema, Malta. With keynote speeches from OHSA CEO Dr. Josianne Cutajar and insightful presentations by Dr. Luke Fiorini of the Centre for Labour Studies at the University of Malta and Mr. Silvio Farrugia, a Senior Manager at the OHSA, the event highlighted how digitalisation and AI are transforming workplace safety and underscored the pressing need for updated legislation to keep pace. This article addresses the accountability challenges introduced by AI-driven systems in Occupational Health and Safety (OHS).
The Foundation: Employer Responsibility in OHS Law
Under the European Framework Directive 89/391/EEC, the employer in every EU member state, including Malta, is ultimately responsible for ensuring the health and safety of employees and anyone affected by their work. Malta’s current OHSA Act 27 of 2000 and the upcoming Health and Safety at Work Act, Cap. 646, further reinforce the employer’s duty to prevent workplace hazards, provide adequate training, and maintain safe working conditions. This responsibility includes ensuring that machinery, including any operated by employees, conforms to safety standards, is CE-marked, undergoes regular preventive maintenance, and receives the necessary inspections and periodic certifications as stipulated by the Machinery Directive and the Work Equipment Directive.
In scenarios where a human operator makes a mistake despite adequate training and maintenance, the law is generally clear: if the employer has fulfilled all OHS obligations, liability may shift to the employee in cases of proven negligence. In my humble opinion, however, the introduction of autonomous AI machinery raises a new challenge.
This brings us to the question: how do we assign accountability when an autonomous AI machine, a forklift truck, for instance, makes a mistake? Even if the employer ensures all protocols are followed, maintains CE-certified equipment, performs preventive maintenance, conducts regular inspections, and secures the necessary certifications, what happens when the AI system makes an independent error? Is this far-fetched? Recent history shows that it is entirely possible. Unlike human operators, machines cannot be held directly accountable. So who should bear the liability in these cases?
Case Studies in AI-Driven Accidents
Since 2015, there have been several notable incidents where AI-operated systems made critical errors, leading to serious accidents. These cases illustrate the complexity of assigning accountability when machines act autonomously.
Current Legal Gaps: A Challenge for OHS Law
EU laws such as the Machinery Directive (2006/42/EC) and the Work Equipment Directive (2009/104/EC) aim to mitigate risks by requiring that machinery is safe, regularly certified, and well-maintained. However, these laws assume human operators, not AI systems. The Artificial Intelligence Act proposed by the European Commission seeks to establish compliance requirements for AI, but this legislation is still in progress and lacks specific provisions for assigning liability for errors made by autonomous machinery.
In Malta, Cap. 646 obligates employers to create a safe work environment, but in my opinion it, like the EU’s Framework Directive, falls short of addressing accountability when autonomous AI machines make unpredictable decisions. Simply asserting that the employer is responsible under the law may not hold in a court of law if the employer can prove that he did everything within his control to avoid the accident. Academic research, such as Pagallo and Durante (2016) in the European Journal of Risk Regulation, highlights the need for shared accountability frameworks that distribute responsibility between employers and AI developers in such cases.
Similarly, Veale and Binns (2017) argue for a “regulatory sandbox” approach, allowing AI to be tested under controlled conditions with strict oversight. This model could help prevent incidents by surfacing flaws before full deployment, but it is not yet widely adopted.
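To make the sandbox idea concrete, here is a minimal sketch in Python of what controlled pre-deployment testing might look like: the autonomous system is run against a fixed set of hazard scenarios and is cleared for live operation only if every safety check passes. The names (AutonomousForklift, SCENARIOS) and the scenarios themselves are hypothetical illustrations, not an actual regulatory test suite.

```python
from dataclasses import dataclass

@dataclass
class Scenario:
    """One controlled sandbox test: an observed state and the
    action the system is required to take in response."""
    description: str
    obstacle_distance_m: float
    required_action: str  # e.g. "stop" or "proceed"

# Hypothetical scenario set; a real sandbox would use a far larger,
# regulator-approved suite.
SCENARIOS = [
    Scenario("pedestrian 1 m ahead", 1.0, "stop"),
    Scenario("pallet 0.5 m ahead", 0.5, "stop"),
    Scenario("clear aisle", 25.0, "proceed"),
]

class AutonomousForklift:
    """Stand-in for the AI controller under test."""
    SAFE_DISTANCE_M = 2.0

    def decide(self, obstacle_distance_m: float) -> str:
        return "stop" if obstacle_distance_m < self.SAFE_DISTANCE_M else "proceed"

def run_sandbox(system: AutonomousForklift) -> bool:
    """Run every scenario; the system passes only if all checks succeed."""
    all_passed = True
    for s in SCENARIOS:
        action = system.decide(s.obstacle_distance_m)
        passed = action == s.required_action
        all_passed = all_passed and passed
        print(f"{s.description}: expected {s.required_action}, "
              f"got {action} -> {'PASS' if passed else 'FAIL'}")
    return all_passed

if __name__ == "__main__":
    if run_sandbox(AutonomousForklift()):
        print("Sandbox passed: eligible for supervised deployment.")
    else:
        print("Sandbox failed: deployment blocked.")
```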
Practical Implications for Employers
In the digital age, employers must take additional steps to fulfil their OHS obligations when implementing autonomous AI systems, including:
- carrying out AI-specific risk assessments before deployment, on top of the established duties of CE marking, preventive maintenance, inspections, and periodic certification;
- designing work systems so that humans retain overall control and can intervene or override;
- ensuring transparency about how the system functions and keeping records of its decisions to support any post-incident investigation;
- training employees who work alongside autonomous machinery on its capabilities and its limits.
The record-keeping point is illustrated in the sketch after this list.
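One possible way to keep a tamper-evident audit trail of an autonomous system's decisions is sketched below in Python, so that after an incident the employer can show what the machine observed and how it acted. The names here (DecisionAuditLog, the forklift observation) are hypothetical illustrations, not a prescribed compliance mechanism.

```python
import hashlib
import json
from datetime import datetime, timezone

class DecisionAuditLog:
    """Append-only audit trail: each entry is chained to the previous
    one by hash, so tampering after an incident is detectable."""

    def __init__(self) -> None:
        self.entries: list[dict] = []
        self._last_hash = "genesis"

    def log_decision(self, inputs: dict, action: str) -> None:
        entry = {
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "inputs": inputs,        # what the system observed
            "action": action,        # what it decided to do
            "prev_hash": self._last_hash,
        }
        self._last_hash = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = self._last_hash
        self.entries.append(entry)

# Hypothetical usage: record one decision by an autonomous forklift.
audit = DecisionAuditLog()
audit.log_decision(
    inputs={"obstacle_distance_m": 1.2, "speed_kmh": 6.0},
    action="stop",  # produced by the AI controller in a real system
)
print(audit.entries[-1])
```

A hash chain is only one design choice among many; what matters legally is that the employer can produce a credible, contemporaneous record of the system's behaviour.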
Legislative Reforms: Essential for the Future of OHS
To address these challenges, it is imperative that OHS law evolves in line with technological advancements. Possible reforms could include:
- shared accountability frameworks that distribute liability between employers, manufacturers, and AI developers, along the lines proposed by Pagallo and Durante (2016);
- regulatory sandboxes that require autonomous systems to be tested under controlled conditions with strict oversight before full deployment;
- explicit liability provisions for errors made by autonomous machinery, whether in the Artificial Intelligence Act or in updates to the Machinery and Work Equipment Directives.
Conclusion: Future-Proofing OHS in the Digital Age
As underscored by Mr. Silvio Farrugia at the seminar, the discussion on the digital age and the integration of AI in the workplace may raise more questions than it answers. While employers retain a significant share of responsibility, AI’s potential unpredictability introduces a level of complexity that traditional OHS laws may not adequately address. Developing a shared accountability model will be essential to maintaining a safe and compliant workplace, even when autonomous technology makes independent decisions.
As AI becomes an integral part of workplaces worldwide, lawmakers, industry leaders, and OHS professionals must collaborate to ensure our legal frameworks keep pace. In doing so, we can help guarantee that workplace safety is as adaptable to machine learning as it is to human knowledge, making safety both human-proof and future-proof.