Love, Cybersecurity & Hacked-Robots - Can Robot Manufacturers Be Held Liable For Murder Perpetrated By Hacked Sexbots?
Industry Insights | February 2021 | The Hong Kong Lawyer
Note to readers: This article was first published in The Hong Kong Lawyer, the Official Magazine of The Law Society of Hong Kong.
“Mark my words - A.I. is far more dangerous than nukes” - Elon Musk
Introduction
Whilst the idea of a human apocalypse brought about by artificial intelligence (“A.I.”) has featured in Hollywood films for decades, cybersecurity experts warn that it is not only military A.I. that poses a threat to humanity: unassuming sex robots can be equally dangerous!
“She understands COVID-19…” - Brick Dollbanger, Robot Beta Tester
The COVID-19 crisis has seen technology leapfrog across every aspect of human life. With physical human interaction turning “deadly”, it is no surprise that the sex robot industry has boomed. Yet, as with all nascent technology, cybersecurity experts warn that these new robots can pose a grave threat to humanity.
Warning!
“Hacking into many modern-day robots, including sexbots, would be a piece of cake compared to more sophisticated gadgets like cellphones and computers…
Hackers can hack into a robot or a robotic device and have full control of the connections, arms, legs and other attached tools like in some cases knives or welding devices…
Once hacked, they could absolutely be used to perform physical actions for an advantageous scenario or to cause damage…” - Dr Nick Patterson
The recent decision in HKSAR v. Mak Wan-ling [2020] HKCFI 3069 highlighted the increasing number of medical manslaughter cases in Hong Kong, raising awareness within the medical community about criminal liability in medical negligence. However, such retrospective self-reflection is, for all intents and purposes, too little too late.
The same can be said of the tech industry. Hopefully, coders will pay closer attention to vulnerabilities in their platforms during the development process, and remain mindful that their inventions can cause real-world harm, rather than reflecting only in hindsight.
Cybersecurity failures resulting in death have already been documented. On September 11, 2020, a patient died after hackers disabled the computer systems at Düsseldorf University Hospital. What began as a routine transfer turned deadly when inter-hospital logistics were crippled by the cybercrime. The attack triggered Germany’s first investigation into manslaughter arising from a cybersecurity incident (as distinct from medical manslaughter).
Manslaughter by Cybersecurity Negligence
As reaffirmed in Mak Wan-ling, the chief elements of manslaughter by gross negligence include:
1. The defendant owed an existing duty of care to the deceased;
2. The defendant negligently breached that duty of care;
3. It was reasonably foreseeable that the breach of that duty gave rise to a serious and obvious risk of death;
4. The breach of that duty caused the death; and
5. The circumstances of the breach were truly exceptionally bad and so reprehensible as to justify the conclusion that it amounted to gross negligence and required criminal sanction.
In the present scenario, physical interaction with a robot may entail certain health risks (e.g. heart attacks, muscle strains, etc.). Any glitch in a robot’s operating system may cause serious harm to an end user.
Furthermore, any operating system can be compromised, and where the program is interactive, the end user’s personal data will be processed. It is therefore crucial for manufacturers to put appropriate safeguards in place.
As such, the risks of using a robotic platform are foreseeable, and any lapse in ensuring its security may amount to negligence. Where the harm is both foreseeable and unmitigated, the manufacturer risks a finding of gross negligence and liability for the consequential harm that a user may suffer.
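By way of illustration only, the sketch below shows one form such safeguards might take in practice: incoming motion commands are authenticated against a device-specific secret and clamped to hard safety limits before any actuation occurs. All names used (e.g. MotionCommand, SAFE_FORCE_LIMIT_N) are hypothetical assumptions and do not reflect any manufacturer’s actual interface.

```python
# Hypothetical sketch: authenticate and bound motion commands before a
# connected robot actuates them. Names and limits are illustrative only.
import hmac
import hashlib
import json
from dataclasses import dataclass
from typing import Optional

SHARED_SECRET = b"device-unique-secret-provisioned-at-manufacture"  # assumption
SAFE_FORCE_LIMIT_N = 5.0   # hypothetical hard ceiling on actuation force (newtons)
SAFE_SPEED_LIMIT = 0.2     # hypothetical hard ceiling on speed (fraction of maximum)

@dataclass
class MotionCommand:
    joint: str
    force_n: float
    speed: float

def verify_signature(payload: bytes, signature_hex: str) -> bool:
    """Reject any command not signed with the device's provisioned secret."""
    expected = hmac.new(SHARED_SECRET, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)

def parse_and_clamp(payload: bytes) -> MotionCommand:
    """Parse an authenticated command and clamp it to hard safety limits."""
    data = json.loads(payload)
    return MotionCommand(
        joint=str(data["joint"]),
        force_n=min(float(data["force_n"]), SAFE_FORCE_LIMIT_N),
        speed=min(float(data["speed"]), SAFE_SPEED_LIMIT),
    )

def handle_incoming(payload: bytes, signature_hex: str) -> Optional[MotionCommand]:
    """Only authenticated, bounded commands ever reach the actuators."""
    if not verify_signature(payload, signature_hex):
        return None  # drop unauthenticated traffic
    return parse_and_clamp(payload)

if __name__ == "__main__":
    body = json.dumps({"joint": "arm", "force_n": 50.0, "speed": 1.0}).encode()
    good_sig = hmac.new(SHARED_SECRET, body, hashlib.sha256).hexdigest()
    print(handle_incoming(body, good_sig))   # accepted, but clamped to safe limits
    print(handle_incoming(body, "bad-sig"))  # rejected: returns None
```

Whether measures of this kind would suffice will always depend on the facts; the point is that authentication and hard physical limits are built in before launch, rather than reflected upon afterwards.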
Pre-Emptive Mitigation
Whilst manslaughter by cybersecurity negligence is a danger for tech developers, developers can pre-emptively protect themselves. The most traditional approach is a risk disclaimer: robot developers can require prospective users to confirm their understanding of the risks associated with the use of the product before it is activated.
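As a minimal sketch only, the example below illustrates one way such an activation gate might be implemented, assuming a versioned disclaimer and a local consent log; all function and field names are hypothetical rather than drawn from any real product.

```python
# Hypothetical sketch: gate device activation behind an explicit, recorded
# acknowledgement of the risk disclaimer. Names are illustrative only.
import json
import time
from pathlib import Path

DISCLAIMER_VERSION = "2021-02"            # assumption: disclaimers are versioned
CONSENT_LOG = Path("consent_log.jsonl")   # hypothetical local consent record

def record_consent(user_id: str, accepted: bool) -> None:
    """Append a timestamped record of the user's response for later evidentiary use."""
    entry = {
        "user": user_id,
        "disclaimer_version": DISCLAIMER_VERSION,
        "accepted": accepted,
        "timestamp": time.time(),
    }
    with CONSENT_LOG.open("a") as fh:
        fh.write(json.dumps(entry) + "\n")

def activate(user_id: str, accepted_current_disclaimer: bool) -> bool:
    """Refuse to activate the device unless the current disclaimer has been accepted."""
    record_consent(user_id, accepted_current_disclaimer)
    return accepted_current_disclaimer

if __name__ == "__main__":
    print(activate("user-001", accepted_current_disclaimer=True))    # device may start
    print(activate("user-002", accepted_current_disclaimer=False))   # activation refused
```

A recorded, versioned acknowledgement also has the incidental benefit of preserving evidence that the warning was in fact given.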
That said, the best protection will still be to ensure delivery of a quality product. In Mak Wan-ling, for example, the patient was duly advised of the risks associated with the procedure. What ultimately doomed the defendant’s practice was that the quality of care delivered was so sub-standard that any reasonable practitioner would have found it unacceptable.
By the same token, developers ought to make certain that their platforms are better protected than devices such as cell phones. Unfortunately, that may not yet be the case for existing sex robot developers.
Conclusion
In an age where A.I. also has the ability to physically manipulate its surroundings, developers should remember:
Perfect your product! The premature launch of a product with a backdoor may attract serious liability. Where a life is involved, great care must be taken.
Ensure that the A.I. will do no harm to other sentient beings. The ability to wield tools means these machines have great power, and with great power comes great responsibility.
Ensure that users are aware of the risks of the product! Back to basics: make sure your product is legally certified before launch. Retrospective reflection will be too late.