The dark side of Artificial Intelligence
Many people will look at artificial intelligence (AI) as a way of making efficiencies in what they do, allowing them to use their current or potential skills more effectively. However, there is a much darker side to AI, one that most people won't see, which causes much p[AI]n and creates widespread unf[AI]rness.
The recently published book 'Code Dependent', written by Madhumita Murgia, provides an excellent insight into living in the shadow of AI, and I would strongly recommend it to anyone interested in gaining a broader understanding of the dangers around AI.
Something that struck me early in the book was how AI systems are trained to identify content that users shouldn't be exposed to, for example pictures of dead and injured people, sexual and physical abuse, and so on. Such vetting has to be done by people, but who are they, and how are they helped with the post-traumatic stress disorder (PTSD) that comes with doing this work?
The book describes how people from third world countries are coaxed into taking what they think are long-term, career-enhancing IT jobs, but which in fact just involve 'labelling' extreme content; 'they see the worst so we don't have to'!
'Labellers' are employed under strict contracts and confidentiality agreements that stop them from speaking up about what they are doing or questioning what their employer is doing; in effect they are seen as cheap labour and muzzled by non-disclosure agreements. Anyone raising PTSD issues is dismissed and replaced by one of the many others waiting for their 'dream job'!
Most people will already be aware of 'revenge porn', but AI now allows pictures and videos stored on the internet to be manipulated so that they show innocent people, mainly women, in compromising positions on pornographic websites. The book provides a number of case studies detailing how women have had their images manipulated in this way and the difficulties they encountered in trying to get 'big tech' companies to remove them. The laws around the improper manipulation of images on the internet are very limited, so it is currently very difficult to protect an individual's rights.
Another area of concern around the use of AI is facial recognition, where error rates for people of colour have been much higher than for white people: one study found a 35% error rate for the former compared with less than 1% for the latter. This is a real concern when CCTV images are being widely used for crime prevention. Because AI learns from historic data, there is a real risk of systems being developed with bias and discrimination built into the design.
AI has been adopted to try to predict where policing resources should be allocated, but some of the systems employed have been shown to use racist algorithms: people of colour were assessed as being at high risk of re-offending but in fact were not, whereas white people were assessed as being at low risk but in fact went on to re-offend more often.
AI has also been adopted by some countries to try to predict which girls are likely to become pregnant in five to six years' time, based on their social and parental circumstances; it is clear that those in vulnerable communities (low-income workers, migrants, patients, people of colour, etc.) are seen as convenient test-beds for AI systems!
As we have seen in a number of recent cases, some users have assumed that AI systems are 100% correct and therefore need no human checking of the answers they give. For example, a lawyer in the US cited a number of court cases, produced by ChatGPT, in support of his client's case, and they were found not to exist; the lawyer told the court that he had thought everything the system produced was completely correct and did not need checking by a human!
AI, if developed and used ethically, can be of real benefit to humanity, but as a leading AI ethicist said, "if left unchecked, it [AI] could be unjust and dangerous for social peace and social good".
Hope you enjoy the book!