Algorithmic Bias: Concealed Threats and Dangers within Risk, Safety and Security Analysis or Assessments

Tony Ridley, MSc CSyP MSyI M.ISRM

Formulas, mathematical calculations and algorithms are increasingly becoming 'the norm' within risk, safety and security assessments or analysis.

Moreover, these 'black box' calculations are becoming even more secretive, with individuals, companies and governments concealing the precise calculations, rigour and formulas behind the seemingly infallible and unassailable 'science' that informs non-human calculations such as AI, machine learning and automated decision making.

However, in reality and practice, all algorithms carry human bias, overt or concealed preferences, or are otherwise elusively judgement laden, which in turn distorts and modifies calculations of risk, safety, threat and security.

To what degree, and in what way, remains the persistent challenge and question for practitioners, risk professionals, auditors and governments.
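
As a minimal, hypothetical sketch of how such judgement enters the maths (the names, inputs and weights below are invented for illustration only), consider a simple weighted risk score: the arithmetic is 'neutral', but the weights are a human choice, and changing them changes the 'risk'.

# Hypothetical sketch: the same facility data, scored under two sets of
# analyst-chosen weights. The formula is neutral arithmetic; the human
# judgement is concealed in the weights.

def risk_score(threat, vulnerability, consequence, weights):
    """Weighted-sum risk score; the weights are a human judgement call."""
    w_t, w_v, w_c = weights
    return w_t * threat + w_v * vulnerability + w_c * consequence

facility = {"threat": 0.3, "vulnerability": 0.8, "consequence": 0.6}

analysts = {
    "Analyst A (threat-led)": (0.6, 0.2, 0.2),
    "Analyst B (consequence-led)": (0.2, 0.2, 0.6),
}

for name, weights in analysts.items():
    print(f"{name}: risk score = {risk_score(**facility, weights=weights):.2f}")

# Analyst A (threat-led): risk score = 0.46
# Analyst B (consequence-led): risk score = 0.58
# Identical data, different 'risk': the bias lives in the weights.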

Many assertions, decisions and regulatory control measures have been imposed in the name of algorithmic purity or assurance.

In sum, a growing number of errors, scams, biases, prejudices, crimes and distortions are concealed within the algorithms informing risk, safety, crime and security assessments or analysis.

Greater transparency, understanding and scrutiny are required, both from those affected and from those constructing such utopian purities, asserted through supposedly 'non-human' mathematical and statistical accuracy.

Ironically, the further humans, science and scrutiny are removed from the calculation of safety, crime, harm, threat and security vulnerabilities informing 'risk', the greater the likelihood of bias and uncertainty.

The Flaw of Averages

"If you want to teach yourself to get a better grasp on #uncertainty and #risk, you have to recognize two very different types of learning: intellectual and experiential. (p.2) The goal of this book is to help you make better judgments involving #uncertainty and #risk, both when you have the leisure to deliberate and, more importantly, when you don’t. (p.4) " - Savage, S. (2009) The Flaw of Averages: Why we underestimate risk in the face of uncertainty, Wiley & Sons,

The Flaw of Averages
The statistician drowned in a river that was 'on average' 1 meter deep
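
The caption can be made concrete with a short simulation (a hypothetical sketch, not an example from Savage's book): a river whose depth is 1 meter 'on average' still has stretches deep enough to drown in, and planning around the average conceals exactly that.

import random

# Hypothetical sketch of the flaw of averages: depth varies uniformly
# between 0.2 m and 1.8 m along the river, so the mean depth is 1.0 m.
random.seed(42)
depths = [random.uniform(0.2, 1.8) for _ in range(10_000)]

mean_depth = sum(depths) / len(depths)
deep_share = sum(d > 1.5 for d in depths) / len(depths)

print(f"Average depth: {mean_depth:.2f} m")
print(f"Stretches deeper than 1.5 m: {deep_share:.1%}")

# Average depth: ~1.00 m, yet roughly 19% of the river exceeds 1.5 m.
# A decision based on the average ignores the outcomes that drown you.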

Numbers and Nerves: Information, Emotion, and Meaning in a World of Data

"#Riskmanagement in the modern world relies on two forms of thinking. #Risk as feelings refers to our instinctive and intuitive reactions to danger. #Risk as analysis brings logic, reason, quantification, and deliberation to bear on hazard management. Compared with analysis, reliance on feelings tends to be a quicker, easier, and more efficient way to navigate in a complex, uncertain, and dangerous world. Hence, it is essential to rational behavior. Yet it sometimes misleads us. In such circumstances we need to ensure that reason and analysis also are employed." - Slovic, P., & Slovic, S. (2015). Numbers and nerves: Information, emotion, and meaning in a world of data. Oregon State University Press.p.27

Numbers and Nerves
Numbers influence numbers... especially if you are on the wrong side of the calculation

What Happened to Frank and Fearless?

"Public servants ought to ensure that whatever facts are presented in the media are accurate, but can generally be expected to remain silent when countervailing facts are omitted. It is improper—a breach of the Code of Conduct—to seek to do otherwise in public, and there is an ‘understandable reluctance of public servants to #risk penalties (including jail) for revealing how advice has been manipulated...there are #risks in the system to APS policy advising and implementation at an operational level. " - - MacDermott, K. (2008). What happened to frank and fearless? The impact of the new public management on the Australian Public Service, The Australian National University E Press, p.37 & 49

Frank and Fearless
Public Service: Speak up, or expose us all to unnecessary risk

What Money Can't Buy: The Moral Limits of Markets

"The era of market triumphalism has come to an end. The financial #crisis did more than cast doubt on the ability of markets to allocate #risk efficiently.... The most fateful change that unfolded during the past three decades was not an increase in greed. It was the expansion of markets and market values, into spheres of life where they don't belong". - Sandel, M. J. (2012). What money can't buy: the moral limits of markets. Macmillan.p.7

Read More...

Economic manipulation of society, morals and 'maths' for the benefit of providers, not communities/people

Human Bias, Errors, Risk & Failures: Algorithms

"The algorithm – if you can call it that – was of such poor quality that the court would eventually rule it unconstitutional. There are two parallel threads of human error here. First, someone wrote this garbage spreadsheet; second, others naïvely trusted it. The ‘algorithm’ was in fact just shoddy human work wrapped up in code."

- Fry, H. (2018). Hello World. Transworld, pp. 17-18. Kindle edition.

Algorithmic Bias and Failures: Risk Management
The warning(s) were clear, loud and repeated often. Ignore at your own peril... and liability.
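
Fry's 'shoddy human work wrapped up in code' is an everyday failure mode, and it is easy to sketch (a hypothetical illustration, not the system from the case she describes): a lookup that silently guesses when given bad input looks authoritative while quietly producing nonsense.

# Hypothetical sketch of 'garbage wrapped up in code': a risk-category
# lookup that silently defaults instead of failing loudly.

SCORE_TABLE = {"low": 1, "medium": 2, "high": 3}

def naive_risk_level(category):
    # Bug pattern: an unknown or mistyped category silently scores 0,
    # and nobody downstream ever sees an error.
    return SCORE_TABLE.get(category, 0)

def safer_risk_level(category):
    # The same lookup, but it normalises input and refuses to guess.
    try:
        return SCORE_TABLE[category.strip().lower()]
    except KeyError:
        raise ValueError(f"Unknown risk category: {category!r}")

print(naive_risk_level("High"))  # 0 -- a capitalisation typo becomes 'no risk'
print(safer_risk_level("High"))  # 3 -- normalised, or it fails loudly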

(Updated: 8 Nov 22)

Algorithmic Liability and Artificial Intelligence

"This paper introduces the growing notion of AI algorithmic risk, explores the drivers and implications of algorithmic liability, and provides practical guidance as to the successful mitigation of such risk to enable the ethical and responsible use of AI."

Link to White Paper

Artificial Intelligence and Algorithmic Liability

AI Risk Management Framework: NIST

"Managing AI risk is not unlike managing risk for other types of technology. Risks to any software or information-based system apply to AI, including concerns related to cybersecurity, privacy, safety, and infrastructure. Like those areas, effects from AI systems can be characterized as long- or short-term, high- or low-probability, systemic or localized, and high- or low-impact. However, AI systems bring a set of risks that require specific consideration and approaches. AI systems can amplify, perpetuate, or exacerbate inequitable outcomes. AI systems may exhibit emergent properties or lead to unintended consequences for individuals and communities. A useful mathematical representation of the data interactions that drive the AI system’s behavior is not fully known, which makes current methods for measuring risks and navigating the risk-benefits tradeoff inadequate. AI risks may arise from the data used to train the AI system, the AI system itself, the use of the AI system, or interaction of people with the AI system."

Read More...
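
One way to make the NIST point about amplified, inequitable outcomes operational is a basic disparate-outcome check (a hypothetical sketch; the AI RMF does not prescribe this exact metric): compare the rate of adverse decisions a system produces across groups.

from collections import defaultdict

# Hypothetical sketch: audit an opaque system's outputs by comparing
# adverse-decision rates across groups (demographic parity difference).
# The (group, flagged) pairs below are fabricated audit data.
decisions = [
    ("group_a", True), ("group_a", False), ("group_a", False), ("group_a", False),
    ("group_b", True), ("group_b", True), ("group_b", True), ("group_b", False),
]

totals, flagged = defaultdict(int), defaultdict(int)
for group, is_flagged in decisions:
    totals[group] += 1
    flagged[group] += is_flagged  # True counts as 1

rates = {g: flagged[g] / totals[g] for g in totals}
for group, rate in sorted(rates.items()):
    print(f"{group}: flagged {rate:.0%} of the time")

gap = max(rates.values()) - min(rates.values())
print(f"Disparate-outcome gap: {gap:.0%}")  # 50% here: a red flag to investigate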

Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias

"Governments are starting to use AI to make decisions in welfare, policing and law enforcement, immigration, and many other areas. Meanwhile, the private sector is already using AI to make decisions about pricing and #risk, to determine what sorts of people make the ‘best’ customers... In fact, the use cases for AI are limited only by our imagination.

However, using AI carries with it the #risk of algorithmic bias. Unless we fully understand and address this risk, the promise of AI will be hollow."

- Edward Santow, Human Rights Commissioner, Australian Human Rights Commission, 2020

Link to Technical Paper

Using artificial intelligence to make decisions: Addressing the problem of algorithmic bias (Technical Paper)
Using Artificial (Human Bias Constructs) to make decisions


(Updated 9 Nov 22)


Tony Ridley, MSc CSyP FSyI SRMCP

Security, Risk & Management Sciences

Ollencio D'Souza

Managing Director at TechnologyCare


What you are seeking to do is be "God-like" - which will never happen. The best bet is to try and correct algorithms when they show biases, and there are tests to ensure this for a particular case - you can define and ensure the algorithm does exactly what it is supposed to do. I do not think there can be a general algorithm with no bias - the algorithm has to be tested for each application.

Yhon. B.

Senior Manager Security


Greetings, Mr. Tony. Just as the machine can never be made in the image and likeness of man, the calculation of risk through the use of algorithms will always be loaded with bias, and will depend on the motivation or intention in the shadow of the data (the black box) that is handled, and on its creative and thinking sources. Regards
