Managing Ethical Dilemmas: A Human-Centric Approach to AI Adoption
Issue 194, January 9, 2025
It has become clear that the acceleration of AI applications has the potential to upend life as we have known it. It is impossible to deny the power of AI manifestations (platforms, chatbots, interfaces) to improve and influence our personal and professional lives. Many individuals are already diving in deeply, embracing AI for most if not all tasks. Others are holding out after experiencing AI hallucinations and disappointments. And a long list of organizations is infusing its core strategies with AI, hinging future success and growth on its promise.
AI and Transformation
Organizational change, transformation efforts, plans and strategies will be impacted and influenced by AI adoption and its business applications. In our book, The Truth About Transformation, we emphasized that meaningful change and transformation begin and end with the human factor. This principle is especially relevant in AI adoption where ethical decision-making directly impacts organizational success as well as societal well-being (customers, workforce and stakeholders).
Our embrace of technological prowess is always focused on positive outcomes and enhancements. We often leap in without pausing to consider the consequences, particularly the ethical consequences and dilemmas that stem from rapid adoption. We typically do not recognize or consider the biases that may influence us, AI and its outputs. Without recognizing the influence of our biases, we can potentially run up against some real-life ethical dilemmas.
So, how are organizations to manage AI? How do they identify and address the ethical concerns resulting from AI implementation and prevent ethical dilemmas and unintended consequences? Are there tactics and ethical frameworks we can put in place to mitigate the consequences?
Truth and Consequences
Tristan Harris is one of the most passionate voices for the ethical use of AI. His appearance in the film The Social Dilemma put him on the map as an advocate for the humane use of technology. He is the Executive Director of the Center for Humane Technology.
Harris has stated that AI has given us superpowers. “Whatever our power is as a species, AI amplifies it to an exponential degree.” His concern is how to design technology that is humane to us and to the systems we rely on. “If we don’t understand the risks, we won’t get to a positive future.” And these risks are masked by the complexities of the world. We know that AI has invaded social media, enabled cyberattacks, made banking vulnerable through voice cloning, and supplied a wealth of online misinformation. Harris asks whether our ability to respond to and govern AI is on a par with its rapid development. Are we vulnerable to a lack of applied wisdom to manage a system that transcends our “paleolithic brains and medieval institutions,” in the words of biologist E.O. Wilson? We are living in a transitional time when our technology is ahead of the average individual’s capability to understand it. As Ajeya Cotra, an AI safety expert at Open Philanthropy, states, “AI is like 24th century tech crashing down on 20th century governance.”
AI at Work
Social media is how most people interact with AI today. Whether they realize it or not, the apps are designed to feed the need for community, friendship, and connection, with the intent of prolonging individual immersion. Then add to this the magic of GenAI, which seemingly makes us more efficient, allows us to code faster, and solves problems that confound our human minds. Personally and professionally, AI has insinuated itself into our lives.
Our recent newsletter explored the death of brands and the rise of the individual as an influencer. Given this scenario, it’s tempting to rely on AI to work around traditional marketing strategies. And it could work to an organization’s advantage. We can customize communications to the personal level. We can shape individual goals for the workforce. We can create social campaigns that are magnets for attracting new customers. We can ask GPT to write our business plans and stakeholder updates. We can use AI programs to create simulated podcasts. We can transform our marketing pitches into clever audio messaging. We can use AI to seek out ways to cut costs and build a more profitable P&L. HR can use AI to screen job applicants. In some ways, the sky is the limit. Having an ethical North Star and embracing ethical frameworks can ensure we experience all the promises of AI and not its dangers. For AI to be a positive partner, we must face our very humanness and how it plays into deploying AI ethically and without bias.
AI and Human Bias in Decision-Making
We often write about human biases and how they can result in ethical dilemmas. Broadly defined, human biases are cognitive shortcuts shaped by our experiences, culture, and emotions, which can subtly or overtly influence judgments, decisions, reactions, and actions. Biases infuse our thoughts, values and views in how we see and interpret the world around us, consciously, subconsciously and even unconsciously. When these biases are encoded into AI systems, often inadvertently, they can cause ethical challenges for both individuals and organizations. What types of bias can be compromising? There are four behaviors we often see in our work with organizational transformation that have ethical implications for AI.
· Confirmation Bias
Confirmation bias occurs when individuals favor information that conforms to their pre-existing beliefs, dismissing contradictory evidence. If an AI system is trained on biased decision-making patterns, it may base its predictions and outcomes on those biases. There are case studies of hiring managers who unconsciously favor candidates with backgrounds similar to their own. If these human preferences (and unstated biases) are embedded in the training of an AI hiring tool, the system may mirror and amplify this bias.
· Anchoring Bias
By nature, as decision makers we often rely heavily on the first piece of information presented when making decisions. AI systems that are trained on this model risk reinforcing initial, potentially flawed assumptions. For example, anchoring bias might cause both human managers and AI algorithms to disproportionately weigh initial market prices, ignoring evolving competitive landscapes over the long term.
· Bias in Training Data
Human-curated datasets often reflect historical inequities or skewed perspectives, even when analysts are trying to be objective. It is all too common to see or derive meaning in data that isn’t based on fact. Data is generated from fixed inputs, which can be misleading if those inputs are skewed in the first place. AI models trained on these datasets can perpetuate and even exacerbate biases. In law enforcement, for example, an AI-based predictive policing tool that is trained on historical crime data may unfairly target certain communities due to biased reporting or past enforcement practices. A predictive tool like this is limited by being informed only of crimes that are reported. It believes it is making sound and objective decisions and recommendations but is only analyzing part of the picture.
· Technology Bias
Humans have a disproportionate trust in technology, believing it to be objective even when its recommendations or decisions are flawed. This blind trust can amplify existing biases, as users fail to question or validate AI outcomes. We are overly trusting of tech, and AI makes it even more of a challenge to discern what’s true and what’s fake.
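The predictive-policing example above can be sketched as a simple feedback loop. In this minimal, purely hypothetical simulation (all numbers are invented), two districts have the same true crime rate, but one starts with more patrols, so more of its incidents get recorded. A tool that sends patrols wherever recorded crime is highest never corrects the skew, because it only ever sees the crimes someone was positioned to report.

```python
# Hypothetical illustration: biased reporting freezes itself into an
# AI tool's predictions. Both districts have identical TRUE crime.
TRUE_RATE = 100          # actual incidents per period in each district
TOTAL_PATROLS = 40
patrols = {"A": 10, "B": 30}   # historical (skewed) patrol allocation

for period in range(1, 6):
    # Recorded crime is proportional to patrol presence: the dataset
    # contains only the incidents someone was there to report.
    recorded = {d: TRUE_RATE * patrols[d] / TOTAL_PATROLS for d in patrols}
    # The "predictive" step: allocate next period's patrols in
    # proportion to last period's recorded crime.
    total = sum(recorded.values())
    patrols = {d: TOTAL_PATROLS * recorded[d] / total for d in patrols}
    print(f"period {period}: patrols={patrols}")

# Despite equal true crime rates, district B keeps receiving three
# times the patrols -- the historical skew persists in every prediction.
```

The tool isn’t malicious; it is faithfully optimizing against a partial picture, which is exactly the danger the example describes.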
We continue to beat the drum for the power of critical thinking to reveal how human bias and our penchant for trust can result in AI bias and lead to ethical dilemmas. Harris is clear in his warning that we want the promise of AI without the peril. He says, “If we care about the future we create and want, we have to see the risks so we can make the right choices.” We need to be more thoughtful, critical and objective as we travel the road ahead.
Potential Organizational Ethical Issues
At face value, each of the below concerns may seem limited in consequences but separately or aggregated they impact organizational management, compromise team efforts and influence the amount of energy required to change and transform. And above all, they can generate ethical issues.
· Privacy Concerns
AI thrives on data, but the collection, processing, and storage of personal information often come at the expense of privacy. In the latest race for personal engagement, “tech companies are competing to upgrade chatbots like ChatGPT into AI agents to not only offer answers but also to take control of a computer to take action on a person’s behalf. Experts in artificial intelligence and cybersecurity warn the technology will require people to expose much more of their digital lives to corporations, potentially bringing new privacy and security problems,” reports the Washington Post. OpenAI CEO Sam Altman has said, “People will ask an agent to do something for them that would have taken a month, and it will finish in an hour.” Sounding the alarm for guardrails, Dario Amodei, chief executive of Anthropic, warns, “It can say all kinds of things on my behalf, it can take actions, spend money, or it can change the internal state of my computer.” The Post adds, “additional privacy risks come from the way some proposed use cases for agents involve the software ‘seeing’ by taking screenshots from a person’s computer and uploading them to the cloud for analysis.” With such rapid development of AI tools, organizations need to be prepared to navigate the fine line between innovation and overreach, especially when sensitive data is involved.
· Transparency and Accountability
The complexity of AI systems often leads to the “black box” problem, where decisions are made by AI without human oversight and without clear explanations. This lack of transparency can erode trust, making it essential for organizations to prioritize transparency in AI.
· Job Displacement
Automation driven by AI has the potential to significantly displace human workers with programmed technology. Although much of US manufacturing work has already been automated, the white-collar office, hospital, banking, and retail worker is now at risk of being replaced by AI-powered tools. Both workforce skills and the workplace itself are in for a reset.
· Ethical Drift
As organizations scale AI usage, the pressure to maximize efficiency and profitability can lead to ethical compromises. This ethical drift is particularly dangerous in industries where AI influences public opinion or societal norms. Social media is an example of ethical drift that has already had significant side effects.
Ethical Frameworks
To navigate ethical challenges, organizations must adopt structured frameworks that prioritize human-centered principles. Here are key strategies to guide responsible AI integration and ensure a strong ethical framework.
· Ethical Guidelines
Define a comprehensive set of ethical principles that guide every stage of AI development and deployment. These guidelines should reflect internal values as well as align with broader societal norms and global standards. Layer this ethical framework onto existing policies to build comprehensive guidelines.
· Bias Auditing
Regularly audit AI systems for potential biases. Use diverse datasets and involve multidisciplinary teams in the design and testing phases. Consider the examples cited above of the ways our biases manifest.
· Transparency
Ensuring transparency is key to building trust and accountability in AI systems. Design AI applications that provide understandable, accessible insights into how decisions are made. Stakeholders, including AI users, should understand how decisions are formulated, where AI has been used and the information set used to make the decision.
· Privacy by Design
Integrate robust data privacy measures from the outset into AI development. Design customer management systems that adhere to federal and state standards and policies. Strengthen your business with first-party data that you control and can manage within the guidelines.
· Workforce Reskilling
Addressing job displacement requires a forward-looking investment strategy to prepare employees for emerging roles in AI-centric industries. Prioritize continuous education and create pathways for employees to transition into high-demand roles through reskilling training programs and redefining job roles and responsibilities.
· Ethical Culture
Cultivate a culture where ethics are part of daily operations. Encourage employees to voice concerns about AI practices without fear of retribution and allow them the ability to challenge and openly ask questions.
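As one concrete form the bias-auditing strategy above can take, here is a minimal sketch of a disparate-impact check on a hiring tool’s recommendations. The group names and decisions are invented stand-ins for real model outputs, and the four-fifths ratio is a common screening heuristic, not a legal determination.

```python
# Hypothetical bias audit: compare selection rates across applicant
# groups and flag the model if the ratio falls below the common
# "four-fifths" screening threshold. All data here is invented.
from collections import Counter

# (group, recommended?) pairs -- stand-ins for real model outputs.
decisions = [
    ("group_x", True), ("group_x", True), ("group_x", True), ("group_x", False),
    ("group_y", True), ("group_y", False), ("group_y", False), ("group_y", False),
]

totals = Counter(group for group, _ in decisions)
selected = Counter(group for group, ok in decisions if ok)
rates = {group: selected[group] / totals[group] for group in totals}

# Ratio of the lowest selection rate to the highest.
impact_ratio = min(rates.values()) / max(rates.values())
print(f"selection rates: {rates}")
print(f"disparate impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:
    print("flag for review: ratio below the four-fifths threshold")
```

An audit like this is a starting signal, not a verdict: a flagged ratio should trigger the multidisciplinary review described above, not an automatic conclusion about the model.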
A Positive AI Future
AI is a double-edged sword: it promises to help us at every turn and enhance our humanity, but it also presents challenges that can fundamentally redefine our society, our organizations and ourselves. As Harris said, focus on the positives of what AI offers us, and avoid the perils through the systematic design and implementation of AI programs that are ethical and humane.
Get “The Truth about Transformation”
The 2040 construct for change and transformation. What’s the biggest reason organizations fail? They don’t honor, respect, and acknowledge the human factor.
We have compiled a playbook for organizations of all sizes to consider all the elements that comprise change and we have included some provocative case studies that illustrate how transformation can quickly derail.