Ethics and Decision-Making in the Use of Generative AI
Introduction
How we met and began a conversation about how AI is affecting both of our fields:
JR: As an educator and administrator in Higher Education, I constantly look for ways to improve how we do our work. I see Generative AI as an extension of this effort, the latest incarnation of automation to increase our productivity. I am also at the forefront of helping our students become independent learners and thought leaders in their own fields of interest. Here, too, Generative AI could substantially influence how we approach learning. These motivations led to intriguing conversations with Valerie, whose expertise lies in career coaching and workforce development.
Valerie: When we started discussing Generative AI, I thought about how it would affect the workforce. Many positions will be affected negatively, and some positively. Even more compelling is how it will affect majors and research in higher education, which prepares a large swath of people for the workforce. What meaning will work and education have in the future? Cautiously optimistic, I wanted to approach the question from a realistic and ethical perspective.
As we harness AI's potential in our daily lives, it is crucial to acknowledge its power to bridge or widen existing disparities. The careful deployment of AI systems can serve as a tool to mitigate biases and narrow inequality gaps. It is critical to think through an ethical framework for leadership decisions.
With AI's ability to analyze vast amounts of personal data, questions about privacy arise.
We must establish stringent protocols to safeguard individuals' privacy rights, a crucial step in making the people we serve feel secure and valued. Transparency in data collection, processing, and storage is paramount to maintaining trust and protecting our most valuable asset: personal information.
JR: To make their output more useful and customized, we need to feed substantial amounts of personal data to Generative AI solutions. An image generation tool like Midjourney (https://www.mymidjourney.ai/) consumes raw image files to produce the desired effects. We don't know where our files end up. This scenario presents much higher stakes than providing a few text prompts. What if a malicious third party obtains access to my headshot and uses it to produce a fake ID? Would the Generative AI service provider be responsible for the mishap?
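One modest safeguard before uploading a headshot anywhere is to strip the metadata (GPS coordinates, camera and device details) that image files often carry. Below is a minimal sketch using the Pillow library; the file names are placeholders, and note that this removes embedded metadata only; it cannot protect the likeness in the image itself.

```python
from PIL import Image

def strip_metadata(src_path: str, dst_path: str) -> None:
    """Re-save an image with pixel data only, dropping EXIF metadata
    such as GPS coordinates and device information."""
    with Image.open(src_path) as img:
        clean = Image.new(img.mode, img.size)
        clean.putdata(list(img.getdata()))  # copy pixels, not metadata
        clean.save(dst_path)

# Placeholder file names for illustration.
strip_metadata("headshot.jpg", "headshot_clean.jpg")
```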
Valerie: During our discussion, the use of generative AI to create headshots for LinkedIn profiles was a topic of interest. This raised two ethical concerns for me. First, the usage of these images needs to be clearly defined to ensure privacy and fair use of the uploads. Second, many of these images are the work of professional photographers, so it is important to establish how the photographers are credited. And with many customized platforms built on top of large open-source platforms, how does one attribute someone's likeness at all?
The inner workings of AI algorithms can often be complex and opaque, making it difficult to understand how decisions are made.
This lack of transparency can erode trust and hinder accountability. By promoting openness in AI development and deployment, we can create systems that are explainable, interpretable, and accountable.
JR: AI algorithms are inherently opaque. Because of the black-box nature of AI-based decision-making, even their creators often don't know how an AI arrived at an answer. This limitation is troubling in mission-critical scenarios where AI takes over and a wrong decision is a matter of life or death, as in autonomous driving. By now, you must have heard horror stories about cars driving themselves into life-threatening situations like rail crossings. To address this troubling aspect of AI, researchers have begun working on eXplainable AI (XAI), which focuses on making AI algorithms more transparent. As a computer scientist, I know that, in general, it is impossible to prove a software application completely safe, and AI is no exception.
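One common XAI technique is the global surrogate model: train a simple, interpretable model to mimic the black box's predictions, then inspect the surrogate's rules. The sketch below, assuming scikit-learn and synthetic data, illustrates the idea; the surrogate only approximates the black box's behavior, it proves nothing about it.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a real dataset.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)

# An opaque "black box" model.
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Train an interpretable surrogate on the black box's own predictions.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# The printed rules approximate how the black box decides.
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(5)]))
```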
Valerie: A troubling revelation came to light in a recent research study. Recruiting platforms that employed generative AI to identify candidates exhibited a negative bias towards older applicants. The most concerning part was the lack of a clear explanation for this discrimination, which raises serious ethical questions about using such technology. If no one is checking for bias and the pattern of decisions is unknown, how will the platform ever be fixed?
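Checking for this kind of bias does not require exotic tooling. One widely used screen is the "four-fifths rule": flag any group whose selection rate falls below 80% of the highest group's rate. A minimal sketch, on an entirely hypothetical screening log:

```python
from collections import defaultdict

# Hypothetical screening log: (age group, did the AI advance the candidate?).
records = [
    ("under_40", True), ("under_40", True), ("under_40", False), ("under_40", True),
    ("40_plus", False), ("40_plus", True), ("40_plus", False), ("40_plus", False),
]

totals, advanced = defaultdict(int), defaultdict(int)
for group, was_advanced in records:
    totals[group] += 1
    advanced[group] += was_advanced

rates = {group: advanced[group] / totals[group] for group in totals}
best = max(rates.values())
for group, rate in rates.items():
    # Four-fifths rule: a selection rate below 80% of the best group's
    # rate is a common signal of possible disparate impact.
    flag = "possible disparate impact" if rate < 0.8 * best else "ok"
    print(f"{group}: selection rate {rate:.2f} ({flag})")
```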
Preventing Disparities
We need a multi-faceted approach to prevent unchecked AI from perpetuating existing disparities.
It starts with diverse and inclusive teams driving AI development, ensuring a broad range of perspectives are considered. Rigorous testing and ongoing monitoring can help identify and address biases and inequities. Collaboration between policymakers, industry leaders, and advocacy groups is vital to establishing clear regulations and ethical guidelines for AI applications.
JR: Just like any other software system, Generative AI applications require rigorous testing before being released for wider use. However, the software industry tends to launch a solution first and fix problems as it goes. As a software engineering researcher, I have seen this phenomenon occur numerous times in our industry. If software companies cannot police themselves to prevent the disparities manifested in AI, we as users should be more vigilant about the biases emerging in these Generative AI programs. Otherwise, the situation will persist or become even worse.
Valerie: Gender equity jumped immediately to my mind. This bias is systemic, often subtle, and pervasive in much online writing. For example, I tested one coaching platform for bias by asking the same question about conflict management in three ways. First, I asked for advice on conflict management in general; the platform provided the advice and recommended a conflict management course. Second, I identified myself as a woman with a male supervisor needing help with conflict management; the platform advised me to be "strategic" and recommended a course on explaining that I was overwhelmed, not incompetent. Third, I changed only the supervisor to a woman; this time the advice was to be more "professional," with the same course recommendation.
In each example, the wording of the suggestions was subtle. When talking to a woman, I apparently need to be professional; would I not be otherwise? When talking to a man, I need to be strategic; why not professional? The course recommendation was not so subtle: that I was overwhelmed and not incompetent. The platform knew only that I was a woman. I could have been dealing with sexual harassment, a critical negotiation, or a friendship gone awry; there were so many possibilities, yet the platform jumped to incompetence.
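Valerie's experiment is a counterfactual prompt test, and it is easy to automate: send the same request with only the demographic framing changed, then compare the replies. A minimal sketch follows; ask_model is a hypothetical stand-in for whatever generative AI provider you use, and the loaded-term list mirrors the words from her test.

```python
def ask_model(prompt: str) -> str:
    # Placeholder reply so the sketch runs; swap in a real API call here.
    return "You may be overwhelmed; try to be strategic in this conflict."

BASE = "I need advice on managing a conflict with my supervisor."
VARIANTS = {
    "no identity given": BASE,
    "woman, male supervisor": "I am a woman and my supervisor is a man. " + BASE,
    "woman, female supervisor": "I am a woman and my supervisor is a woman. " + BASE,
}

# Words whose presence or absence differed across Valerie's three requests.
LOADED_TERMS = ["strategic", "professional", "overwhelmed", "incompetent"]

for label, prompt in VARIANTS.items():
    reply = ask_model(prompt).lower()
    found = [term for term in LOADED_TERMS if term in reply]
    print(f"{label}: loaded terms in reply -> {found}")
```

Running many such paired prompts, rather than one, is what turns an anecdote into evidence of a pattern.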