Keep Humans At The Center Of AI Decision Making
#LeadershipInTheAgeOfAI
In this era of humans working with machines, being an effective leader with AI takes a range of skills and activities. Throughout this series, I'm providing an incisive roadmap for leadership in the age of AI, and an important part of leading effectively today is making sure your people are at the center of decision-making.
In popular discussions of artificial intelligence, there can be a sense that the machine stands alone, distinct from human intelligence and capable of functioning independently, indefinitely. This notion has led to consternation about the mass elimination of jobs and the unfounded fear that the future of business lies in replacing humans with machines. It is wrongheaded, and in fact, holding this assumption may actually limit the potential value of and trust in AI applications.
The reality is that behind every AI model and use case is a human workforce. Humans do the hard, often-unsung work of creating and assembling the data and enabling technologies, using the model to drive business outcomes, and establishing governance and risk mitigation to support compliance. Put another way, without humans, there can be no AI.
Yet, while the human element is key to unlocking valuable, trustworthy AI, it is not always given the attention and investment it is due. The imperative today is to orient AI programs around humans working with AI, not simply alongside it, because that orientation has a direct impact on AI ethics and business value.
Two areas of AI development and use illustrate the point: how training data is curated and why AI outputs must be validated.
The Risks In Data Annotation
AI models are largely trained on annotated data. Annotating text, images, sentiments and other data at scale is a time-consuming, highly manual effort in which human workers follow instructions from engineers to label data in a particular way, according to the needs of a given model. Matters of trust and ethics grow out of this. Are the human annotators injecting bias into the training set by virtue of their personal biases? For example, if an annotator is color blind and asked to annotate red apples in a set of images, they might label the images incorrectly, leading to a model that is less capable of spotting red apples in the real world.
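To make this concrete, here is a minimal sketch of one way to surface potential annotator bias before training: comparing each annotator's labels against the per-item majority vote. The dataset, labels and review threshold below are illustrative assumptions, not a prescribed workflow.

```python
# Minimal sketch: flag annotators whose labels diverge from the majority vote,
# as a first-pass check for systematic labeling bias before training.
# The data structure, labels and threshold below are illustrative assumptions.
from collections import Counter

# Each item is labeled by several annotators: {item_id: {annotator_id: label}}
annotations = {
    "img_001": {"ann_a": "red_apple", "ann_b": "red_apple", "ann_c": "green_apple"},
    "img_002": {"ann_a": "red_apple", "ann_b": "red_apple", "ann_c": "red_apple"},
    "img_003": {"ann_a": "green_apple", "ann_b": "red_apple", "ann_c": "green_apple"},
}

def agreement_with_majority(annotations):
    """Return each annotator's rate of agreement with the per-item majority label."""
    agree, total = Counter(), Counter()
    for labels in annotations.values():
        majority, _ = Counter(labels.values()).most_common(1)[0]
        for annotator, label in labels.items():
            total[annotator] += 1
            agree[annotator] += int(label == majority)
    return {a: agree[a] / total[a] for a in total}

REVIEW_THRESHOLD = 0.7  # illustrative cutoff for a human review of labeling guidance
for annotator, rate in agreement_with_majority(annotations).items():
    flag = "  <- review labeling guidance" if rate < REVIEW_THRESHOLD else ""
    print(f"{annotator}: {rate:.0%} agreement with majority{flag}")
```

A check like this does not decide who is "right"; it simply tells a human where to look, which is exactly the point.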
Separately, what are the ethical implications for the humans engaged in this work? While red apples are innocuous, some data contains disturbing content. If a model is intended to assess vehicle damage based on accident photos, human annotators might be asked to scrutinize and label images that contain things better left unseen. Here, organizations have an obligation to weigh the benefits of the model against the repercussions for the human workforce. Whether it is red apples or crashed cars, the insight is to keep humans at the center of decision-making and account for risks to the employee, the enterprise, the model and the end user.
The Importance Of Output Validation
With machine learning and other, more traditional types of AI, model management requires ongoing attention to outputs to detect and correct for issues like model drift and brittleness. With the emergence of generative AI, validating outputs becomes even more critical for risk mitigation and governance.
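As an illustration, output monitoring can start with something as simple as comparing the distribution of recent model scores against a reference window. The sketch below uses a two-sample Kolmogorov-Smirnov test; the data and alert threshold are hypothetical, and real drift monitoring would be tailored to the model and its outputs.

```python
# Minimal sketch: compare recent model output scores against a reference window
# to surface potential drift. The data and alert threshold are illustrative.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
reference_scores = rng.normal(loc=0.70, scale=0.05, size=1000)  # e.g., scores at deployment
recent_scores = rng.normal(loc=0.62, scale=0.08, size=1000)     # e.g., scores this week

result = ks_2samp(reference_scores, recent_scores)
ALERT_P_VALUE = 0.01  # illustrative significance threshold for raising an alert

if result.pvalue < ALERT_P_VALUE:
    print(f"Possible drift: KS statistic={result.statistic:.3f}, p={result.pvalue:.2e}")
    print("Route to a human reviewer before acting on model outputs.")
else:
    print("No significant shift detected in this window.")
```

Note that the alert does not retrain or roll back anything on its own; it hands the decision to a person.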
Generative AI, such as large language models (LLMs), has rightly created excitement and urgency around how this new type of AI can be used across myriad applications, both complementing the existing AI ecosystem with upstream deployments and enabling downstream use cases such as natural language chatbots and assistive summaries of documents and datasets. Generative AI creates content that is usually as coherent, and often as accurate, as real-world data. If a prompt for an LLM asks for a review of supply chain constraints over the past month, a model with access to that data could output a tight summary of constraints, suspected causes and remediation steps. That summary provides insight the user relies on to make decisions, such as changing a supplier that regularly encountered fulfillment issues.
But what if the summary is incorrect and the LLM has (without any malicious intent) cited a constraint that does not exist and, even worse, invents a rationalization for why that “hallucination” is valid? The user is left to make decisions based on false information, which has cascading business implications. This exemplifies why output validation is necessary for generative AI deployments.
To be sure, not all inaccuracies bring the same level of risk and consequence. If using generative AI to write a marketing e-mail, the organization might have a higher tolerance for error, as faults or inaccuracies are likely to be fairly easy to identify and the outcomes are lower stakes for the enterprise. When it comes to other applications that concern mission-critical business decisions, however, the tolerance for error is low. This makes a “human in the loop” who validates model outputs more important than ever before. Generative AI hallucination is a technical problem, but it requires a human solution.
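One lightweight way to operationalize that human in the loop is a review gate that routes generated output to a person whenever the stakes are high or the output fails a simple grounding check. The sketch below is illustrative only; the use-case tiers, source check and function names are assumptions rather than a standard implementation.

```python
# Minimal sketch: route generative AI output to a human reviewer based on the
# stakes of the decision and a simple grounding check. Names and tiers are hypothetical.
from dataclasses import dataclass

@dataclass
class GeneratedOutput:
    text: str
    use_case: str            # e.g., "marketing_email", "supply_chain_summary"
    cited_sources: list      # source records the model claims to have drawn on

HIGH_STAKES_USE_CASES = {"supply_chain_summary", "financial_forecast"}  # illustrative tiers

def needs_human_review(output: GeneratedOutput, known_sources: set) -> bool:
    """Flag output for review if the use case is high stakes or any cited source is unknown."""
    if output.use_case in HIGH_STAKES_USE_CASES:
        return True
    return any(src not in known_sources for src in output.cited_sources)

summary = GeneratedOutput(
    text="Supplier X caused three fulfillment delays last month...",
    use_case="supply_chain_summary",
    cited_sources=["shipment_log_2024_05"],
)

if needs_human_review(summary, known_sources={"shipment_log_2024_05"}):
    print("Hold for human validation before acting on this summary.")
else:
    print("Auto-approve low-stakes output.")
```

The design choice is deliberate: the gate errs toward human review for mission-critical decisions and reserves automation for the low-stakes cases where the tolerance for error is higher.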
Deloitte, where I'm the Global Head of the AI Institute, calls this the "Age of With," an era characterized by humans working with machines to accomplish things neither could do independently. The opportunity is limited only by the imagination and the degree to which risks can be mitigated. Recognizing and prioritizing the human element throughout the AI lifecycle can help organizations build AI programs they can trust.