Why Do Leaders Make Bad Decisions?
The underlying problem is well known: human judgment is unreliable. People have biases and make systematic errors in reasoning. This affects everyone, but it's especially costly for leaders making strategic decisions. Strategic decisions are high stakes, so the price of a wrong call is higher than usual, and strategic decision-making is also more complex than tactical decision-making, which creates more opportunities for error. For example, bias could lead a hiring manager to rely on a superficial "gut feeling" instead of the nuanced details of an applicant's qualifications, and that single decision can have billions of dollars in consequences, as when a hedge fund makes a bad hire and loses money for its investors.
There's no such thing as truly objective judgment. That is the thesis of a provocative new paper by researchers at Northwestern University and the Federal Reserve Bank of Chicago. Though it hasn't drawn much attention in the business world, it should. It offers a relatively simple answer to a question that has stumped management scholars for decades: Why do leaders make bad decisions? The reason, the authors find, is that biases and errors in reasoning cloud their judgment. Leaders are especially prone to these errors because they are decision-makers who have "greater access to information and resources," says Colin Camerer, a professor of behavioural economics at the California Institute of Technology who has read the study. "When they make mistakes, they can be bigger than when others make mistakes."
People tend to be overconfident in their judgment, but that's not the end of the story. Research shows that people also tend to ignore information that contradicts their beliefs, a tendency known as confirmation bias. It's a natural human trait: our brains are hardwired to recognise patterns, so we notice things that confirm what we already believe to be true while ignoring information that doesn't fit our mental model. Confirmation bias is at play when you meet someone, instantly dislike them, and then find reasons to justify your first impression, or when you hold a negative attitude toward a candidate for office and treat everything they do as proof you were right. These two tendencies, overconfidence and confirmation bias, make it difficult for leaders to objectively assess their own judgment in the face of complex problems. Often, leaders who are sure they are right are wrong, and they don't know it until they make a bad decision.
There are many cognitive biases that can degrade the quality of people's judgment; researchers have catalogued well over a hundred, and no list is exhaustive. Essentially, if you tend to see patterns or relationships where none exist, like predicting the future from the alignment of the stars, a cognitive bias may be at work. These tendencies are hard to identify and name precisely because they're not necessarily bad or "wrong." Confirmation bias, for example, means that people favour information that confirms their beliefs over knowledge that challenges their assumptions. If you're confident in your views and someone contradicts your worldview, it can feel like an attack, so confirmation bias can damage group dynamics, but the same pattern-matching isn't always harmful. Likewise, mental models of how the world works are often helpful when building products, even though those models are sometimes inaccurate.
The Mediating Assessments Protocol, or MAP, is a way to reduce mistakes in strategic decision-making. It was developed by Daniel Kahneman, Dan Lovallo, and Olivier Sibony, researchers who have spent years studying how biases distort high-stakes decisions. MAP is designed to help leaders assess the risks of different courses of action, and it does this using two tools. The first tool is a method for defining a problem statement. A problem statement is what we want to achieve or avoid — in other words, the desired outcome. It's not the same thing as the mission statement on your wall at work (that typically describes why you're doing something), but it's related. An effective mission statement clarifies how an organisation intends to achieve its goals, which can be useful when developing a problem statement. A good problem statement describes both what will happen if we do something (the positive consequences of that option) and what will happen if we don't (the negative consequences). If possible, it also describes the things we want to avoid — what I call "meditating on the negatives." The second tool is a method for assessing risk. Risk assessments are estimates of how likely a given outcome is and how severe the consequences would be if it occurred.
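The elements of a problem statement described above can be sketched as a small data structure. This is a minimal illustration, not part of the protocol itself; all field names and the example values are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class ProblemStatement:
    """Illustrative container for the elements a good problem statement covers."""
    desired_outcome: str          # what we want to achieve or avoid
    if_we_act: list               # positive consequences of taking the option
    if_we_dont: list              # negative consequences of inaction
    must_avoid: list = field(default_factory=list)  # outcomes to rule out entirely

# Hypothetical example for a product-launch decision.
statement = ProblemStatement(
    desired_outcome="Reach 10,000 paying users within a year",
    if_we_act=["first-mover advantage", "early revenue"],
    if_we_dont=["a competitor captures the segment"],
    must_avoid=["burning more than six months of runway"],
)
```

Writing the statement down in this structured form makes it harder to skip the negative consequences, which is exactly the part a biased decision-maker tends to gloss over.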
MAP is a three-step process that helps people make decisions by proposing and evaluating multiple scenarios at once. It's based on the idea that most strategic problems are really several underlying problems: a combination of trade-offs that must be weighed against each other. So instead of focusing on a single scenario, MAP has leaders consider a set of scenarios side by side, building a series of underlying assessments, or mediators, for each one. Because it requires people to work through the logic behind multiple scenarios, it also helps them get past their biases, see things from a different perspective, and accept some level of uncertainty instead of treating decisions as black-and-white choices.
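The core mechanic — rating each scenario on several independent mediating assessments and only combining them at the end — can be sketched in a few lines. This is a toy model under my own assumptions (equal-weight averaging, 1–10 scores, invented option and dimension names), not the published protocol.

```python
from statistics import mean

def map_decision(options, assessments):
    """Score each option on independent mediating assessments, deferring
    the holistic judgment until every dimension has been rated.
    `assessments` maps option -> {dimension: score on a 1-10 scale}."""
    summaries = {}
    for option in options:
        scores = assessments[option]
        # Combine per-dimension ratings only after all are collected,
        # so an early impression of one dimension doesn't colour the rest.
        summaries[option] = mean(scores.values())
    # Pick the option with the best overall profile.
    return max(summaries, key=summaries.get), summaries

# Hypothetical example: two market-entry scenarios rated on three mediators.
options = ["enter_now", "wait_a_year"]
assessments = {
    "enter_now":   {"market_fit": 8, "execution_risk": 4, "cash_runway": 6},
    "wait_a_year": {"market_fit": 6, "execution_risk": 7, "cash_runway": 8},
}
best, summaries = map_decision(options, assessments)  # best -> "wait_a_year"
```

The point of the structure is the deferral: no single score is allowed to become the verdict until every mediator has been assessed for every scenario.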
The protocol is simple. First, you ask the experts to submit their recommendations. Then you ask them to rate the quality of their own recommendations. Finally, you ask them to rate the quality of each other's recommendations. How do people evaluate someone else's recommendation? They look at how similar it is to their own. This is what researchers call "motivated reasoning": using information to justify decisions we have already made. If I admire your courage and decisiveness, I'll be more likely to accept your recommendation, even if it's wrong. So we score recommendation quality as a function of how much it matches our own views, and even when we have no good reason for favouring a particular option, we give it high marks, because our biased minds tell us it's a good decision and we don't want to be wrong.
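The claim that people score a recommendation by its similarity to their own view can be made concrete with a toy model. Here I use Jaccard overlap between the two positions as the "quality" score; the function and the example positions are my own illustrative inventions, not anything from the study.

```python
def biased_score(own_view, recommendation):
    """Toy model of motivated reasoning: rate a recommendation by how much
    it overlaps with our own position (Jaccard similarity), not by merit."""
    own, rec = set(own_view), set(recommendation)
    union = own | rec
    return len(own & rec) / len(union) if union else 1.0

# A recommendation identical to our own view gets a perfect score...
same = biased_score(["cut costs", "hiring freeze"], ["cut costs", "hiring freeze"])  # 1.0
# ...while a partly conflicting one is marked down regardless of its merits.
mixed = biased_score(["cut costs", "hiring freeze"], ["cut costs", "expand sales"])  # 1/3
```

Notice that the recommendation's actual consequences never enter the function — which is precisely the failure mode the paragraph describes.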
The question is: how do you sort through all this information? One answer is to get the perspectives of other people. The danger here is "groupthink," the tendency of a cohesive group to converge on a consensus without examining it critically. The antidote is not relying on consensus or majority opinion but deliberately considering a diverse range of views, so you aren't missing key information or falling prey to shared biases. According to research, one way to improve your decision-making is to involve at least five people in strategic decisions, chosen to represent multiple viewpoints. The more diverse the group, the better its decision-making will be. The more homogeneous the group, the greater the risk that everyone will think and choose in similar ways, missing important information or falling for common mistakes and biases. There's no perfect formula for creating an effective decision-making team, but you can start by choosing people from different disciplines who are good at thinking differently from each other.
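The claim that diverse groups outperform homogeneous ones has a simple statistical core: shared biases don't cancel when you average, while offsetting biases do. A small simulation makes this concrete; the bias values, noise level, and group size of five are assumptions chosen for illustration.

```python
import random

random.seed(0)
TRUE_VALUE = 100.0
TRIALS = 2000

def group_error(biases):
    """Mean absolute error of a five-person group's averaged estimate,
    where each member's guess is TRUE_VALUE + personal bias + noise."""
    total = 0.0
    for _ in range(TRIALS):
        estimates = [TRUE_VALUE + b + random.gauss(0, 10) for b in biases]
        total += abs(sum(estimates) / len(estimates) - TRUE_VALUE)
    return total / TRIALS

# Homogeneous group: everyone shares the same +15 bias, so errors can't cancel.
err_shared = group_error([15, 15, 15, 15, 15])
# Diverse group: biases pull in different directions and largely offset.
err_diverse = group_error([-15, -8, 0, 8, 15])
```

Averaged over many trials, the diverse group's estimate lands far closer to the true value, even though every individual in both groups is equally biased in magnitude.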