How to Judge a Strategy
Judging a strategy sounds pretty simple: Take the results and subtract the target. If you get a positive number, the results beat the target. If you get a negative number, the results missed the target. And if you get zero, the results matched the target.
But I will argue that’s wrong. That’s how you measure results versus targets, and obviously we care about results, but it is not how to judge a strategy or a strategist.
How not to judge a strategy
We should not judge a strategy by seeing how close it got to our targets.
Here’s why: Of course what we do affects our results, but we do not control our results. We are not the only ones developing strategies and taking action. We face competitors, for example. And not only competitors. Customers, government regulators, suppliers, distributors, the economy, the weather, and politicians affect our outcomes too. No amount of data from the past can eliminate that uncertainty because the future is not limited to what happened in the past.
We estimate outcomes before we choose; that’s how we choose. We choose the strategy that we think will give us the best outcome.
But we cannot know the outcome in advance. Competitors, customer needs, the economy, and more all get a vote.
Playing the odds
Consider the two fictitious strategies below. Strategy 1, shown in the blue columns, has an 82% probability of a not-good outcome and an 18% probability of a good outcome. Strategy 2, the green columns, is the reverse: 18% / 82%.
Strategy 2 is clearly the better choice. It is possible, of course, that it will produce a not-good outcome; the odds of that are 18%. Even so, choosing it was a good decision: it is better to have an 82% likelihood of success than an 18% likelihood.
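To make the odds concrete, here is a minimal sketch in Python of comparing the two fictitious strategies by probability-weighted value. The dollar payoffs are hypothetical, chosen only to show the arithmetic; they are not from the Tournament.

```python
# A minimal sketch of "playing the odds": compare two strategies by the
# probability-weighted value of their outcomes. The payoff figures are
# hypothetical, chosen only to illustrate the arithmetic.

def expected_value(outcomes):
    """Sum of probability * payoff over all possible outcomes."""
    return sum(prob * payoff for prob, payoff in outcomes)

# (probability, payoff) pairs; "not-good" = $0, "good" = $1,000,000 (illustrative)
strategy_1 = [(0.82, 0), (0.18, 1_000_000)]   # 82% not-good, 18% good
strategy_2 = [(0.18, 0), (0.82, 1_000_000)]   # 18% not-good, 82% good

print(expected_value(strategy_1))  # 180000.0
print(expected_value(strategy_2))  # 820000.0
```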
The graph below is from the ongoing Top Pricer Tournament™ that I’ve run for 15 years. More than 2,000 people have entered it (you can too), and seven of my Harvard Business Review digital articles have focused on insights from the Tournament. (Links at the end.)
Source: the Top Pricer Tournament™. Both strategies simulated in the same 2,015,028 futures.
Those two strategies, from actual human beings, have the same 82/18 and 18/82 probabilities we saw, just broken up into more levels of not-good. That chart summarizes each strategy’s performance in over two million possible futures.
And both of those people thought they had chosen the best strategy. Otherwise, they would have chosen something else. And yet, look at the big differences in their results.
So how should we judge a strategy?
What follows is the definition I use in my Cyborg Strategy™ technology, including the Top Pricer Tournament: a good strategy maximizes expected value, with as little risk as possible, and is better than the alternatives.
Expected value: What you want
You want to maximize what’s called expected value.
Say you flip a fair coin 100 times. If you get $1 for each time you get heads, your expected value is $50. You might get more or less, but $50 is what you can reasonably expect. Getting more would mean you had good luck and getting less would mean you had bad luck.
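Here is a rough sketch of that coin-flip arithmetic. The analytic expected value is 100 × 0.5 × $1 = $50; the simulation is only there to show how individual runs scatter around it (luck) while the average converges.

```python
import random

# A rough sketch of the coin-flip example: $1 per head over 100 fair flips.
# The analytic expected value is 100 * 0.5 * $1 = $50; simulation shows how
# individual runs scatter around it (luck), while the average converges.

def winnings(flips=100, payout=1.0):
    return sum(payout for _ in range(flips) if random.random() < 0.5)

runs = [winnings() for _ in range(10_000)]
print(sum(runs) / len(runs))   # close to 50.0
print(min(runs), max(runs))    # individual runs: good luck and bad luck
```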
Expected value can involve a single metric, like profitability or market share, or it can combine several metrics. That’s what economists call “utility”.
Robustness: As little risk as possible
A single coin toss is high risk in the sense that you have no idea how it will turn out, and the only possibilities are the extremes, heads or tails (excluding the extremely unlikely landing on its edge).
The opposite of risky is robust. You can be confident that a robust strategy will perform close to its expected value. It’s like being a customer with a satisfaction guarantee.
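Here is a hedged illustration of the difference: two hypothetical strategies with the same expected value but very different spread across simulated futures. The outcome figures are invented for illustration only.

```python
import statistics

# A hedged illustration of robustness: two hypothetical strategies with the
# same expected value but very different spread across simulated futures.
# The outcome lists are invented for illustration only.

robust_outcomes = [48, 50, 52, 49, 51, 50]   # clustered near the mean
risky_outcomes  = [0, 100, 5, 95, 10, 90]    # same mean, widely scattered

for name, outcomes in [("robust", robust_outcomes), ("risky", risky_outcomes)]:
    print(name,
          "expected value:", statistics.mean(outcomes),
          "spread (std dev):", round(statistics.pstdev(outcomes), 1))
```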
Remember the archery target with all the holes scattered around? That shows a lack of robustness, and the presence of risk, because the results are so scattered. The multiple holes on the target remind us that there are multiple possible outcomes. That, by the way, is why competitive strategy cannot be solved with forecasting. Think of it this way: what our competitors do affects us, and we don’t know what they will do, and even they don’t know what they will do because what they do depends on what we do, and on and on.
Dominance: Better than the alternatives
A strategy is dominant when there is no other strategy known to you that would be better according to the measure(s) of success that you care about. That implies that you need a way to estimate or simulate the outcomes of the strategy options available to you.
“Better” might be squishy. You might prefer dominance in which one strategy always outperforms the alternatives, or dominance in which one strategy usually (but not necessarily always) outperforms them. You might make tradeoffs among measures of success, where an improvement in one metric outweighs a decline in another. And you might make tradeoffs between expected value and risk, as with government-guaranteed bonds versus speculative stocks.
In my research based on the Top Pricer Tournament, I have not yet seen a single strategy that always outperforms others — see the graph of strategy 285 and strategy 1350, above — but some strategies are clearly more dominant (strategy 1350) than others (strategy 285).
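One simple way to check dominance, assuming you can simulate each candidate strategy in the same set of futures (as the Tournament does), is to count how often one strategy's outcome beats another's. The outcome numbers below are made up; in practice they would come from your simulator.

```python
# A sketch of a dominance check, assuming you can simulate each strategy's
# outcome in the same set of futures. The outcome lists here are made up;
# in practice they would come from a strategy simulator.

def dominance(outcomes_a, outcomes_b):
    """Fraction of shared futures in which strategy A beats strategy B."""
    wins = sum(1 for a, b in zip(outcomes_a, outcomes_b) if a > b)
    return wins / len(outcomes_a)

strategy_a = [12, 7, 15, 9, 14]   # hypothetical results in five futures
strategy_b = [10, 8, 11, 9, 6]

share = dominance(strategy_a, strategy_b)
print(f"A beats B in {share:.0%} of futures")
# 100% would be strict dominance; a high share means A usually, not always, wins.
```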
And the best strategy is…
It depends.
I’m sorry. I wish I knew, but I don’t. I would tell you if I knew, after I pocketed a few billion dollars.
It depends on what you want, what your competitors want, what your competitors do, and external events.
Plenty of people will tell you what the best strategy is, and they might be sincere, but they don’t know either. No human does. That’s because, like the archery target with all the holes, there are many possible outcomes and we cannot possibly sort through them in our heads. That’s something I learned from running billions of simulations and from war-gaming scores of real companies around the world.
But let’s state it a different way: there is hope, because we can do a better job of judging strategies for our businesses. It might not be perfect, but it doesn’t have to be perfect to be better than conventional approaches.
Simple and free: Brainstorm futures. A new competitor will disrupt our industry. Our main factory is disabled by a flood. Our top product gets hit with a scandal. Their top executive gets hit with a scandal. Ask yourself three questions about each future: How likely is it? How much impact would it have? How unprepared are we? (One way to score the answers is sketched after this list.)
A little deeper, and still free: Switch from confirming to disconfirming. Ask: what would make our strategy fail? Asking doesn’t mean the strategy will fail. It means you anticipate, so you can prevent mishaps or switch strategies.
Deeper still, now with a price tag: Competitive intelligence. CI specifically looks for what competitors are doing. I recommend that you also look at how competitors make decisions, because that will help you know how they might respond to you.
Even deeper still: Qualitative business war games. “What do you think they will do?” is a relatively wimpy question. Here’s an energizing, thought-provoking question: “If you were them, what would you do?” It can be scary, but it’s good for you, like broccoli.
Best of all: Quantitative business war games and strategy simulation. That’s where you can calculate expected value, robustness, and dominance.
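For the “simple and free” option above, here is one hypothetical way to score brainstormed futures: rate each on likelihood, impact, and unpreparedness from 1 to 5 and rank by the product. The futures and scores are illustrative, and the scoring scheme is a sketch, not a prescribed method.

```python
# A hypothetical sketch of the "brainstorm futures" exercise: score each
# imagined future on likelihood, impact, and unpreparedness (1 = low, 5 = high)
# and rank by the product. The futures and scores are illustrative only.

futures = {
    "New competitor disrupts the industry":    (4, 5, 3),
    "Main factory disabled by a flood":        (2, 5, 4),
    "Top product hit with a scandal":          (2, 4, 5),
    "Their top executive hit with a scandal":  (3, 3, 2),
}

ranked = sorted(futures.items(),
                key=lambda item: item[1][0] * item[1][1] * item[1][2],
                reverse=True)

for name, (likelihood, impact, unprepared) in ranked:
    print(f"{likelihood * impact * unprepared:>3}  {name}")
```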
A strategy is a bet
Your strategy is a bet because you don’t control the future and therefore you don’t know the future. You influence your future, to be sure — for example, you decide which bet to place — but you cannot command the outcome.
All you can do, and you (and I) do this all the time, is place bets.
The trick is to place good bets. The way you do that is by making high-quality decisions. And the way you make high-quality decisions is to think like the casino, not the gambler. The gambler hopes to get lucky. The casino understands the odds.
A good bet is one that’s likely to get you what you want, with as little risk as possible, and is better than the alternatives.
A good strategy raises the odds of success. That’s what makes it a good strategy. And that’s why the best way to judge a strategy is before you commit.
Do you prefer the text + images format or regular text? Please let me know. Thank you!
A spoken version of this article appears on YouTube.
Links to Harvard Business Review digital articles based on the Top Pricer Tournament: