PD Fallacy #6: Experimentation has no role in product development
In most of the companies I work with, the decision to develop a new product is based on an assessment of the revenue and margin the product is likely to generate. This assumes a clear specification of the product's intended functionality as the basis for estimating development cost. Typically, product management works hard on building a case for the product, including the differentiating functionality, the position in the market, the intended customer, and so on.
Although this all makes sense in theory, in practice the approach is simplistic and fails to deliver on expectations in most situations I am aware of. One of the main factors, which I have written about in several earlier posts, is the inability to make accurate long-term predictions. Many claim that their organization can, but in my experience this mostly amounts to padding the initial estimate so that it sits at the far end of the bell curve, making the likelihood of staying within it very high. Informally, this is referred to as the rule of Pi: you ask for an honest estimate, multiply it by Pi (3.14) and use that for planning.
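The rule of Pi is simple enough to express in a couple of lines. A minimal sketch (a tongue-in-cheek heuristic as described above, not a recommendation; the function name and the 10-week example are mine):

```python
import math

def padded_estimate(honest_estimate_weeks):
    """Rule of Pi: multiply the honest estimate by pi before planning."""
    return honest_estimate_weeks * math.pi

# A 10-week honest estimate becomes roughly 31.4 planned weeks.
plan = padded_estimate(10)
```

The point, of course, is not the arithmetic but what it reveals: the "accuracy" of such organizations comes from over-allocation, not from prediction.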
Despite everyone's best efforts, many product development efforts run over budget in both time and cost, and that is where the R&D bashing starts. Armed with the perfect knowledge of hindsight, senior leaders outside of R&D openly wonder why the idiots in R&D are late again and why we keep allowing those monkeys to destroy the profitability of the company.
Of course, R&D gets its chance at revenge when the product is finally done and in the market and sales proves unable to generate the revenue that was promised at the start of the development initiative. Sales then complains about missing features and poor product development practices resulting in a sub-par product. This is where the finger pointing starts: people dig their trenches and start lobbing grenades.
To me, the root cause of all this is a fundamental misconception in the heads of most leaders: the assumption that it is actually possible to predict what a product should contain in terms of functionality in order to be successful. And that brings me to the distinction between what is knowable and what is unknowable.
As the name implies, knowable knowledge is that which can be uncovered simply by putting more energy into collecting data and information. When product development fails due to a lack of insight into knowable aspects, the problem is that someone did not do as good a job as they should have.
When it comes to unknowable things, the only answer we have is experimentation: we have to try things out in the hope of learning what we need. The idea that some things cannot be known before the start of product development, and instead require experimentation during development or after we have shipped the product through DevOps-style practices, is alien to most companies I am aware of.
Of course, there are aspects that might be knowable if we invested large amounts of resources, but where doing so is prohibitively expensive. In those cases, experimentation may be a much better way to collect the necessary information than sitting and guessing.
The challenge is that when people are asked to provide tangible answers to unknowable questions concerning the functionality of a product, the most reasonable approach is simply to guess. The problem is that what an individual initially understands to be just a guess rapidly becomes a truth treated as cast in stone. People love certainty and hate uncertainty, and so even the most "stake in the ground" guesses rapidly harden into requirements.
One reason for the reticence toward experimentation is that it introduces uncertainty and risk. If we are unable to determine the exact functionality needed for a new product, we can't do the effort estimation. If we can't accurately predict the required effort, we don't know the expected revenue and margin. All this makes us look like poor leaders, deciding on product development efforts without proper financial justification.
Still, for all the difficulty of dealing with the uncertainty and risk, it doesn’t change anything about the reality of product development. In my view, many of the dysfunctions in product development organizations originate in the inability of leaders to accurately distinguish between knowable and unknowable things. It takes courageous leadership to break out of this conundrum and confront reality as it is, rather than as you wish it was. Even if it is much more comfortable to pretend that you know what you don’t, nothing good ever came from ignoring reality.
The best way to address this situation is to treat product development as an iterative process of risk reduction. Risk can be technological in nature, but companies already know how to deal with technology risk; the concept of technology readiness levels (TRLs) was developed as one model for doing so. Market or customer risk, however, is typically far more significant, and we are much less well equipped to deal with it.
Iterative development breaks a product development effort into a series of decision points where the team is asked to clarify and resolve one or a small set of open questions concerning the product. Each iteration receives a small amount of funding and performs the tasks needed to answer the highest-priority questions; based on the data that comes back, the governance team decides whether to fund the next iteration or to stop development.
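The gated loop described here can be sketched in a few lines of Python. This is an illustrative model only: the function names, the question/priority structure and the budget figure are all assumptions I am introducing, not anything from practice.

```python
# Illustrative sketch of iterative, gated product development funding.
# All names, data structures and thresholds are hypothetical.

def run_iteration(question, budget):
    """Spend a small budget to gather evidence on one open question.
    Stubbed here: a real iteration would prototype, test, and measure."""
    return {"question": question, "supports_continuation": True, "spent": budget}

def gated_development(open_questions, iteration_budget, decide):
    """Fund one small iteration at a time, highest-priority question first;
    stop as soon as the governance gate says the data no longer justifies it."""
    evidence_log = []
    for question in sorted(open_questions, key=lambda q: -q["priority"]):
        evidence = run_iteration(question["text"], iteration_budget)
        evidence_log.append(evidence)
        if not decide(evidence):   # governance decision point after each iteration
            break                  # stop development and cap the loss
    return evidence_log

# Usage: development continues only while the evidence supports it.
questions = [
    {"text": "Will customers pay for feature X?", "priority": 3},
    {"text": "Can the latency target be met?", "priority": 2},
]
log = gated_development(questions, iteration_budget=10_000,
                        decide=lambda e: e["supports_continuation"])
```

The key design point is that the stop decision sits inside the loop: killing the effort after any iteration is a normal, cheap outcome rather than a failure.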
One concern often raised when I suggest this approach is that the effort required to create even a simple prototype for testing with customers is so prohibitively expensive that there is no alternative to the traditional product development model. Although I appreciate that this may occasionally be true, in the majority of cases my experience is that it reflects a lack of creativity. We saw the same in software companies adopting agile practices, where teams initially complained that features could not be broken down to fit in a single sprint.
In conclusion: the use of minimum viable products and A/B experimentation is typically non-existent, even discouraged, in traditional companies. The specification is used as the basis for all development activity and, for a host of reasons, there is no desire to question it. This brings us to the notion of knowable versus unknowable. Some things are simply unknowable until we try them out, and the response of customers to new products or features is one of them.

This requires a different, more iterative product development process where each iteration is funded only once the data from the previous one justifies continuation. It fits hand in glove with digitalization, as product development continues after the product has reached the market through the continuous deployment of new functionality. In that sense, the iterative approach is a constant throughout the entire product lifecycle, and the start of production is just a small blip in the overall process. As Mark Twain said, continuous improvement is better than delayed perfection.
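At its core, the A/B experimentation mentioned here is just a statistical comparison of two variants. A minimal sketch using the standard two-proportion z-test (the conversion counts are made-up example numbers, not real data):

```python
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Two-proportion z-statistic for an A/B experiment:
    does variant B's conversion rate differ from variant A's?"""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled conversion rate
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: 200/5000 conversions for A, 260/5000 for B.
z = two_proportion_z(200, 5000, 260, 5000)
significant = abs(z) > 1.96  # roughly the 95% confidence threshold
```

Even this toy version makes the point: the customer response is measured after shipping two variants, not predicted in a specification.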
Like what you read? Sign up for my newsletter at jan@janbosch.com or follow me on janbosch.com/blog, LinkedIn (linkedin.com/in/janbosch), Medium or Twitter (@JanBosch).
FORMATION CPTO | Experienced tech entrepreneur, (interim) CTO/CPO, strategy, and product consultant, hands-on troubleshooter and mentor.
Good points. I'm partially responsible for product development these days, in addition to being CTO. Our product roadmap is iterative: we revise it every quarter and it guides our development iterations. The tricky bit with product development is adjusting to both technical realities (what is possible/easy vs. impossible/hard) and market realities (what is actually worth pursuing short term and longer term). Mostly, companies get stuck in one of those camps, with either a solution looking for a problem or a perceived problem with some suboptimal solution. So either you have business people ignoring technical realities or technical people ignoring business realities. A successful product strategy needs to be rooted in both realities. You need to build things that solve a real problem with a real market in a way that is feasible and timely. A/B testing and experimenting are nice tools, but they are also very hard to do properly, especially in young/immature organizations. It usually involves having a data science team with some Ph.D. propeller heads and statisticians running the show. Not something most companies have. Observable software is much easier to implement: observe, iterate, refine.
CEO and founder of Stickybit. Love business, tech, mentorship and consulting. Bit of an entrepreneur. Generally happy with my choices in life.
Jan Bosch, I partly disagree. I mean… you're right, but loads if not most of the pathfinding happens as skunkworks, handled by the tech teams in collaboration with some visionary with enough influence and/or recklessness to disregard whatever micro-management dictates. In some sense, I think this can be a good thing. A measure of repression makes you think harder when going out on a limb. At what point do I need to kill this? How do I turn this into a delivery beyond dispute? Am I keeping up with my core responsibilities? Can I cover for my team and manage potential risk? They will like us when we win. Make sure to win often.
Director of Convergent Systems Engineering
So very well said, Jan. Moving from the complicated to the complex really brings out the unknown. Not unexpectedly, experimentation is not currently one of INCOSE's systems engineering competencies. In our education program at UCSD we are using variants of Barry Boehm's Incremental Commitment Spiral Model as a framework to guide the experimentation process in the fuzzy front end, both to gather data from what is known by others and to create new knowledge where it does not yet exist.
Thanks for the article, I like the analysis, Jan. To the insistent (foolish) ambition of knowing the unknowable, I would add the calling-in of experts, aka "industry veterans", to mitigate the uncertainty and risks of a new product development… a classic, ego-driven management pitfall… instead of embracing a more pragmatic, lean-based approach (like the one you've mentioned), anchored in experimentation and learning, to cope with the uncertainty inherent in each new product launch. Your reflection (and the thread comments) bring a deja-vu feeling, reinforcing that, despite our diverse professional backgrounds, change is hard and always underestimated. PS. Congrats on your Kilimanjaro climb 💪
Engineering Manager at Manta
Really like your fallacy series 😍🤩😍 It is spot on!