Who needs a data scientist?

as data is NOT really the new oil

Good decisions are!

More than a decade ago, I started my innovation career as a trainee patent searcher at a large American company. My day job involved searching, reading and analyzing dozens or even hundreds of patent documents a day to generate reports and statistical summaries about trends in research, patenting and innovation in certain technologies and/or markets of interest. I sent these reports to IP attorneys and CXO teams, who interpreted the gazillions of cryptic reports and stats through the lens of their individual experiences and functional skills to arrive at business decisions. For the first time, I realized those decisions were heavily laced with individual biases, groupthink, ethical choices and interpersonal conflicts that benefited one or a few in the leadership, at the cost of a better solution that would have served the rest of the organization, who were usually absent from the corporate decision hierarchy! I realized I had entered an unfortunate vicious cycle in which high-quality data was converted into spurious or questionable decisions as the final output of an analytical process. At that very moment I also recalled one of my law school professors, who once opined that politics is the process of making decisions about an interest group while that group remains absent from the decision-making process itself!

Fast forward several years, and I found the tech industry going gaga over big data, analytics, machine learning, AI and the like. By then my hunger to learn about decision theory, behavioral science, game theory and the cognitive side of human intangibles had grown rapidly. After hundreds of business interactions (friendly lunches too) with the C-suites of Fortune 100 companies, VC-backed Silicon Valley startups, Indian small and medium businesses, non-profits, wannabe tech founders and failed entrepreneurs, I was convinced that the myopic glorification of business statisticians armed with open-source software tools must give way to a well-rounded industry awareness of decision science as the superior business tool, rather than further popularizing the misnomer of the "data scientist"!

Applying technology, mathematics and statistics to well-defined business problems is essential to an engineer's or analyst's job, but the challenging part is that business problems change constantly and are often poorly defined or framed. To tackle such problems, expertise in technology, mathematics and statistics is not enough. There is a clear need for professionals with broad and strong business acumen, the ability to communicate effectively with different stakeholders, the thinking skills to frame and simplify poorly defined business problems, and an in-depth understanding of decision-making processes within the organization. This rare breed of professional can be termed a Decision Scientist: someone who helps organizations not only derive and translate meaningful insights but also make effective and profitable decisions through a unique and deterministic decision architecture.

Recently, the Product Strategy Head of a leading analytics firm proclaimed that data science misses half the equation: individuals with tech and math skills have been labeled "Data Scientists" and organizations are competing to hire them, but what organizations actually need are professionals who, in addition to math and technology, bring the right business perspective too!

But that executive, like the rest of the tech industry, is overlooking the importance of incorporating established teachings and tools from the cognitive and behavioral sciences into the organization's business intelligence and decision analytics architecture. Organizational intangibles such as cognitive biases, team motivations, company culture, office politics, ego issues, ethical weaknesses, strategy alignment gaps in business operations, conflicting self-interests of executives and teams, non-value-adding or perfunctory business processes, and non-transparent incentive structures should be built into an enterprise decision network architecture using a set of leading cognitive performance indicators (KPIs). These can be orchestrated using game mechanics, ideally without the overt knowledge of employees and teams, for better adoption and for securing big business wins.

So the decision scientist you badly need, to rescue your company's operations and strategy alike, will ideally not be a bespectacled, nerdy, 9-pointer IITian techie whose creative skills and natural curiosity are not on par with her coding and number-crunching skills. Since there is no formal cross-disciplinary training for engineers keen on an introductory cocktail course in cognitive neuroscience, behavioural economics, industrial psychology and decision theory, self-directed, immersive exposure to real-world decision-making situations can be of great help. Among these, personal and professional scenarios involving the different types of decision-making biases are practically invaluable.

I describe some of them below:

Clustering illusion. Wishful thinking is the most common of human tendencies. In all areas of life and thought, we have the potential to see what we want to see, and miss what's really there. In an analytical result, that's a particularly strong risk, as analytics are all about finding patterns in noisy data. It's the most natural thing in the world to see patterns that tell us what we want to hear. Avoid this mistake by scrupulous scoring of results and maintaining a commitment to accepting the numbers honestly, for the ultimate good of the enterprise.
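To make this concrete, here is a minimal, purely illustrative Python sketch of my own (not drawn from any analysis referenced above): it simulates fair coin flips and counts how often long streaks of heads appear, the kind of "cluster" a hopeful analyst might mistake for signal.

```python
# Purely illustrative toy example: how easily "patterns" emerge from pure noise.
# We simulate 1,000 fair coin flips and count streaks of 7 or more heads.
import random

random.seed(42)
flips = [random.random() < 0.5 for _ in range(1000)]

streaks, run = 0, 0
for heads in flips:
    run = run + 1 if heads else 0
    if run == 7:  # count each streak once, the moment it reaches length 7
        streaks += 1

print(f"Streaks of 7+ heads in 1,000 fair flips: {streaks}")
# Most runs report a few such streaks: the data is random, yet it still
# offers "clusters" to anyone determined to find them.
```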

Selective perception. The sibling of clustering illusion, selective perception is the interpretation of analytical outcomes in ways that confirm what we expected them to say. Expectations in general are precarious in the human experience; the world seldom turns out the way we want it to. But buying into expectations obfuscates the entire purpose of analytics: if we uncritically allow ourselves to simply see what we expect to see, what's the point?

Confirmation bias. This old favorite is familiar to almost everyone who has ever argued politics and/or religion. We tend to accept only new information that supports our old ideas. This is just as likely -- and even more dangerous -- in the realm of analytics, where outcomes can influence decisions at the highest levels. The entire point of analytics as a strategic tool is to push beyond old ideas into more effective choices and policy: why poison the waters?

Survivorship bias. We like good news, we hate bad news. It's a natural tendency to focus on the positive outcomes projected in our analytical results, ignoring the negatives. But this truncates the power of analytics. We have to give both positive and negative outcomes equal weight if we're to be fully informed by the data.
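A quick, made-up numerical example of how this plays out: the figures below are invented purely for illustration, not taken from any real engagement, but they show how quietly dropping the negative outcomes inflates the reported average.

```python
# Invented numbers, for illustration only: hypothetical project returns,
# some positive and some negative.
outcomes = [0.12, -0.08, 0.05, -0.15, 0.20, -0.02, 0.07, -0.11]

all_avg = sum(outcomes) / len(outcomes)
survivors = [x for x in outcomes if x > 0]        # the bad news quietly dropped
survivor_avg = sum(survivors) / len(survivors)

print(f"Average over every outcome:    {all_avg:+.3f}")      # +0.010
print(f"Average over 'survivors' only: {survivor_avg:+.3f}")  # +0.110
# The second figure looks far healthier, but only because the negative
# outcomes were filtered out before anyone weighed them.
```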

Ostrich effect. And when the news is bad, we cover our ears. Often we try to ignore or argue against conclusions that aren't what we want to hear. This is especially problematic in analytics, where the results are generally culled from several different sources and objectively scored; the ideal is to embrace even bad news delivered by analytics, because we can be reasonably certain that addressing that bad news will be effective.

Bandwagon effect. When we find ourselves riding a wave, we tend to go with the wave. If an analytical result seems to put us into a positive industry trend, we may be inclined to buy in all the way, even if the result is weak. It's important in that circumstance to stay off the bandwagon and give the result the weight it deserves, no more, no less.

Outcome bias. A "little brother" of bandwagon effect is the outcome bias: over-trusting a process because it worked well once. The danger here should not be underestimated; analytical processes must be constantly fine-tuned to remain effective. To lock in on an analytical process because it delivered a positive result, forgoing continuous fine-tuning and critical scrutiny of the results, is asking for trouble.

Pro-innovation bias. Cognitive studies have shown that human beings tend to over-emphasize both similarities and differences between things that fall into different categories. We do this all the more when evaluating a new idea. We tend to overvalue a new idea's (unproven) usefulness, and undervalue its (probable) shortcomings. This is a natural consequence of human enthusiasm, and we see it all around us (especially if you live in Silicon Valley). But if analytics are being applied to evaluate some new product or process or service, realism is always the best choice. Money and effort dumped into a mediocre new idea benefits no one.

Information bias. This one is somewhat obscure, but it happens to the best of us. We can become focused on information or trends or details that really have no effect on the outcome we are pursuing. This bias is hard to root out because it's often a good thing to toss unknowns into the input of a data mining operation, and we often can't know what was really important until the results are in and the outcomes are apparent. The second time around, though, it's easier to prune the information that does and doesn't matter -- and emotional attachment to details that don't have a substantive effect is an unnecessary distraction.

Blind-spot bias. Finally there's blind-spot bias: the failure to recognize and allow for our own biases. Of all our tendencies in bias, this one is both the most obvious and the hardest to overcome. For this reason, it's a wise practice to employ analytics scientifically; that is, subject them to peer review. The fix for blind-spot bias is to put more than one set of eyes on every analytical result to ensure that no one person's biases distort an important outcome.

Conclusion

Data, as you can now see, is not really the new oil! Having 50 zettabytes of the world's data at your fingertips by next year is not going to make you more powerful than our low-tech forefathers with pen and paper in hand centuries ago. For centuries, all data subjected to human contact or subjective interpretation has been tainted by a myriad of factors: biases, ethics, statistical "p-hacking" and the like. In the years to come we must usher in a new paradigm of human-machine decision architecture and related practices, treating it as an investment of time, energy and even culture change. It isn't as easy as installing new software, fiddling with the latest Python libraries or using the coolest mobile API around: it can change the enterprise from the inside out, achieving ambitious business results and bringing about real change in both decision policy and decision processes.
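For readers unfamiliar with the "p-hacking" mentioned above, here is a small illustrative sketch (an assumed setup using NumPy and SciPy, not anyone's real data): it correlates forty random, meaningless metrics against an equally random outcome, and a handful still clear the conventional 0.05 significance bar by chance alone.

```python
# Assumed, illustrative setup: forty random "metrics" tested against an
# equally random outcome, to show how multiple testing manufactures "findings".
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
outcome = rng.normal(size=100)          # a target with no real drivers at all
metrics = rng.normal(size=(40, 100))    # 40 candidate "KPIs", also pure noise

false_hits = [i for i, m in enumerate(metrics)
              if pearsonr(m, outcome)[1] < 0.05]

print(f"'Significant' correlations found by chance: {len(false_hits)} of 40")
# Roughly 1 in 20 random metrics clears the 0.05 bar, which is exactly how an
# unreported search over many hypotheses taints otherwise honest data.
```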

If you are curious to know more about decision analytics, please feel free to drop me a line!
