Core Capabilities for Leveraging Data to Support Any Strategy

Competitive data capabilities are essential to staying ahead of the market, and they become more so with each business cycle. This article discusses the core, foundational data management capabilities needed to deliver products and services that leverage data as a competitive asset.

An organization leveraging data as a competitive differentiator needs to excel in four capabilities, defined as follows:

Depth: Analyzing data in ways that yield innovative insights.

Breadth: Combining disparate data in novel ways.

Speed: Wrangling and analyzing data quickly.

Deployment: Acting on the data once it is analyzed.

Organizations face significant data-related challenges when these capabilities are not fully in place: they often have great ideas they could act on, but only some, not all, of these conditions exist. For example, a retailer with excellent customer data from online purchases is unlikely to know who a particular customer is at a physical location until checkout. This is a “Deployment” challenge. Retail media networks represent an entire ecosystem that is turning this challenge into an opportunity.

How does an organization make sure it is best equipped to respond to these challenges on its own, and to solve them on the profitable part of the opportunity curve? There is a data “ground game” that needs to be in order, and it involves two foundational aspects: Governance and Operations.

Governance

A mature governance program is needed as the political mechanism by which all other issues related to data are resolved. I spent 10+ years of my career in media, so an analogy that resonates with me is “data as content.” In the years of static reporting, data was like linear broadcasting, where content was tightly controlled. In this networked, social-media age, data still needs to be mediated even as it is made more accessible. The governance program needs to solve for:

Accessibility. Rights management needs to be addressed through governance, not judged solely by the data owner or publisher. The primary reason for this is the “Breadth” dimension. A division will generate data through the course of doing business. Some of this data will be critical to its business and proprietary; other data may be less critical and non-proprietary (picture a 2x2 matrix, with criticality on one axis and proprietary status on the other). There will be “if only we could…” concepts in other divisions that the data owner has no stake or interest in resolving. Governance is the mechanism for resolving these big questions, as well as smaller ones.
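To make that matrix concrete, here is a minimal sketch, in Python, of how a governance-defined classification could drive default access decisions. The class names, tiers, and policy outcomes are hypothetical and illustrative only, not a prescription for any particular tool.

```python
from dataclasses import dataclass

# Hypothetical 2x2 classification: business criticality x proprietary status.
@dataclass
class DatasetClassification:
    business_critical: bool
    proprietary: bool

def default_access_tier(c: DatasetClassification) -> str:
    """Map the 2x2 matrix to a default sharing posture.

    The point: governance sets these defaults; the data owner alone does not.
    """
    if c.proprietary and c.business_critical:
        return "restricted: owner approval plus governance review"
    if c.proprietary:
        return "controlled: owner approval required"
    if c.business_critical:
        return "shared: read access, with quality caveats"
    return "open: discoverable and reusable across divisions"

# An "if only we could..." request from another division lands in the open tier:
print(default_access_tier(DatasetClassification(business_critical=False, proprietary=False)))
```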

Transparency. Data won’t be found, certainly not quickly, unless there is some catalog or similar mechanism that makes it known that the data exists (Breadth and Speed). If an idea emerges that requires discovery, there needs to be a mechanism to prioritize that discovery (a good example is Deployment, where an application owner may or may not make time available to someone with an innovative data idea).
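As an illustration of how little it takes to get started, here is a minimal sketch of a catalog record and a keyword search over it. The fields and dataset names are hypothetical; a real catalog would also index lineage, ownership, and usage.

```python
from dataclasses import dataclass, field

# Hypothetical catalog record: just enough metadata to make a dataset
# discoverable (Breadth) and quickly assessable (Speed).
@dataclass
class CatalogEntry:
    name: str
    owner: str
    description: str
    tags: list[str] = field(default_factory=list)

def search(catalog: list[CatalogEntry], term: str) -> list[CatalogEntry]:
    """Naive keyword match across name, description, and tags."""
    term = term.lower()
    return [e for e in catalog
            if term in e.name.lower()
            or term in e.description.lower()
            or any(term in t.lower() for t in e.tags)]

catalog = [
    CatalogEntry("online_orders", "ecommerce", "Customer purchases from the web store", ["retail", "customer"]),
    CatalogEntry("store_visits", "stores", "Foot-traffic counts by location", ["retail", "physical"]),
]
print([e.name for e in search(catalog, "customer")])  # ['online_orders']
```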

Responsibility. Having done work with data privacy since the early days of GDPR, I am excited that GenAI is likely to bring privacy out of the “compliance office” and make it more central to brand value. When I’ve had clients with sophisticated data governance programs, privacy was a natural extension of things they were already doing. For those who did not have data governance, the privacy program was their first foray into it. The Governance model is what enables all aspects of governance, privacy as well as explainability and AI governance, to happen quickly and authoritatively. The approaches for these different concepts don’t need to be the same, but as with any structures within a company culture that need to interact with each other, they should rhyme.

Quality. Quality gets more important in every cycle of the numbers game: BI required higher quality than Reporting, and Analytics higher than BI. The Governance program needs to be nimble, able to redirect the attention of data stewards quickly as priorities change. It also needs to bring transparency to quality. Not all data can, or needs to be, managed to the same level of quality, so users should have available to them information about the quality of datasets, along with plain-English explanations of the rules.
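As a sketch of what that transparency could look like, here is a hypothetical quality report that pairs each rule with a plain-English explanation and a pass rate, so a user can judge whether a dataset is fit for their purpose. The dataset, rules, and numbers are invented for illustration.

```python
# Hypothetical quality metadata published alongside a dataset.
quality_report = {
    "dataset": "online_orders",
    "managed_quality_level": "analytics",  # not all data needs the same level
    "rules": [
        {
            "rule": "order_total >= 0",
            "plain_english": "Order totals can never be negative.",
            "pass_rate": 0.999,
        },
        {
            "rule": "customer_id IS NOT NULL",
            "plain_english": "Every order must be linked to a known customer.",
            "pass_rate": 0.981,
        },
    ],
}

def summarize(report: dict) -> str:
    """Render the report in plain English for data consumers."""
    lines = [f"{report['dataset']} (managed to the '{report['managed_quality_level']}' level)"]
    for r in report["rules"]:
        lines.append(f"  - {r['plain_english']} ({r['pass_rate']:.1%} of rows pass)")
    return "\n".join(lines)

print(summarize(quality_report))
```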

Operations

Governance is about strategy and decision-making. Operations turns decisions into action, and it needs to do so quickly. It is fair to assume that if there is a strong governance program, data “Depth” and “Breadth” will be taken care of: these are challenges of political coordination, and there are many tools that operationalize those decisions once they are made. Speed and Deployment have more operational blockers, because they require the coordinated action of many people with different skillsets.

Data Stewards will instantiate rules and will be the first line of resolution for data quality issues. They need to understand the data thoroughly and be partnered with data engineering teams that will help resolve problems. Data Governance orthodoxy places stewards in the business. This can work, although data stewardship in the business cannot be a part-time job; the part-time role for the business is “data owner.” If the volume of stewardship activity will not warrant multiple full-time steward roles for a business unit, consider placing these activities in IT, with data stewards aligned to business units and an escalation path to the owners. This ensures a career path for the data stewards, because they are part of a group, and avoids friction between them and the engineering team.

Data Engineers will build the pipelines that bring data to the points of consumption. This activity has often been managed in a craft-like way because data scientists, generally speaking, have worked that way. The market is past that: the craft model for data delivery does not scale, and while it may feel more nimble because it is “close to the business,” in a large enterprise orchestrating a complex data landscape the data engineering team needs to be in a continuous delivery model, like any other IT team building code.
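One way to picture the shift from craft to continuous delivery: pipeline steps written as small, testable functions that live in version control and pass automated tests before deployment, the same way application code does. A minimal, hypothetical sketch:

```python
# Hypothetical pipeline step expressed as a pure, testable function.
def clean_orders(rows: list[dict]) -> list[dict]:
    """Drop rows that violate basic contracts before loading downstream."""
    return [r for r in rows
            if r.get("customer_id") is not None
            and r.get("order_total", -1) >= 0]

def test_clean_orders():
    rows = [
        {"customer_id": 1, "order_total": 10.0},    # valid
        {"customer_id": None, "order_total": 5.0},  # orphan order: dropped
        {"customer_id": 2, "order_total": -3.0},    # negative total: dropped
    ]
    assert clean_orders(rows) == [{"customer_id": 1, "order_total": 10.0}]

if __name__ == "__main__":
    test_clean_orders()  # in practice, a CI job would run this on every change
    print("pipeline step passes its contract tests")
```

Because the step is a pure function, the same test runs locally, in code review, and in the deployment pipeline, which is what makes the model scale.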

Data Scientists will continue to be distributed, close to the business problems they solve. There needs to be a coordinating mechanism for them that goes far beyond a traditional “Center of Excellence.” The coordinating mechanism helps share best practices, yes, but it is also what links the data science work to the Governance program, ensures standardization so that concepts can scale, and creates opportunities for cost savings by pooling procurement and similar functions.

Conclusion

If a data strategy starts with “knowing what”, there is a “knowing how” that is critical to execution. That “knowing how” is embedded in Governance and Operations practices that need to be planned, implemented, and refined in order to respond quickly as strategy changes. It is this foundational ground game that ensures data can be leveraged on an ongoing basis as a competitive asset.
