Don’t Stop Measuring
Organizations love to measure everything, yet they rarely measure the leading indicators that reveal whether strategic changes are actually taking hold. Put a simple measurement strategy in place, and your organization will be able to target the reinforcement activities that drive long-term adoption.
As I type this blog title, I cannot help but think of the song "Don't Stop Believin'." Just as you draw out that chorus to reinforce how important it is to never stop believing, measurement proves equally important, particularly when trying to transform an organization. To break this down a bit further, your organization should measure several key metrics, ranging from engagement to completions to knowledge to performance and ultimately ROI, before, during, after, and perpetually while managing major change efforts. Organizations should then use those metrics to take meaningful actions that improve the numbers and increase adoption and usage even further. In short, they should never stop measuring, because even once change targets fully adopt the solutions, they can retract their support and fall back into legacy behaviors, putting the investments required to implement the initial changes at risk.
Before breaking change measurement down into its key components and discussing how organizations can implement it effectively, let us look at the good and the bad of change measurement. For our stories today, we'll look at two big box retailers: one used measurement not only to determine progress but also to collect leading indicators that informed reinforcement activities; the other used surface-level metrics that told a compelling progress story but led to uninformed reinforcement activities and negative ROI.
Story 1: Intentional and Continuous Change Measurement Leads to Measured Reinforcement
A major big box retailer made a significant investment in developing a consistent approach to managing change, so much so that they decided to invest in the creation of a change management Center of Excellence (COE). The organization had absorbed major changes, including rapid store expansion, format changes, role changes, technology advancements, and many of the other common changes that growth companies experience.
When it came to change efforts, the organization traditionally "jammed in the solution and assumed people would adopt the changes." Because the organization did not realize the impact this approach had on people, process, technology, and business results, they assumed everything went well. The business continued to grow after they implemented changes, results continued to improve, and very few people complained about the changes openly. Unfortunately, they did not look under the proverbial covers to understand the true impact: attrition came in 25% higher than the industry average, employee sentiment (and therefore service performance) lagged behind industry competitors, and customers shopped there only for the low-cost options.
As store growth slowed, the organization recognized a need to optimize operations in an effort to cut costs and protect the bottom line. Given the results of previous major change efforts, they knew implicitly that they needed a different approach but did not know quite how to get there. To figure it out, they hired a number of employee engagement and change management experts to stand up a centralized framework and a small tiger team, designed to create consistent adoption solutions for all changes, in hopes that the results would follow.
To cut to the chase, the results followed. First, they built an enterprise change framework that enabled leaders at all levels to help facilitate change, regardless of the size and scale of the changes. Next, they built a playbook and service delivery model that spanned everything from very simple changes up to enterprise-transformation-scale changes. Lastly, they measured results before, during, and after the creation of the change management COE so they could understand the value delivered. Sure enough, the metrics showed improved employee engagement as the organization announced changes, better awareness and knowledge as the organization rolled out the changes, and ultimately better adoption, followed by better-than-expected ROI: programs averaged 6% above IRR targets when program teams deployed the common change practices outlined by the COE.
You might ask what magic bullet this retailer found, and the answer is quite simple. They identified key leading metrics that demonstrated support and willingness to learn more at the beginning of change efforts. Then, during knowledge and awareness sessions, they measured not only completions but also knowledge and behavioral improvements. As the changes rolled out, they measured everything from sentiment to performance improvements to help show the value of the changes they implemented. These simple measurement milestones allowed them to do the most important part: take action to head off resistance at key stages of the change process. And that allowed the organization to implement key changes that helped them become an industry leader.
Story 2: Surface-Level Measurement Provides Little Insight
As promised, I will share a second retail story, about a retailer that made transformational changes to their stores to drive a different customer experience but took little to no time to measure employee performance throughout the process.
In this case, a major retailer recognized an opportunity to digitize the in-store experience, better tying the digital experience outside the store to the shopping experience inside it. They even went as far as enabling in-aisle checkout with digital receipts, creating a seamless experience inside the store while letting customers leave with the product that day. The technology tested exceptionally well with customers, and store operations leadership knew the plan would deliver the financial results they expected.
Unfortunately, the program team did not consider the needs, wants, and desires of the employees who had to help create that seamless omni-channel experience by assisting customers in the aisles. The organization decided to roll out the changes without even making employees aware, assuming employees valued their jobs and would work under any conditions. When customers started asking employees for support with the in-store digital experience, like accessing the store's guest network, employees did not know how to help. When customers scanned QR codes for product information, employees quickly felt foolish, because the descriptions behind the QR codes more often than not exceeded their own product knowledge. And, to the point of this blog, the organization did not measure anything, because they did not think the change impacted employees.
In the end, the entire program proved a huge failure: the organization did not support the employee transition, and without key measures to identify potential resistance, employees simply resisted the changes. As employees grew frustrated, their interactions with customers declined, and an organization widely known for strong customer service started to receive a torrent of backlash for taking employees out of the customer shopping experience. By the time the program was finally pulled, the organization had spent $100M to differentiate itself from industry peers through the digital shopping experience and saw a mere $38M in revenue improvements in the first quarter after implementing the new tool. This significant loss frustrated the board, and some key leaders lost their jobs over the decision not to measure and adequately support employees through a major organizational change.
In the good story, you see some key themes that explain the process for collecting data and using it to take action during change efforts. Before talking about the how-to, let us first discuss what types of metrics you should collect. There are two framing points worth considering.
First, organizations must realize that in change efforts they can never truly reach the holy grail of analytics, predictive analytics; rather, they can reach proactive analytics that help them understand progress. Change efforts involve people, and even with years of historical trend data and plenty of proven tactics, people make prediction difficult. At the end of the day, people respond however they want to changes in stimuli, and even the best data scientists can never fully predict how people will respond, even when the same people experience the same change.
Second, organizations must measure both leading and lagging metrics. Leading metrics give organizations insight into things like preliminary sentiment, early awareness and knowledge, and pre-go-live knowledge. Through surveys, knowledge tests, and practical exams, organizations can assess their readiness to implement the change and put the right levels of support in place to ensure higher adoption rates. Lagging metrics show the value delivered, confirm that people are truly adopting and using the changes (regardless of what is changing), and prove to board members that they invested wisely.
What should organizations measure?
When working with organizations on building out measurement plans, I typically recommend a simple five-level measurement framework. See below for examples of the metrics I typically propose they include:
L1 – User Satisfaction
Opinion of the application and the support provided to drive adoption, measured through surveys, interviews, and anecdotal feedback.
Examples:
• Pulse Surveys
• Associate Engagement Surveys
• Focus Groups

L2 – Knowledge of the Tool
Understanding of how to use the tool to perform the job going forward, measured with quick-check assessments delivered online for easy tracking.
Examples:
• Training Knowledge Checks
• Training Assessments
• Certifications

L3 – Usage
Measured by time spent doing productive work in the application, as well as change readiness surveys of the end-user population.
Examples:
• # of specific user actions (clicks)
• Pulse Surveys
• Logins and page views

L4 – Business Metrics
Established by the program team at the launch of the program, measured by calculating baseline business metrics at go-live and again at key intervals (e.g., 30/60/90/180 days).
Examples:
• # of specific user actions
• Training completion
• Support Tickets
• Pulse Surveys

L5 – ROI
Loaded costs of design, deployment, and launch relative to the value delivered, in terms of OpEx savings.
Examples:
• Operational efficiency
• Costs
• Revenue growth (if applicable)
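To make the L5 measure concrete, here is a minimal sketch of the ROI arithmetic, assuming loaded cost is simply the sum of design, deployment, and launch costs. The function name and cost split are illustrative, not from any real program; the example figures echo Story 2, where $100M spent against $38M of first-quarter value is deeply negative at that point in time.

```python
# Hypothetical ROI check for a change program: value delivered vs. loaded costs.
# All figures and names are illustrative.

def simple_roi(design_cost, deployment_cost, launch_cost, value_delivered):
    """Return ROI as a fraction of total loaded cost."""
    total_cost = design_cost + deployment_cost + launch_cost
    return (value_delivered - total_cost) / total_cost

# Story 2's rough numbers: $100M spent (split here arbitrarily), $38M back
# in the first quarter.
roi = simple_roi(40_000_000, 35_000_000, 25_000_000, 38_000_000)
print(f"ROI to date: {roi:.0%}")  # prints "ROI to date: -62%"
```

A real program would accumulate value over multiple measurement intervals rather than judging ROI on a single quarter, but the shape of the calculation stays the same.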
Once the organization establishes what to measure, it should take the time to set benchmarks for leaders and team members to ensure everyone plays their part in the adoption process. In the past, I have worked with organizations to establish something simple that looks like the following:
Pulse / Survey Results (L1)
+ Awareness / Desire Results (L2)
+ Knowledge / Practice Performance Results (L3)
+ Key Business Metric Improvements (L4)
= Adoption Score
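The composite above can be sketched in a few lines of code. The equal weighting and the 0–100 scale for each level are assumptions for illustration; a real program would calibrate the weights against its own benchmarks.

```python
# Hedged sketch of a composite adoption score built from the four
# leading measurement levels (L1-L4). Weights and scale are assumed.

def adoption_score(l1_survey, l2_awareness, l3_knowledge, l4_business,
                   weights=(0.25, 0.25, 0.25, 0.25)):
    """Each input is that level's result on a 0-100 scale."""
    levels = (l1_survey, l2_awareness, l3_knowledge, l4_business)
    return sum(w * v for w, v in zip(weights, levels))

# Example: strong sentiment, weaker business-metric movement so far.
score = adoption_score(82, 74, 68, 55)
print(f"Adoption score: {score:.2f}")  # prints "Adoption score: 69.75"
```

Tracking this single number at each measurement milestone makes it easy to spot which level is dragging the composite down and to target reinforcement there.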
With an adoption score, the company can usually find correlations between the changes and financial results, at least at a narrative level, and with that information they can show they met their internal hurdle rates and keep their finance partners happy.
How do we display progress?
The last question clients often ask centers on reporting the information. Many of the measures can be managed in Excel spreadsheets or simple data tooling, but executives love their visualizations. Where possible, I recommend that organizations build dashboards that enable real-time (or near-real-time) data access and let leaders at all levels of the organization see change progress. Below is a sample of dashboards that tell compelling stories without the project team spending time building long PowerPoint decks that tell the story in narrative form.
Visuals like the ones above show change progress on several different levels in a very simple, easy-to-read format; in the end, leaders can self-serve the data and see it for themselves without getting the project team involved.
Wrap Up
Data provides a compelling explanation not only of performance but of trends, and of the actions needed to change those trends, in a way that prose alone rarely does. So just like your financial performance, your employee engagement, and your competitive performance, measuring progress around your change efforts will provide powerful information that enables the organization and program team to put the optimal support in place as employees experience change.
Thanks for reading. I look forward to the discussion.