Measure What Matters
Metrics and Productivity Measures are all about managing risk... and many will tell you that what gets measured gets managed. But are you measuring the right things, and are you measuring them in a way that will drive success?
Regardless of the industry you are in, regardless of the size and scope of your enterprise, regardless of what or who you manage... at some point you’re going to need to measure it. Measure how it’s performing, measure for trends, measure for costs and revenue, measure for effectiveness, measure productivity, measure performance... you will measure something done by someone in your realm. And how you take those measurements, and what you do in response to them, matters greatly.
At the end of the day, all enterprises involve humans. Humans behaving. Ideally, they’re doing what you need them to do... but just to be sure you’re going to need to measure that.
It’s important to remember that metrics don’t just measure behavior, they also create behavior. So, while leadership will spend plenty of time and energy determining which outputs to measure, they often put far less energy (if any) into deciding which behaviors they want.
Behavior drives output, and Metrics drive Behavior
What your staff and teams and divisions DO creates what they Deliver. The route to deliverables is through their behavior. Manage behavior.
When you measure something, you impact it. And that’s not just true in physics... you can see an Observer Effect in any human endeavor. You measure something to see if people are doing what you want them to be doing... you report on those metrics... and the behavior changes. But not always in the way you want it to.
If you know that your team / department / division / company is going to change their behavior in response to the numbers you put in front of them (and you should assume they will), then you need to evaluate that impact. What behavior changed? Is it the one you wanted? Are the numbers better because the performance is better? Or are the metrics hiding something? Have you inadvertently created a ‘perform to the test’ mentality, rather than a ‘do it right the first time’ culture?
Designing the right metrics doesn’t have to be complex. Not easy, perhaps, and certainly iterative, but not complex.
Think about what your teams actually do... about what they actually spend time on. How much of that is truly value-add? How much of it drives the need for excessive or repetitive oversight? How much of it is driving cost? How much is inhibiting delivery?
And perhaps more importantly... what should they be doing?
As most are aware, at the Leadership levels, metrics are developed to measure and manage delivery. In Clinical Research, an area near and dear to my heart, that means hitting particular milestones around Start-Up, Recruitment, Data Cleaning, Database Lock, and Final Delivery... and it means hitting the metrics that drive the activities to meet those milestones. At the ground level, metrics measure individual performance and individual contribution to larger goals, and if the metrics are developed well... they can be good indicators of hitting the larger Delivery metrics.

But when groups start to become ‘specialized’, you get the “I’ll do this part, you do that part” breakdown... which, more often than not, leads to the “I did my part” excuse. Someone needs to OWN the overall delivery for a particular segment of Study Delivery. Someone needs to be Accountable for that segment... and it needs to be someone close enough to the ground floor to intercept challenges when they first occur. If all of the Accountability is at the top, you’re too far removed from the action. The General is Accountable for the War... but you need Commanders Accountable for the Battles, and Lieutenants Directing the Charges.
When staff at the ground level don’t understand what they are Accountable for, and their evaluations and rewards are based on Task Metrics, you consistently miss your larger Accountability Metrics.
Getting everyone on board
Some staff will ask detailed questions about why certain metrics are in place, how they are measured, who’s evaluating them, and what conclusions are being drawn. A detailed question deserves a detailed explanation.
And then it deserves an open ear... you will gain insights into how behaviors are affected by these metrics, and you will hear other ideas and suggestions of how activities may be measured. Some of those may be great ideas, some perhaps not so, but listen... insight is always a gift.
Some staff may simply grumble and complain or make never-ending excuses. Listen to them. And then give them the short answer they deserve... “We measure what we do and what we deliver, because that is how we get better. And all of us can always be better.”
When staff simply ask ‘why do we need these metrics?’, give them a simple answer... “We do this because it lets us know where we were, where we are, and where we are going, and it helps us define how to get there.”
And you will always have some staff who do not ask at all. Either they’re confused, they don’t understand the importance, or they’re too scared to inquire. Explain to them, in detail, why a given metric is helpful to the organization, how it is helpful to the staff involved, how it was selected, how it is measured, what behaviors the company is looking for in response to those numbers, and what you expect of them. Help them see the metrics as a tool rather than a burden.
What to measure
Don’t measure your staff on the nit-picky, because they’ll adapt their behaviors to hit the nit-picky... measure them on things that drive the deliverables you want (via the behaviors and the engagement you are seeking).
In Clinical Research, if what you want is for your Start-Up staff to expedite site contracting and submissions, then measure them on those turn-around-times... but if what you really want is for them to expedite Site Activation, then measure and reward them on Site Activation turn-around-time. And you need to understand that they’re not the same thing.
And when you display those turn-around times, include the non-process time... if staff can see how often and how long things (usually pieces of paper) “sit and wait for people”, they will manage their cycle time better. People want to be effective, and they believe in your mission... help them to self-identify the priorities. Think about what behavior you’re looking to drive in yourself, in your team, in your organization... and make (& remake) the metric to encourage that behavior.
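To make that concrete, here is a minimal sketch in Python of how a cycle-time display might separate active process time from waiting time. The step names, dates, and times are invented purely for illustration; the only assumption is that you can record when work on each step started and finished.

```python
from datetime import datetime

# Hypothetical step records for one site contract: when active work on each
# step started and when it finished. The gaps between steps are the
# "sit and wait for people" time.
steps = [
    ("Draft contract", "2024-01-02 09:00", "2024-01-03 17:00"),
    ("Legal review",   "2024-01-10 09:00", "2024-01-12 12:00"),
    ("Site signature", "2024-01-25 10:00", "2024-01-25 11:00"),
]

fmt = "%Y-%m-%d %H:%M"
parsed = [(name, datetime.strptime(start, fmt), datetime.strptime(end, fmt))
          for name, start, end in steps]

# Time spent actively working each step
process_seconds = sum((end - start).total_seconds() for _, start, end in parsed)

# Time spent waiting between the end of one step and the start of the next
wait_seconds = sum((parsed[i + 1][1] - parsed[i][2]).total_seconds()
                   for i in range(len(parsed) - 1))

DAY = 86400
print(f"Active process time: {process_seconds / DAY:.1f} days")
print(f"Waiting time:        {wait_seconds / DAY:.1f} days")
print(f"Total cycle time:    {(process_seconds + wait_seconds) / DAY:.1f} days")
```

A split this simple often makes it plain that the waiting dwarfs the doing... which is exactly the insight you want sitting in front of the team.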
Similarly, if you want your CRAs to complete visits, then measure them on Visit Completion or Days-on-Site... but if you want them to clean data and ensure patient safety, then measure and reward them on Site Cleanliness and Patient Oversight.
If you want trip reports submitted within 5 days, then instead of reporting on non-compliance, try highlighting compliance... and sharing lessons learned that others could mimic. “92% of CRAs with 90% or greater on-time Trip Reports say they draft the report while still on-site”... showcase behaviors that might help the less-than-perfect become more compliant.
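As a purely illustrative sketch (the CRA names, submission data, and thresholds below are hypothetical), a compliance report framed around who is hitting the target, rather than who is missing it, could be as simple as:

```python
# Hypothetical trip-report log: (CRA name, days from visit to report submission).
# Names, numbers, and thresholds are invented for illustration.
reports = [
    ("Alice", 3), ("Alice", 4), ("Alice", 6),
    ("Bob",   2), ("Bob",   5), ("Bob",   4),
    ("Cara",  8), ("Cara",  9), ("Cara",  3),
]

ON_TIME_DAYS = 5     # "submitted within 5 days"
TARGET_RATE = 0.90   # showcase CRAs at or above 90% on-time

# Group each CRA's submissions and mark whether each one was on time
by_cra = {}
for cra, days_to_submit in reports:
    by_cra.setdefault(cra, []).append(days_to_submit <= ON_TIME_DAYS)

# Report compliance positively: lead with who is hitting the target
for cra, results in sorted(by_cra.items()):
    rate = sum(results) / len(results)
    marker = "<- showcase their approach" if rate >= TARGET_RATE else ""
    print(f"{cra}: {rate:.0%} on-time {marker}")
```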
If you want your functional leads to focus on delivering their department’s activities, then measure them on departmental tasks. But if you want them to own their department’s component of study delivery, then measure and reward them on more than just their deliverables... they need to also own their budgets and timelines, and they need to own the hand-offs they make to other departments. They need to see themselves as parts of a larger team accomplishing a larger goal, and they need to feel a commitment and engagement with that larger purpose.
If you want sites to enter data within 5 days, then instead of screaming at sites for late data-entry, try showcasing how timely data-entry correlates with fewer Protocol Deviations, fewer queries, and faster payment to the site.
Making metrics useful can be more art than science, but far too often metrics are determined and then never revisited or revised or fine-tuned. Remember, at the end of the day, you’re trying to drive the right behaviors... so if you want your ‘on-call’ CRAs to be ready and willing to go to a site at a moment’s notice, then reward them for the visit, not the ‘on-call’ hours. An activity that requires higher order thinking, or greater experience, or a broader perspective, needs to be completed by the person who is Accountable for the impact of that activity. Instead of managing tasks, try managing Effectiveness. Set your Metrics, and the rewards associated with them, on Accountabilities.
Measure what matters. To your team, to your department, to your organization. Measure the behaviors you’re looking to drive, and then be open to re-doing those metrics as behaviors shift. If a metric doesn’t showcase what you think others need to see, then create additional measures and report on them... if they’re valuable, leadership will pick them up.