Measuring your fuzzy leadership ideas with surveys

In this post:

* Identifying and operationalizing the fuzzy leadership and management concepts you want to quantify

* Designing and administering surveys to get useful data and high response rates

* Getting survey fundamentals right to avoid common survey pitfalls

When you come right down to it, a survey is just a question or group of questions, designed to make things that are hidden inside the human mind knowable as data. Survey research, therefore, provides access to attitudes, beliefs, values, opinions, preferences, and other cognitive descriptors that, without the help of survey tools, remain anecdotal or entirely indescribable and, therefore, not useful to math, science, or people analytics. In other words, surveys help you convert nebulous concepts into hard numbers so that you can analyze them. Other things might be going on, but the main benefit of surveys is that they allow you to relate fuzzy concepts to more tangible and observable things, like individual and group behaviors and outcomes. To this end, surveys have been a fundamental tool in fields like psychology, sociology, and political science for more than a century — and in newer fields, like marketing, for decades. It should come as no surprise, then, that surveys are essential to people analytics, too.

Employee surveys and related feedback instruments, when managed well, are great tools to diagnose what employees think — and they can help you determine the relationship between these views and important outcomes. It's then possible to obtain the nuanced details you need to take the right actions to influence collective behavior and to predict future outcomes. A good survey can uncover great insights about things going on inside folks' minds — insights that have the potential to guide meaningful actions that drive collective success outside their minds. When surveys are designed and executed poorly, though, they can produce precisely the opposite of what you're looking to achieve — increasing confidence in the wrong ideas and eroding employee trust and commitment.

Discovering the Wisdom of Crowds through Surveys

James Surowiecki's The Wisdom of Crowds: Why the Many Are Smarter Than the Few and How Collective Wisdom Shapes Business, Economies, Societies, and Nations is, as the title implies, all about how the aggregation of information in groups can result in decisions that, under the right conditions, are better than decisions made by any single member of the group.

The premise of the wisdom of crowds is that, under the right conditions, groups can be remarkably intelligent — that is to say, the estimate of the whole can often be smarter than the estimate of the smartest person in the group. The simplest example of this is asking a group of people to do something like guess how many jelly beans are in a jar.

So, if I had a jar of jelly beans and asked a bunch of people how many jelly beans were in that jar, the collective guess represented by the average would be remarkably good — accurate to within 3 to 5 percent of the actual number of beans in the jar. Moreover, the average of the guesses would likely be better than 95 percent of all individual guesses. One or two people may appear to be brilliant jelly bean guessers for a time. Still, for the most part, the group's guess would be better than just about all individual guesses, particularly over repeated tries.
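
If you want to see the averaging effect for yourself, here is a toy simulation of the jelly bean example (my own illustration, not from the book). It assumes the guesses are independent and scatter around the true count with right-skewed noise; the specific numbers are illustrative only.

```python
import numpy as np

rng = np.random.default_rng(seed=7)

true_count = 1000      # actual number of jelly beans in the jar
n_guessers = 500       # size of the crowd

# Model each guess as the true count scaled by independent, right-skewed noise.
guesses = true_count * rng.lognormal(mean=0.0, sigma=0.25, size=n_guessers)

crowd_estimate = guesses.mean()
crowd_error = abs(crowd_estimate - true_count) / true_count
individual_errors = np.abs(guesses - true_count) / true_count

# Fraction of individuals whose guess was worse than the crowd average.
beaten = (individual_errors > crowd_error).mean()

print(f"Crowd estimate: {crowd_estimate:.0f} ({crowd_error:.1%} off)")
print(f"The crowd average beat {beaten:.0%} of individual guesses")
```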

Though counting jelly beans doesn't sound practical, what is fascinating is that you can see this phenomenon at work in more complicated and more useful situations. Stock markets and sports betting are two examples, and they are just the tip of the iceberg.

Think about something like Google, which relies on the collective intelligence of the web to seek out those sites that have the most valuable information. Google can do an excellent job of this because, collectively, the individualized efforts of this disorganized thing we call the World Wide Web, when understood through mathematics, can be incredibly useful when it comes to finding order in all the chaos.

Wisdom-of-crowds research routinely attributes the superiority of crowd averages over individual judgments to the elimination of individual noise. This explanation assumes independence of the individual judgments from each other. In other words, for the wisdom of crowds to work, you have to be able to capture and combine individual predictions while avoiding group discussion that creates groupthink. That sounds a whole lot like a survey to me. The crowd also tends to make better decisions when it’s made up of diverse opinions and ideologies, and when predictions are captured in a way that can be combined and evaluated mathematically. Again, that sounds like a survey to me.

The wisdom-of-crowds concept suggests that even subjective information can have high predictive value if the chaotic thoughts of people can be organized in a way that they can be analyzed in aggregate.

Pay careful attention: what I propose is working to harness the wisdom of crowds more successfully for your company through careful survey design and implementation. I am not generically recommending surveys as a check-the-box exercise that will take you to the promised land. There are a lot of challenges to fight through, and we may not all arrive together.

Oh, the Things We Can Measure Together

You may think of a survey as a single-use company satisfaction poll; however, survey data can be used in many more ways than this.

At a high level, surveys can be used to quantify what was previously a qualitative idea, identify that idea's frequency in a population, compare a part of a population to the whole, compare one population to another, or look for changes over time. After a qualitative idea has been quantified as a survey measurement, these measurements can then be mathematically correlated with each other and with other outcomes. Though the subjective opinion may be true or false, accurate or inaccurate, precise or imprecise, it's true to itself, and as data, it is inarguably a new data point. The degree to which the new data point is useful in analysis for explaining or predicting phenomena must stand on its own two feet. The proof is in the pudding. In posts that follow this one, I will illustrate real-world analysis of the varying ability of survey questions to predict objective phenomena, like employee exits, and share my thoughts on the implications of the combined data.

The range of possibilities for what concepts you can describe using a survey, how you use the data the survey produces, and why you use the data the survey produces are nearly endless. And, when it comes to all the varied ways you can apply surveys to learn about people, the possibilities there are endless as well. That said, I'll highlight some key survey types and uses as a way to fire up your imagination for the work ahead of you.

Surveying the many types of survey measures

Employee surveys can be designed to capture many different types or categories of information stemming from or influenced by psychology. In the following list, I describe eight categories and provide an example of what a survey item would look like using a Likert agreement rating scale design:

Awareness: An awareness is knowledge or perception of a situation or fact. For example:

I have a clear understanding of the priorities of <Company> over the next three months.

I have a clear understanding of what others expect of me in this job over the next three months.

Attitudes: An attitude is a psychological tendency or predisposition that is expressed by evaluating a particular object with some degree of favor or disfavor. It could be about a person, a group of people, an idea, or a physical object. Attitude is formed by a complex interaction of cognitive factors, like ideas, values, beliefs, and perceptions of prior experiences. The attitude can characterize the individual and can influence the individual's thoughts and actions, and the results, in turn, can either change or reinforce the existing attitude. For example:

I am inspired by the people I work with at <Company>.

I feel motivated to go beyond my formal job responsibilities to get the job done.

Beliefs: Beliefs are ideas about the world — subjective certainty that an object has a particular attribute or that an action leads to a particular outcome. Beliefs can be tenaciously resistant to change, even in the face of strong evidence to the contrary. For example:

Overall, I think I can meet my career goals at <Company>.

I have the opportunity to do what I do best in my work at <Company>.

Intentions: An intent is something a person is resolved or determined to do. For example:

 I intend to be working at <Company> one year from now.

If I have my way, I will be working for <Company> three years from now.

Behaviors: Behaviors are how a person acts or conducts herself, especially toward others. For example:

My manager gives me actionable feedback regularly.

My manager has had a meaningful discussion with me about my career development in the past six months.

Values: Values are ideals, guiding principles, or overarching goals that people strive to obtain. For example:

The values and objectives of <Company> are consistent with my values and objectives.

I find personal meaning in the work I do at <Company>.

Sentiments: In its purest sense, sentiment is a feeling or an emotion. (Some definitions of sentiment overlap with opinion or attitude.) For example:

I am proud to tell others I work for <Company>.

I can recall a moment in the past three months when I felt genuine happiness at work.

Opinions: An opinion is a subjective view or judgment formed about something, not necessarily based on fact or knowledge. A person’s opinion is kind of like an image — the picture the person carries in his mind of the object, in other words. A picture may be blurred or sharp. It may be a close-up, or it may be a panorama. It may be accurate, or it may be distorted. It may be complete, or it may be just a portion. Each person tends to see things a little differently from others. When people lack information — and we all do — we tend to fill in a picture for ourselves. For example:

<Company> seems like it’s in a position to succeed over the next 3 to 5 years.

 I have the resources and tools I need to be successful.

Preferences: A preference is a greater liking for one alternative over another or others. Though there are exceptions, you'd generally measure preferences by asking a series of contrasting trade-off questions and then inferring from the responses you get to the whole set how employees rank-order each option. Here are a few simple item examples (you would have a lot more):

* I prefer that <Company> put more future investment in the 401k company match over increasing the company contribution to healthcare premiums.

* I prefer that <Company> put more future investment in employee technical learning-and-development programs over the big annual company event.

Though these simple examples are enlightening, they don't get you very far. You have to decide what you're trying to learn, what items you want to use to learn it, and why — and then you have to put it all together. The next several sections show you how.

Measuring Readiness to Change

A variety of theories and models have been developed to understand the relationship between changing beliefs, attitudes, intentions, and changing behavior. One of the most widely used theories is the transtheoretical model of behavior change (TTM). TTM is an integrative theory of therapy that assesses an individual's readiness to act on new healthier behavior and provides strategies to guide either yourself or other people through the process of change.

According to TTM theory, change occurs in a process that can be described (and measured) by a series of stages:

  1. Precontemplation ("not ready") – The person is not intending to take action in the foreseeable future and can be unaware of any reason to.
  2. Contemplation ("getting ready") – The person is beginning to become aware of a reason to change and starts to look at the pros and cons of their continued actions.
  3. Preparation ("ready") – The person is intending to take action in the immediate future and may begin taking small steps toward this action.
  4. Action – The person is taking specific, overt actions toward their intention to change.
  5. Maintenance – The person has been taking action for at least six months.
  6. Termination – The person has stabilized wherever they are.

The TTM process is analogous to communication theorist and sociologist Everett Rogers' stages in his theory of the diffusion of innovations:

  • Knowledge – The person is first exposed to innovation but lacks information about the innovation. During this stage, the individual has not yet been inspired to find out more information about the innovation.
  • Persuasion – The person is interested in the innovation and actively seeks related information.
  • Decision – The person takes the concept of the change and weighs the advantages/disadvantages of using the innovation and decides whether to adopt or reject the innovation. Due to the individualistic nature of this stage, Rogers notes that it is the most challenging stage on which to acquire empirical evidence.
  • Implementation – The person employs the innovation to a varying degree depending on the situation. During this stage, the individual also determines the usefulness of the innovation and may search for further information about it.
  • Confirmation – The individual finalizes his/her decision to continue using the innovation.

If you are trying to influence some change, you can use a series of statements measuring awareness, belief, attitude, intention, and behavior to figure out where people are in the stages of change and to measure whether the communications or actions you have taken have increased or decreased the likelihood of an action.
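
As a rough illustration of that idea (not a validated instrument), the sketch below scores a respondent's agreement (1 to 5) with one awareness, belief, intention, and behavior item apiece and maps the pattern to an approximate stage. The item names and cutoffs are hypothetical assumptions you would replace with your own.

```python
# A toy heuristic: map agreement scores (1-5) on a few change-related items
# to an approximate stage of change. Items and cutoffs are hypothetical.

def stage_of_change(responses: dict[str, int]) -> str:
    aware = responses["aware_of_reason_to_change"]
    believes = responses["believes_change_will_help"]
    intends = responses["intends_to_act_this_quarter"]
    acting = responses["has_taken_action_recently"]

    if acting >= 4:
        return "Action / Maintenance"
    if intends >= 4:
        return "Preparation"
    if aware >= 4 and believes >= 3:
        return "Contemplation"
    return "Precontemplation"

example = {
    "aware_of_reason_to_change": 4,
    "believes_change_will_help": 3,
    "intends_to_act_this_quarter": 2,
    "has_taken_action_recently": 1,
}
print(stage_of_change(example))   # -> "Contemplation"
```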

Looking at survey instruments

Aside from the range of categories of psychological or social information that can be obtained from a survey, you have several people-related focus areas you can choose from when it comes to designing a survey. It's a self-limiting trap to think of that annual employee survey and assume that it is the only type of survey instrument you have for collecting data about employees. If you're a skier, this would be like tying one leg behind your back and then setting out to ski the most challenging route down the mountain.

Here are some of the many types of surveys you can use to measure the employee journey and people operations — and the employee experience.

<Tip> I have provided sample questions for all of these in the appendix of People Analytics For Dummies.

Employee journey: Time-context deep dives

(For sample items see Survey Questions To Collect Analyzable Data For Your Employee Journey Map)

* Pre-recruiting market research

* Pre-onsite-interview candidate survey

 * Post-onsite-interview candidate survey

* Post-hire “reverse exit” survey

* 14-day onboard survey

* 90-day onboard survey

* Annual check-up

* Quarterly pulse check-in

* Exit survey

People operations feedback: Subject-focused deep dives

* Recruiter feedback

* Interview team feedback

* Talent acquisition process feedback

* Company career page feedback

* New hire orientation feedback

* First-day feedback

* Manager feedback

 * Onboarding process feedback

* Company employee intranet portal feedback

* Career advancement process feedback

* Learning and development feedback

* Talent management process feedback

* Diversity facilitation feedback

* Facilities feedback

Getting Started with Survey Research

In the last decade, there has been an explosion of new feedback tools powered by new technology and services partners. Nowadays, it's virtually impossible not to give and get feedback. There are surveys, polls, reviews, and open channels galore, and the workplace is no different. Inside companies, inexpensive online tools like SurveyMonkey make it possible for anyone in your company to ask questions of anybody else in the company at any time, and, unfortunately, all too frequently they do.

Aside from increasing access to structured survey tools, members of today's workforce aren’t shy when it comes to the many other outlets for unstructured feedback available for use. They contribute to anonymous employer-rating websites like Glassdoor or industry blogs like ValleyWag (although this specific one is now defunct). Twitter, LinkedIn, and Facebook are all outlets through which people’s real opinions about working for your company go bump in the night. Even tools designed without feedback in mind at all — your run-of-the-mill collaboration-and-productivity tools — can become yet another place for "always-on" feedback from individual to individual or from individual to the company — grist for the analytical mill.

That makes for exciting times in this industry and people analytics everywhere, but more does not always equal better. Feedback without structure is noise, and noise with no purpose is the worst form of noise. Though a gentle white noise may at times be acceptable to drown out the outside world so that you can lull yourself to sleep, there's nothing like an unpleasant shrill tone to evoke a swift search and removal of the offending speaker.

All this is not to dismiss the increasing predominance of, and interest in, unstructured feedback devices. However, before you go chasing dragons disguised as windmills, you might consider learning the fundamentals first. That's what this post provides. Although options for feedback are abundant and diverse, the fundamental principles of how you determine good from bad, useful from useless, and music from noise are about the same. Learn the fundamentals, then innovate.

Designing Surveys

Defining and communicating the purpose of a survey and its learning objectives are critical first steps of a successful survey strategy. Start with defining the desired objective and specifying how the information needed for that objective will be used when acquired. If you don’t have a clear picture of these elements from the get-go, your survey effort drifts aimlessly — or even turns into a total waste of everyone’s time. All design begins with defining what you’re trying to change and why. When that is determined, it’s merely a process of working backward and defining assumptions carefully, which you either accept, reject, or modify with the evidence you collect.

The whole of people analytics — all data science, for that matter — boils down to the following sequence of meta-research activity that you (or your hired-gun analyst) are responsible for facilitating. As you can see, you can sum up the process as your attempt to find the right answers to the following questions:

* What do you want to change?

* Can you measure this thing you want to change? How?

* What other things influence this thing? How can we measure those things as well?

* Upon measuring the outcome that concerns you and the things you think may matter, can you relate them and infer the direction and strength of this relationship?

* Can you predict one measure from another measure?

* Can you infer a causal relationship to obtain the information you need to control the outcome you care about?

* Can you influence the outcome you care about by changing one or more of the antecedents?

These questions make it clear that it isn't enough just to survey the thoughts of people on a concept you think you care about — say, employee happiness, employee engagement, or employee culture — and then measure their responses. Sure, you can define your research objective as merely the effort to measure these things and then label the completion of the survey process a success; however, these measures collected by themselves leave many of the critical questions unanswered. Even the considerable achievement of making a fuzzy, previously unknowable concept measurable is a wasted effort if you don't learn anything about a) how these measures connect to other important company outcomes and b) how to control those outcomes.

<Remember> What you get out of a survey effort, or any analytical project is predestined in the design phase. A poor research design amounts to taking a very-low-odds shot at learning anything of value; a high-quality design means a bigger chance at gaining insights that move your company forward.

Working with models

Observing and trying to interpret what you observe is a native human activity. It’s the foundation for all survival. In your everyday life, however, you’re often blissfully unaware of the nature of your observations and interpretations, with the result that you make errors in both. People analytics makes both observations about employees and the interpretation of those observations conscious, deliberate acts.

People analytics examines the "people side" of companies as it is, as opposed to how folks with their less-than-reliable "sixth sense" believe it should be. People analytics is superior to the vagaries of individual bias and delusion because it is oriented toward observing and explaining repeating patterns among groups of people, as opposed to attempting to explain the motives of particular individuals. In this, the attention of people analytics is on the variables that differentiate people into group segments — based on years of prior work experience, for example, or educational background, personality, attitude, intelligence, pay, type of work, tenure, gender, ethnicity, age, and many more — in hopes of discovering patterns among these variables.

The understanding and interpretation of those things you measure in people analytics is the reason for using a model: an integrated conceptual mapping representing the relationships of variables, displayed either as a picture, a mathematical formula, or a series of statements containing a verifiable theory. Such models can be extremely detailed and complex, or they can start as a simple hypothesis: “Producing happier employees produces more productive employees,” for example.

Implied in such a conceptual mapping of variables is a verifiable theory, one that is operationalized into measures, collected from either systems or surveys, and then tested mathematically. If you're going to test the hypothesis statement, you must first define what you mean by happy, productive, and employee. After you have defined the basic terms, you need to figure out how to measure them — but take pleasure in the fact that you're halfway to designing a successful survey just by defining the terms carefully and specifying the measurement tools. These steps act as the foundation of your research design, which then shapes everything that comes afterward.

More on Models here: Enhancing your ability to understand, predict and influence organization performance with models

Conceptualizing fuzzy ideas

Conceptualization refers to the process of identifying and clarifying concepts: ideas that you and other people have about the nature of things. For example, think of the common words used in management and human resources — satisfaction, commitment, engagement, happiness, diversity, and inclusion. What do these words mean? When talking about diversity, are you talking about measuring the composition of your workforce by gender and ethnicity, the presence of stereotypical beliefs, any specific acts of discrimination, feelings that reflect prejudice, relational associations that reflect inclusion or exclusion, or all of the above? Is your focus on understanding how these matters apply (or don’t apply) when looked at through the lens of gender, ethnicity, age, disability, socioeconomic status, economic background, personality, philosophical bent, or another factor? You need to be specific if you want to create a research plan and measurement framework that works. Otherwise, you're just talking about fuzzy ideas that nobody understands or agrees on. You can’t analyze that.

Groups of people fail to act on fuzzy ideas. They fail to act because members of the group hold different, unspoken views. Either we disagree without knowing it, or we think we agree but are not agreeing on the same idea. You hold one idea of what the fuzzy idea means, and I hold another. The idea has not been defined in terms of what signs you would see if it existed and what signs you would see if it didn't. More specifically, you need this definition expressed as statements that people can either agree or disagree with.

<Remember> Specifying fuzzy ideas as measures is about more than just analysis: it also creates the shared definition a group needs before it can act.

Operationalizing concepts into measurements

Though conceptualization represents the clarification of concepts you want to measure, operationalization is the construction of actual concrete measurement techniques. By operationalization, I mean the literal creation of all operations necessary for achieving the desired measurement of the concept. The whole of all people analytics rests on the operationalization of abstract concepts for analysis. The creativity and skill that are applied to this operationalization effort are indicative of the quality of the analysts — which might explain why results vary so widely.

For example, one operationalization of employee commitment is to record the level of agreement of the employee to the survey item (“I am likely to be working for this company three years from now”) using the standard 5-point agreement rating scale. Another operationalization of employee commitment is to ask the same question with a 7-point agreement rating scale. Yet another operationalization of employee commitment is to provide several statements representing commitment, have the subject record the level of employee agreement for each, and then combine the response to all these statements into an index.
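
To make the contrast concrete, here is a minimal sketch (my own illustration, not a prescribed method) of the three operationalizations just described, each rescaled to a common 0-to-1 score so they can be compared. The rescaling convention is an assumption, not part of the definitions above.

```python
def rescale(response: int, points: int) -> float:
    """Map a 1..points agreement rating onto a 0-1 score."""
    return (response - 1) / (points - 1)

# Operationalization 1: a single item on a 5-point agreement scale.
commitment_a = rescale(4, points=5)                       # 0.75

# Operationalization 2: the same item on a 7-point agreement scale.
commitment_b = rescale(6, points=7)                       # ~0.83

# Operationalization 3: an index built from several 5-point items.
index_items = [4, 5, 3, 4]                                # responses to 4 statements
commitment_c = sum(rescale(r, points=5) for r in index_items) / len(index_items)

print(commitment_a, round(commitment_b, 2), round(commitment_c, 2))
```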

<Technical Stuff> Each way of measuring commitment described in the preceding paragraph implies a concept of commitment and represents a measurement of that concept. All measures contain some error, and how much error is in the measure is an important question. You can estimate that error by mathematically evaluating associated measures that you predict should be present or absent when the target concept is present or absent: collect your intended measure, measure the associated concepts and outcomes, and then test whether your measurement of the concept aligns with the predicted outcomes in a manner that is statistically significant, if not also useful. Statistical significance implies that the association between the measured concept and the predicted outcome is unlikely to be simply a result of random chance. Though any single statement may miss the mark, a combination of statements helps you grasp the whole; that's the idea behind indexes, which I cover in the next section. Thinking through these principles pushes you toward longer surveys when you have less certainty about your measurement of fuzzy concepts and shorter surveys when you have more certainty about your measures. The problem of measurement is a conundrum, and your only way out of it is more measurement. If you don't have a firm grasp of these survey design issues, you may not get anywhere with surveys. If you approach survey design methodically and mathematically, you will.

Though you have several different ways to ask survey questions, constructing surveys so that all survey items are using the same agreement response scale format has several important advantages:

* First and foremost, it's difficult, if not outright perilous, to mathematically evaluate responses to items together when different response scales are used.

* It’s much easier and less error-prone for the person taking the survey if they’re asked to use just one scale for the entire survey.

* Last but not least, once you get into the groove of it, you'll see that you can measure a wide range of topics by simply coming up with a statement that expresses the essence of an idea and then asking the person taking the survey whether she agrees or disagrees with that statement.

Designing indexes (scales)

An important tool for operationalizing complex ideas is an index, also referred to as a scale. An index, or scale, measures a respondent's attitude by using a series of related statements together with equal (or, in some cases, varied) weights. By measuring attitude with multiple measures, defined together as an index, you can gauge the sentiment of respondents with greater accuracy. The combined measure helps to determine not only how a respondent feels but also how strongly they feel that way within a broader range of values.

<Tip> You'll find a ton of examples of statements in the People Analytics For Dummies appendix.

Let's say you want to measure levels of employee commitment at your company. Using an index to evaluate commitment as a whole would entail using several (carefully chosen) survey items — perhaps items like these:

* I believe strongly in the goals and objectives of <Company>.

* I fit well into the culture of <Company>.

* I am proud to tell others I work for <Company>.

* If I were offered a comparable position with similar pay and benefits at another company, I would stay at <Company>.

* At present, I am not seriously considering leaving <Company>.

* I expect to be working at <Company> one year from now.

* I expect to be working at <Company> five years from now.

* I would be delighted to spend the rest of my career at <Company>.

If you go with a Likert scale where each item has a possible value from 1 to 5, the overall index for these eight items spans from 8 to 40.
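
Here's a minimal sketch of that arithmetic, assuming the eight items above are scored 1 (strongly disagree) to 5 (strongly agree) and combined with equal weights; the sample responses are invented.

```python
# Responses of one employee to the eight commitment items, each scored 1-5.
commitment_items = [4, 5, 3, 4, 5, 4, 3, 4]

index_score = sum(commitment_items)                 # possible range: 8 (all 1s) to 40 (all 5s)
index_mean = index_score / len(commitment_items)    # same information on the familiar 1-5 scale

print(f"Commitment index: {index_score} of 40 (item average {index_mean:.1f})")
```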

<Remember> Though indexes at first glance may seem to be asking the same old question in several different ways, they have several important advantages over single items:

* Well-constructed indexes are more accurate measurement tools than single measures. Though good survey question design insists that you measure only one thing at a time, frequently, concepts that you want to measure have no clear, unambiguous single indicator. If considering a single data item gives only a rough indication of a given variable, working with several data items can provide a more comprehensive, accurate, and reliable picture.

* Often, you want or need to analyze the relationship between several distinct concepts. Measures with only a handful of response categories may not provide the range of variation necessary for the math to isolate a clear correlation. In that case, an index formed from several items may be able to provide you with the variation you need. A single question with 5 response options is restricted to 5 possible values, but 10 questions with 5 response options each sum to a score between 10 and 50, which gives 41 possible values. It's much better to correlate against 41 possible values than 5, especially if your other variables have a wider range as well.

<Tip> Feel free to weight each statement more or less based on its independent correlation to some validating outcome measure — or any other sound logic you want to use. This means that, with the same 10 questions, you can achieve an even wider range of possible values.

* Indexes that gauge sentiment by including employee satisfaction, commitment, or engagement in the mix have proven to be more useful for predicting things like employee exit than any single item alone.

* Indexes produce important summarization for analysis and reporting. Several items are summarized by a single numerical score while preserving the specific details of all individual items for further analysis or explanation only if and when that detail is necessary. This means that instead of reporting all items from the index, you can begin by just reporting the index as a single measure – you may refer to it as a key performance indicator (KPI). One overall index measure is much easier to work with than all the component items that are contained in the index.

For example, imagine sharing the results of a 50-item survey with 100 different managers. That is 50 graphs, one for each item, which you have to produce for each of 100 managers, or 5,000 (100 x 50) graphs in all. Now imagine the reporting effort for the same 50-item survey with the items summarized by 5 index measures (each index containing 10 items). That is only 500 graphs (5 x 100). You have drastically reduced the workload and the possibility of confusion just by reporting the indexes. You can adapt this convention; the primary benefit of an index is that the particular items underneath it can be hidden away until that detail is needed. Depending on context and reporting environment constraints, you may make the detail accessible on an alternate page, by drill-down, or by request.
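
That rollup is straightforward to produce in code. Here's a hedged sketch using pandas, with invented column names and data, that reports one index score per manager segment and keeps the item-level detail available only for drill-down.

```python
import pandas as pd

# Hypothetical item-level survey responses (one row per respondent).
responses = pd.DataFrame({
    "manager": ["A", "A", "A", "B", "B", "B"],
    "commit_1": [4, 5, 3, 2, 3, 4],
    "commit_2": [4, 4, 3, 2, 2, 3],
    "commit_3": [5, 4, 4, 1, 3, 3],
})

item_cols = ["commit_1", "commit_2", "commit_3"]
responses["commitment_index"] = responses[item_cols].mean(axis=1)

# Report the single index KPI per manager segment...
kpi_report = responses.groupby("manager")["commitment_index"].mean().round(2)
print(kpi_report)

# ...and keep the item-level detail for drill-down only when it's needed.
item_detail = responses.groupby("manager")[item_cols].mean().round(2)
```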

When it comes to creating an index, just follow these four simple steps:

1. Identify a research question that focuses on a single concept.

2. Generate a series of agree-disagree statements that relate to the concept in varying aspects or intensity. The intent is not to create your final index but rather some items you want to test for possible inclusion in an index.

3. Establish a test group for your possible index items so that you can obtain survey responses to all test items together and collect subjective feedback on each item from the test subjects.

You should use a combination of subjective feedback and mathematical analysis to choose the best items to include in the final index. When testing survey items for the first time, you should sit down with at least 10 people and ask them if they find any ambiguity or confusion in the new items. You should also proactively ask the test subjects to explain how they interpret the items on the survey. These conversations, while subjective, help you see problems you may not have otherwise seen, so that you can improve or eliminate problem items.

4. After finalizing your list of statements and combining them into an index, decide whether you want to leave each statement at the same weight in the index or if you want to assign each statement a different weight based on some mathematically defensible logic.

<TechnicalStuff> In this post, I have not described the statistical procedure for creating a mathematically defensible index design consisting of the optimum combination of items to achieve a valid and reliable index while asking as few questions as possible. A standard statistical tool you can use to accomplish this goal is called factor analysis. Factor analysis is a statistical method used to describe variability among observed, correlated variables in terms of a smaller number of unobserved variables called factors. For example, it is possible that variations in 30 observed variables mainly reflect the variations in three underlying concepts (or any other number less than 30). Factor analysis is sometimes referred to as a "data reduction technique" because you can use it to find how items cluster together mathematically, such that you can remove items that are redundant or unnecessary to achieve the same result. The more an item correlates with another item, the closer they are likely to be to each other conceptually. Factor analysis can be used to observe the pattern of correlation between items, clusters of items, and other measures to decide on index design. Other methods you can apply to index construction include principal component analysis (PCA) and various machine learning algorithms referred to collectively as cluster analysis.
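
As one hedged example of how such an analysis might be run, the sketch below uses scikit-learn's FactorAnalysis on a matrix of item responses (simulated here) to look for a smaller number of underlying factors. The choice of scikit-learn, the simulated data, and the number of factors are my assumptions; in practice you would use your real item data and likely add rotation and reliability checks.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 300 respondents answering 9 items driven by 3 latent concepts.
n_respondents, n_items, n_factors = 300, 9, 3
latent = rng.normal(size=(n_respondents, n_factors))
loadings = rng.normal(size=(n_factors, n_items))
items = latent @ loadings + rng.normal(scale=0.5, size=(n_respondents, n_items))

fa = FactorAnalysis(n_components=n_factors, random_state=0)
fa.fit(items)

# Rows = factors, columns = items; large absolute loadings show which items cluster together.
print(np.round(fa.components_, 2))
```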

Testing validity and reliability

After you have initially operationalized a measure, your work is far from done. You still need to stack each specific measure against other concrete measures that either support or contradict the theory you’re trying to prove. For example, if you measure commitment, it makes sense to evaluate your commitment measure against actual employee retention/exit over time or referrals of candidates to open jobs. Stacking measures against other measures (particularly objective measures) allows you to test, validate, and improve the accuracy of your survey measures over time. If it turns out that your measure of commitment is an unreliable predictor of other, more objective measures that you think should be related, you need to make changes to improve the measure. If your tweaks don’t work, it's time to abandon the measure.
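
One simple way to run that kind of check, sketched below with simulated data, is to correlate the survey index with a binary outcome such as whether the employee actually exited in the following period (a point-biserial correlation). The data, the strength of the assumed relationship, and any threshold for "good enough" are assumptions you would replace with your own.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Simulated data: commitment index scores (8-40) and whether each employee exited.
commitment_index = rng.integers(8, 41, size=400)
# Assume lower commitment makes exit somewhat more likely (illustrative only).
p_exit = np.clip(0.6 - 0.012 * commitment_index, 0.05, 0.9)
exited = rng.binomial(1, p_exit)

r, p_value = stats.pointbiserialr(exited, commitment_index)
print(f"Point-biserial r = {r:.2f}, p = {p_value:.4f}")
# A meaningful negative r (with a small p-value) supports the measure;
# a near-zero r suggests the index needs rework or abandonment.
```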

<Remember> Sooner or later, you learn two essential ideas:

* There’s no single way to measure anything.

* Not all measures are equal to all tasks.

Improving survey design

Here's a handy list of do's and don'ts when it comes to survey design:

   *  Avoid hearsay. Ask mostly first-person questions, or in some cases, ask about observable behaviors of other specific people. Don't ask respondents to speculate about “the company” or “the culture” or about unidentified people’s thoughts and motives. Don't worry: you can still measure abstract ideas like “the company” and “the culture,” but you need to frame each item so that each person can answer from first-hand observation or experience, and then aggregate those responses to describe the larger abstract collective, as opposed to asking individuals to speculate about the abstract collective.

   *  Avoid compound sentences or “double-barreled questions.” In other words, avoid questions that merge two or more topics into one question.

   *  Avoid loaded and leading questions. That means you shouldn't use terms that have strong positive or negative associations. If your language implies that you expect the respondent to realize that she had better choose the answer you want, then there is a chance she'll choose the answer you want — no matter whether this answer correctly depicts her opinion.

   *  Avoid unnecessary distractions. Be careful of unusual question groupings and page breaks, which studies have shown can change the way people respond. Be spartan, deliberate, and consistent.

   *  Avoid questions or scales that pose problems. Use one response scale throughout the entire survey and make sure that the scale has regularly spaced and similar length labels, if at all possible.

      It also helps to use a scale with an odd number of choices (3, 5, 7, 11). Odd scales allow respondents to choose naturally between the options at either end of the scale or a neutral option. While some survey designs attempt to force difficult choices, research indicates this may frustrate the survey respondent and introduce error in situations where the respondent genuinely has a neutral opinion.

    Questions designed to require ranking of multiple items in a list can be useful for some purposes. Still, this technique should be used sparingly because it is more difficult for the respondent to complete and more exposed to errors than other question designs. If your objective is to find the relative positions of a series of statements, there are other ways to infer an order mathematically from the inputs of survey respondents using a Likert scale, without requiring survey respondents to rank multiple items at once.

   *  Maintain balance and adhere to strong design constraints. As much as possible, design survey sections so that they contain a similar number of items, make sure items have a similar word count and create indexes that have a similar number of items, if at all possible.

   *  Assess each question for focus, brevity, and clarity. Is the question expressed as briefly, clearly, and simply as it can be? Eliminate overgeneralization, overspecificity, and overemphasis.

   *  Assess each question for importance. Cull survey questions so that only the concepts that have previously been linked to crucial company outcomes remain. If questions haven't previously been measured, at least choose items that have a clear theoretical relationship to an outcome you intend to drive. It is okay to overshoot this target and then reduce the number of items at the next survey iteration, after you have had the opportunity to analyze the correlations between the items you attempted on the last survey and the outcomes you are trying to achieve as a company.

    For example, if your primary objective is to reduce attrition of high-performing employees, then you can reduce items on the employee survey by removing the items that do not correlate with attrition of high-performing employees (or those that correlate the least). When you remove items, you can replace them with new items you want to test, as opposed to continuing to use items that do not correlate with the objectives you are trying to achieve, dubiously justified by the need to trend data. There are plenty of items that do correlate with an objective you care about that you can trend, without keeping those items that do not.

   *  Assess vocabulary. Use the words and phrases that people would use in casual speech. Limit vocabulary so that the least sophisticated survey-taker would be familiar with what she’s reading. Eliminate ambiguous words.

<Tip> You may have noticed that this vocabulary advice is hard for me to follow. I read so many books that I don't even realize it when I'm using words that other people don't understand.

   *  Test for problems. If you can, try to include some items that can be independently verified for purposes of validation.

   *  Watch the clock. Test to make sure the survey can be completed in 15 minutes or less.

<Technical Stuff> Survey design matters. You would be surprised how many statements a person can agree or disagree with in a short time if you keep all questions framed positively and use the same response scale. A majority of people can respond agree or disagree to 30 statements in under one minute. If you do the math, most people could complete a 90-item survey designed in this manner in under 5 minutes. Feel free to test me on this. However, people cannot respond to other question design forms as quickly or with as little effort. The experience of the survey and the time it takes to complete are only partly influenced by the number of items; question design has much more influence on the experience people have with the survey than the number of questions. You should be cautious but data-driven in the decisions you make about survey design. There are a lot of bad ideas floating around about the need for short surveys or fewer surveys that simply are not true when scrutinized objectively.

   *  Plan to report survey results using the smallest unit of analysis possible within the parameters of the confidentiality sample-size restrictions. Of course, you can and should also report at higher-level aggregations and by chosen segments (diversity, location, manager, and so on). Specificity and breadth of reporting, along with creativity, can help you achieve the level of impact from the survey you are hoping for.

<Tip> I often hear people provide the advice that you should reduce the number of questions on your employee surveys to increase your survey response rate. My research and personal experience find this advice to be false: in a controlled study, response rate was virtually unaffected by the number of questions on the survey or the number of previous surveys completed. The real barrier is getting people to start: once people begin the survey, they are likely to complete it, regardless of the number of questions (within reasonable limits). Research demonstrates that a list of other factors, notably executive attitude and communication factors, are more important to response rate. I provide specifics on how to improve the response rate to your surveys in a section below. The advice that you should continually reduce the number of items you use on surveys and the number of surveys you conduct for purposes of achieving more engagement in your survey is incorrect and may be detrimental to your analytics program in the long run. In general, for purposes of efficiency and respect for craft, yes, you should apply whatever techniques are available to make the most of everything you do. That said, it is better to explicitly promote the idea that the culture you want to create at your company is one of candid, abundant, and continuous feedback in support of data-informed decision-making. If some people don't want to provide feedback, this may indicate that they are in the wrong environment.

Managing the Survey Process

Large companies with abundant resources (time, people, and money, in other words) might have the option to build their own survey and analytics technology (and support team). Most, however, buy a subscription to one of the many services available. There is a plethora of service providers for employee surveys, ranging from high-touch consulting to self-service software. Besides providing the latest technology, survey vendors can also provide industry-validated measures, thought partnership, benchmarks, robust reports for a large number of segments, in-depth data analysis, and other support such as communication templates, training, and advice. The most crucial element to address before getting excited about all the bells and whistles is ensuring that your chosen partner has the appropriate infrastructure, documentation, and internal experts to provide employee confidentiality and keep your data secure.

Getting confidential: Third-party confidentiality

Confidential means that personally identifiable information is attached to individual survey answers but is agreed to be kept private and expressed outside of the survey database only at group levels. It's a common practice to outsource the collection and analysis of survey data to external vendors to facilitate the administration of the survey while providing confidentiality. This convention allows the third party to link other employee data that helps with turning results into insights while protecting individual identity and employee trust.

Collect responses individually, but report them only by segment, with a minimum segment size, to maintain individual confidentiality. The best practice is to enforce that results are only expressed for segments of five or more people. In most cases, the criterion is that there must be five or more survey responses to produce a report for a segment.

Other companies, looking to stretch reporting to a broader audience of managers, apply the rule of five to the size of the actual segment population, not to the number of surveys returned, while applying a second criterion for response rate. For example, when I ran the survey program at Google, we applied a dual criterion: a) the segment must contain five or more employees, AND b) the segment must have three or more survey responses. We also used these same criteria when I was working with Jawbone. In both cases, we established these criteria because we wanted more managers to get reports, and the dual criteria created equivalent confidentiality. Under the standard guideline, the majority of managers could not get a report, since a manager of five would require a 100 percent response rate to get one. Someone pointed out that, practically speaking, if there are five people in a segment, it doesn't matter whether we got five responses or three responses to the survey; we still could not determine from the aggregate who said what. We put the complete criteria in the FAQ and other survey communications, and employees and managers were comfortable enough to go with it.
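
Here's a minimal sketch of how those reporting rules might be encoded. The function simply restates the standard rule of five and the dual criterion described above; the names and structure are my own.

```python
def can_report(segment_size: int, n_responses: int, dual_criteria: bool = False) -> bool:
    """Decide whether a segment's survey results may be reported.

    Standard rule: at least five survey responses.
    Dual criteria: at least five employees in the segment AND
                   at least three survey responses.
    """
    if dual_criteria:
        return segment_size >= 5 and n_responses >= 3
    return n_responses >= 5

print(can_report(segment_size=5, n_responses=3))                      # False (standard rule)
print(can_report(segment_size=5, n_responses=3, dual_criteria=True))  # True
```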

When it comes to confidentiality, however, you have options:

* Confidentiality with explicit exceptions: Companies with a dedicated internal people analytics team are more and more likely to collect confidential data as described above, while only providing access to the personal details of respondents to specific trusted members of the team for data management and analysis. This option would need to be clearly defined in the employee survey FAQ and would not apply if the people responsible for people analytics “wear many hats” — meaning that they serve in other official HR or management capacities; in this situation, you'd be asking for trouble if you gave such individuals access to individual survey results.

* Total anonymity: Anonymity means what you think it means — the personal identity of a respondent is kept hidden. The intent of anonymity is to make it impossible to trace something someone said back to a specific individual, so that people feel safer speaking their minds. As admirable as the intent may be, this particular practice dramatically limits your ability to turn the feedback you receive into deeper insight by connecting it to other employee data. In my opinion, anonymity isn't the right choice for people analytics, mainly because it's possible to use a third party to establish safe data practices and trust without the downsides of anonymity. Still, in certain fringe situations, anonymity may be called for.

<Technical Stuff> Anonymity can produce more problems than it solves. A typical example is that a single employee may hack the anonymous process to provide repeated responses to game the overall results or try to get a manager fired. In other situations, employees may just mistakenly assign themselves to groups they don’t belong to. (Trust me — this happens.) Then when the manager gets the survey results, she discovers that there are a total of 15 responses for a team of 12 people. Such mistakes undermine the integrity of the survey process, leading many to doubt its efficacy. There’s no way to undo the damage here, so the effort — months of work and everyone’s time to take the survey — is wasted.

<Remember> If you have real concerns about how your employees feel about sharing their thoughts, I strongly recommend taking the advice I give at the beginning of this section: Hire a third-party professional service that can provide services to connect individuals’ responses with data confidentially.

Ensuring a reasonable response rate

The response rate is the percentage of people who have responded to the survey. If you sent the survey to 1,000 people and 700 responded, your overall response rate is 70 percent.

Without getting into the nitty-gritty math of the situation, you don’t require a 95 percent response rate to have a 95 percent certainty that you know what you need to know. The fundamental basis of polling (and, in fact, all of modern science) is that you can mathematically predict the response of a larger body of people with the response of a much smaller sample if you have selected people randomly. Generally, you need many fewer survey responses than you think you do.

<Remember> Keep the most critical assumption in mind: the random part. If there is some pattern to when and why people respond — if who responds is not totally "random," in other words — then all bets are off. Often, it’s hard to recognize patterns in a smaller data set, so, for this reason, you try to get as high a response rate as you can, to cover more ground.
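
For the curious, here is a short sketch of the standard arithmetic behind those claims: the response rate itself, plus the textbook margin of error for an estimated proportion with a finite-population correction (a company is a fixed, known population). It assumes respondents are effectively a random sample, which is exactly the assumption flagged above; the invitation counts are the example from earlier.

```python
import math

def response_rate(invited: int, responded: int) -> float:
    return responded / invited

def margin_of_error(n_responses: int, population: int, p: float = 0.5, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion, with finite-population correction."""
    se = math.sqrt(p * (1 - p) / n_responses)
    fpc = math.sqrt((population - n_responses) / (population - 1))
    return z * se * fpc

print(f"Response rate: {response_rate(1000, 700):.0%}")
print(f"Margin of error with 700 of 1,000 responding: +/- {margin_of_error(700, 1000):.1%}")
print(f"Margin of error with only 300 responding:     +/- {margin_of_error(300, 1000):.1%}")
```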

Determining a reasonable response rate

A U.S. senator once made this comment: “This is regarded as a relatively high response rate for a survey of this type” regarding a poll of constituents that achieved a 4 percent return rate. Though a customer satisfaction survey with a response rate higher than 15 percent might be considered a stunning success over in Marketing, those same response rates could get you fired fast in the People Analytics department.

Like most important things, the answer to “What is a good response rate?” is “It depends.” If I had to come up with some answers here, I'd say that a 60 percent response rate is adequate for analysis and reporting, while a response rate above 70 percent is reasonable, and a response rate above 80 percent is excellent. Keep in mind that these are rough guides and that demonstrating the absence of systematic response bias is more important than a high response rate.

<Remember> I'm convinced that there is way too much fuss and wild guessing about what drives employee response rates for surveys. If I have learned any generalizable truth working with employee surveys at many different companies, it is this: Employees are dying to provide feedback. You don't have to plead for feedback; just make an effort and get out of their way. If you want an exceptionally high response rate, make taking the survey as comfortable as you can, given the circumstances.

Examining factors that contribute most to high response rates

If you want to achieve Olympic-medal levels of survey response rates, make sure you have the following down pat:

* High-quality communication: Everything about the survey needs to communicate a sense of purpose, professionalism, and integrity. Throw in some charm and winks to unique aspects of company culture, and you have the recipe for a huge success.

* Third-party confidentiality: Use a professional third-party survey partner to provide confidence in individual confidentiality. State the rules clearly. This isn't a survey among parents for your child's birthday party. This is a whole lot of working people sticking their necks out for their company — at minimum, they should be confident that their boss or someone in HR isn't looking at it and saying, "Well, that one can go if they feel that way." The employee must have confidence that his response will be reviewed with discretion and that he won't be singled out.

* A sincere interest in the results: When a range of important people known to the employee stand up and say, "I want to hear your feedback; it's important to me," response rates improve dramatically. Communications from SurveyRobot.com (fictitious example) and HR are standard triggers for eye rolls when what you want is a high response rate. Depending on your communication design, you may have a message from the head of HR or automated survey reminders in your bag of tricks; however, these should be preceded and followed by messages from other people: key founders, the CEO, the heads of divisions, managers, and even analysts! People want to know that a real person is responsible for this survey and that they (and the people behind them) care.

<TechnicalStuff> If you use a survey provider, they can work with your IT department to send out invites and reminders from specific people at your company (with their permission). It is also helpful if leaders at your company are willing to send out personal messages leading up to the survey, just before the survey closes, and after the survey. You should have a communication plan in place so that each note is unique, personal, and covers the important points that need to be covered.

* Repeated reminders: One survey invitation is not enough. You might think people are ignoring your emails on purpose when, in reality, they're just busy. They think that they'll return to the message, but the onslaught of other messages pushes it down their inboxes until your message is entirely out of their minds. Little reminders are an important way to regain attention.

<Tip> Aside from email reminders, it’s useful to put up posters, set out table cards in the cafeteria, use the lobby and elevator video screens, put “stickies” on desks, schedule time on work calendars, and so on. Be creative!

* High-quality survey design: There's nothing worse than a poorly designed survey administered in an unprofessional manner and run by people who don't know anything about what they're doing. Opportunity squandered. It is awful, it shows, and people are tired of it. Don't wing it. Get help.

* Make it competitive and fun: One of my tried-and-true observations is that the mere public reporting of response rates by executives drives response rates up among the teams of all executives. Aside from creating transparency and an indirect spirit of competition, upping the fun factor shows that corporate communication doesn't need to be boring. I admire executives who inspire their people by competing with other executives. And, by all means, come up with prizes: parties, dunk tanks, swag, bragging rights, and trophies may be just the ticket.

<Warning> It's good to encourage competition over participation, but never sanction a competition between executives over the survey results themselves. By this, I mean that employees should not be cajoled, harassed, or threatened into giving a particular response to a particular item. No: "Hey guys, please rate me a five." First of all, it's tacky, and second, it defeats the entire point. I know some companies that go so far as to fire managers for trying to influence survey results in this manner. Hopefully, you never have to do that, but, in any case, it can happen, so make it clear to everyone that the survey is not a competition in popularity. Just get out the vote! Let the crowd do the rest.

* Establish a track record of running useful surveys, doing the right thing, and taking action: The first employee survey at Google achieved a 55 percent response rate, the second 65 percent, the third 75 percent, and the rate kept climbing from there. It takes a few years to earn employee trust in the survey effort, but trust can be won. Just be patient — and remember to do the right thing.

<Remember> Yes, you do want to use communication to the best of your ability so that you can achieve a sufficient response rate for analysis. Still, keep a cool head about your real objective, which is to learn something useful for the good of the enterprise, not simply to rack up responses.
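
If you do report response rates publicly, the mechanics are simple. Here's a minimal sketch in Python, assuming nothing more than an invitation log with a responded flag; the executive names and numbers are invented for illustration and aren't tied to any particular survey tool.

```python
# Minimal sketch (invented data): a response-rate leaderboard by executive
# area, suitable for public reporting while the survey is in the field.
import pandas as pd

invites = pd.DataFrame({
    "executive": ["VP Sales", "VP Sales", "VP Sales", "VP Eng", "VP Eng", "CFO", "CFO"],
    "responded": [True, False, True, True, True, False, True],
})

leaderboard = (
    invites.groupby("executive")["responded"]
    .mean()                        # fraction of invitees who have responded
    .mul(100).round(1)             # express as a percentage
    .sort_values(ascending=False)  # highest participation first
    .rename("response_rate_pct")
)
print(leaderboard)
```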

Planning for effective survey communications

Loads of bad surveys are out there. Toss a stone, and you'll hit a bad survey. And, because folks have had it up to here with bad surveys, getting participation requires catching people’s attention and convincing them that this particular survey is worth their time and effort. A comprehensive, thoughtful, and engaging communications plan can help. All the prep work you did to set objectives comes in handy now: who, what, when, and why. Now you only need to interpret that from the perspective of the survey takers.

Never met the acronym WIIFM? Well, say hello to good old "What's In It For Me?" That's the critical question you have to answer for everyone but yourself. Why should people fill out your survey? If you did an excellent job of identifying action owners and engaging them in the creation process, it's easier to develop an enticing value proposition and to enlist other people who are both recognized and respected to deliver "the ask." No, HR emails and notifications from the survey tool are not enough to do it. You need to enlist the big guns. Remember to make it personal and enlist the support of the village.

The stakes involved should be made clear at the beginning, which means right there in the survey invitation. Here are the questions you need to address:

* What is this survey about?

* Who wants to know this information?

* Why do they want this?

* Why was I picked? (if this is a sample)

* How important is this?

* Will this be difficult?

* How long will this take?

* Is this anonymous, confidential, or what? Will I be identified?

* Is it safe for me to share my opinion? How?

* How will this be used?

* What is in it for me?

* When is it due?

Here's what a survey invite might look like:

Hi, Mike.

I’d like to invite you to participate in the XYZ Survey to help us understand more about your experience as an employee at <Company>. We do this survey each year to get a sense of how happy you are, where we're improving, and where we can get better. <Company> is an extraordinary place, and we want to make sure we preserve that uniqueness as we grow. Your feedback helps guide our decisions as we think about where we stand and where to focus our efforts so that we can advance together as a company.

To participate, please follow this (Hyperlink: Link).

The survey should take about 5 to 10 minutes to complete.

Please be assured that your responses are entirely confidential. We have commissioned an independent employee research agency, (XYZ SURVEY PARTNER), to conduct this survey on our behalf. Their work is being conducted under our (Hyperlink: People Analytics Code of Conduct).

If you have any questions about the survey, please email: (Email: survey @ xyz.com) or visit the (Hyperlink: FAQ page).

I very much value your feedback, and I hope you take the time to participate.

Sincerely, XYZ

Comparing Survey Data

To compare your survey data, you need something to compare it to. One logical point of comparison is to look at how companies in the same field are faring. That's where benchmarks come in — a set of averages of the responses to the same or similar questions collected from other enterprises. Survey vendors and other consulting firms are quite happy to provide (sell, in other words) such benchmarks, which you can then use to understand how your company compares to other companies. Is an Engagement Index score of 70 out of 100 good or bad? The answer may not be evident; however, if you can determine that 70 is a statistically significant difference from, and 33 percent better than, the score achieved by similar companies, then you can say with some degree of certainty that an Engagement Index score of 70 out of 100 is pretty darn good.
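
To back up a claim like "statistically significant and 33 percent better than similar companies," one quick check is a one-sample test against the benchmark average. Here's a minimal sketch in Python; the respondent scores and the benchmark value of 52.5 are invented for illustration, and a real analysis would use your actual respondent-level data.

```python
# Minimal sketch (invented numbers): is our Engagement Index significantly
# different from an external benchmark average, and by what margin?
from scipy import stats

our_scores = [72, 68, 75, 70, 71, 66, 74, 69, 73, 67, 71, 64]  # per-respondent index, 0-100
benchmark_mean = 52.5                                           # hypothetical industry average

t_stat, p_value = stats.ttest_1samp(our_scores, popmean=benchmark_mean)

our_mean = sum(our_scores) / len(our_scores)
pct_better = (our_mean - benchmark_mean) / benchmark_mean * 100

print(f"Our mean: {our_mean:.1f} vs. benchmark {benchmark_mean}")
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, {pct_better:.0f}% above benchmark")
```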

Understanding where your employees stand relative to the employees of your competitors or relative to high-performing companies can produce vital feedback, especially regarding crucial concepts like compensation and benefits. Most people like to have more of the good stuff and less of the bad stuff; what is more useful to know is the degree to which your population varies from others in your industry, either due to your best efforts or despite them.

External benchmarks can be quite useful; however, you should consider these limitations:

* Imperfect comparison error: Gone are the days when giant consulting firms could retain their customers indefinitely because they had the best brand-name clients. Nowadays, it's hard to find one firm that holds data from all the top companies in a particular industry. And, if a firm claims that it does, ask probing questions and you'll find the holes. For example, some use 5-year rolling averages, which allow them to market old data as new. An aspiration to grow a unique culture, coupled with investment and advances in technology and analytics, has increased companies' capability to generate and gather intelligence on their own. Therefore, top-brand companies that used to keep decade-long contracts with the Deloittes and PwCs of the world are now leaving those consulting firms and mining (and keeping) their own data.

* Benchmark target error: There is no single target for all. In today's world, companies have not only moving targets but also personalized targets. In other words, what makes Google great will not necessarily work for Facebook. It's okay, and even smart, to check external reference points — and the more, the better. However, do not make those external benchmarks your company's goal. If you do, you may reach the target but, in so doing, miss focusing on the items that are important for you.

A better way to win is to improve the key measures that correlate with the outcomes you're trying to achieve as a company, regardless of whether you're already ahead of the pack. (A small sketch of this kind of correlation check follows this list.) Let's say that scores for innovation at your company match external benchmarks, but work-life balance is significantly below the norm. Where would you recommend taking action? I hope you answer, "It depends." If a culture of innovation is a necessary differentiator for your company and the key motivator for the type of people you want to attract, but work-life balance isn't even in their vocabulary, then focusing on improving satisfaction with work-life balance could be a bad investment for your company.

* Vanity measure error: Even if your company is ahead of all benchmarks, you can't rest on your laurels. Outperforming your peers can make you complacent, and that is a dangerous thing in today's fast-changing world. Always be looking to improve, and use external benchmarks for what they are: comparative information at a specific point in time.
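
Here is the correlation check mentioned above: a minimal sketch, with invented numbers, that scans segment-level data for the survey measures that move with an outcome you care about, such as voluntary turnover. With real data you would use many more segments and validate the relationship before acting on it.

```python
# Minimal sketch (invented numbers): which survey measures correlate with an
# outcome (here, voluntary turnover) across segments?
import pandas as pd

segments = pd.DataFrame({
    "innovation_score":   [78, 74, 81, 69, 72, 77, 70, 80],
    "work_life_score":    [61, 58, 66, 55, 60, 64, 57, 65],
    "voluntary_turnover": [0.08, 0.12, 0.06, 0.15, 0.11, 0.07, 0.14, 0.05],
})

# Correlate each survey measure with the outcome; a strong negative correlation
# flags a measure worth investigating as a lever.
correlations = segments.corr()["voluntary_turnover"].drop("voluntary_turnover")
print(correlations.sort_values())
```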

Here's an arrogance-breaker. Through all my years of working with employee surveys at different companies, I have noticed a consistent pattern: New employees nearly always respond more enthusiastically to survey questions than employees do a few years later. This may seem to be a natural phenomenon — it fits the pattern of all intimate human relationships — so you may not think much of it. However, if you're comparing your company to other companies, or comparing units to each other within your company, those with newer employees have a distinct advantage hidden in their average. They seem to have it all together, and everything they do is golden; however, as growth slows and tenure increases, the average survey score stubbornly declines. The natural temptation is then to jump to the opposite conclusion: Something is entirely wrong, and these old managers must be driving the company into the ground. These attribution problems are dangerous. The manager may not be as good or as bad as you think she is. Comparisons without controls should be suspect. The question you should always ask is this: Are we comparing apples to apples? The fact is that, when it comes to employee surveys, you cannot fairly compare a group that has 50 percent new employees with a group that has 10 percent new employees. If you do, you might as well crown the manager of the group with the new employees as "best manager" without even looking at the data.
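
Because tenure mix hides in the averages, one quick way to compare apples to apples is to standardize every group to the same tenure mix before ranking them. Here's a minimal sketch with invented numbers, using simple direct standardization (a regression with tenure as a control would work too): the raw averages favor the group full of new hires, while the tenure-adjusted averages flip the ranking.

```python
# Minimal sketch (invented numbers): standardize two groups' survey scores to
# the same tenure mix before comparing them, so newness doesn't masquerade as
# great management.
import pandas as pd

responses = pd.DataFrame({
    "group":  ["A"] * 6 + ["B"] * 6,
    "tenure": ["<1yr", "<1yr", "<1yr", "<1yr", "1yr+", "1yr+",   # A: mostly new hires
               "<1yr", "1yr+", "1yr+", "1yr+", "1yr+", "1yr+"],  # B: mostly tenured
    "score":  [85, 82, 88, 84, 70, 68,
               86, 71, 69, 73, 70, 72],
})

# Average score per group within each tenure band
by_band = responses.groupby(["group", "tenure"])["score"].mean().unstack()

# Reweight every group to the SAME tenure mix (here, 30% new / 70% tenured)
company_mix = pd.Series({"<1yr": 0.3, "1yr+": 0.7})
adjusted = (by_band * company_mix).sum(axis=1)

print("Raw averages:\n", responses.groupby("group")["score"].mean())
print("Tenure-adjusted averages:\n", adjusted)
```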

As established in the introduction to benchmarking, you need a relative point of reference to interpret any data point, surveys especially. Still, that point of reference need not be an external benchmark. Other reference-point options include the following (a small worked sketch follows the list):

* Current segment score vs. previous segment score (Trend): Did the segment measure improve, stay the same, or get worse?

* Segment vs. company average (Average): How does the segment measure compare to the company average? Is the segment above, at, or below the company average?

* Segment vs. all-segments range (Range): How does the segment measure compare to the range of scores for segments of like size? Is the segment near the top of the range, in the middle, or at the bottom?

* Segment vs. target (Target): How does the segment measure compare to a segment target determined either by executive prerogative or by a number that mathematically represents "good," derived from multivariate analysis of previous survey responses?
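
Here is the small worked sketch promised above: a minimal illustration, with invented segment names, invented scores, and an assumed target of 72, of how the four reference points can be computed side by side.

```python
# Minimal sketch (invented numbers): trend, company-average, range, and target
# comparisons for each segment's score.
import pandas as pd

current = pd.Series({"Sales": 71, "Engineering": 76, "Support": 64, "Finance": 69})
previous = pd.Series({"Sales": 68, "Engineering": 77, "Support": 66, "Finance": 69})
target = 72  # set by executive prerogative or derived from prior analysis

report = pd.DataFrame({
    "score": current,
    "trend_vs_previous": current - previous,                     # Trend
    "vs_company_average": current - current.mean(),              # Average
    "rank_in_range": current.rank(ascending=False).astype(int),  # Range (1 = highest)
    "vs_target": current - target,                               # Target
})
print(report)
```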

Survey design is like billiards. Professionals call the pocket and are accountable to the pocket they call. By calling the pocket, I mean that you need to know what good looks like, with targets determined by any or all of the perspectives above. If you don't know how to set a target, or don't know how to move the measure to the target, you need to revisit the mechanics I summarize in this article. No survey program is perfect. Work hard on the areas where you, or your survey program as a whole, have weaknesses.

This is an excerpt from the book People Analytics for Dummies, published by Wiley, written by me.

Don't judge a book by its cover. More on People Analytics For Dummies here.

I have moved the growing list of pre-publication writing samples here: Index of People Analytics for Dummies sample chapters on PeopleAnalyst.com

You will find many differences between these samples and the physical copy of the book; notably, my posts lack the excellent editing, finish, and binding applied by the print publisher. If you find these samples interesting and think the book sounds useful, please buy a copy, or two, or twenty-four.


