Frequently Asked Questions

Why am I never asked to take a poll?

You have roughly the same chance of being polled as anyone else living in the United States. That chance, however, is only about 1 in 26,000 for a typical Pew Research Center survey. To get that rough estimate, we divide the current adult population of the U.S. (about 260 million) by the typical number of adults in our panel (around 10,000 people). We draw a random sample of addresses from the U.S. Postal Service’s master residential address file and recruit one randomly selected adult from each of those households to join our survey panel. This process gives every non-institutionalized adult a known chance of being included. The only people excluded are those who do not live at a residential address (e.g., adults who are incarcerated or living in a group facility such as a rehabilitation center) and those living in places where the Postal Service does not deliver mail (e.g., remote areas without a standard postal address or some Native American reservations).
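For readers who want to see the arithmetic, here is a rough back-of-the-envelope sketch in Python. The figures are the rounded estimates quoted above, not exact counts:

    # Rough odds of being invited to the panel, using the rounded figures above.
    adult_population = 260_000_000   # approximate U.S. adult population
    panel_size = 10_000              # approximate number of panelists

    odds = adult_population / panel_size
    print(f"Roughly a 1 in {odds:,.0f} chance of being on the panel")
    # -> Roughly a 1 in 26,000 chance of being on the panel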

Can I volunteer to be polled?

While we appreciate people who want to participate, we can’t base our polls on volunteers. The key to survey research is a random sample, so that every person has a chance of having their views captured. The kinds of people who might volunteer for our polls are likely to be very different from the average American – at the very least, they would probably be more politically interested and engaged, so a sample of volunteers would not truly represent the general population.

Why should I participate in surveys?

Polls are a way for you to express your opinions to the nation’s leaders and the country as a whole. Public officials and other leaders pay attention to the results of polls and often take them into account in their decision-making. If certain kinds of people do not participate in the surveys, then the results won’t represent the full range of opinions in the nation.

What good are polls?

Polls seek to measure public opinion and document the experiences of the public on a range of subjects. The results provide information for academics, researchers and government officials and help to inform the decision-making process for policymakers and others. Much of what the country knows about its media usage, labor and job markets, educational performance, crime victimization and social conditions is based on data collected through polls.

Do pollsters have a code of ethics? If so, what is in the code?

The major professional organizations of survey researchers have very clear codes of ethics for their members. These codes cover the responsibilities of pollsters with respect to the treatment of respondents, their relationships with clients and their responsibilities to the public when reporting on polls. Good examples of a pollster’s code of ethics include:

American Association for Public Opinion Research (AAPOR)

Council of American Survey Research Organizations (CASRO)

You can read Pew Research Center’s mission and code of ethics here.

How are your polls different from market research?

One main difference is the subject matter. Market research explores opinions about products and services and measures your buying patterns, your awareness of those products and services or your willingness to buy something. Our polls typically focus on public policy issues and are mainly aimed at informing the public. We also measure things like how voters are reacting to candidates in political campaigns and what issues are important to them.

How are people selected for your polls?

Most of our U.S. surveys are conducted on the American Trends Panel (ATP), the Center’s national survey panel of about 10,000 randomly selected U.S. adults. ATP participants are recruited offline using random sampling from the U.S. Postal Service’s residential address file. Respondents complete the surveys online using smartphones, tablets or desktop devices or over the phone with an interviewer.

Do people lie to pollsters?

We know that not all survey questions are answered accurately, but it’s impossible to gauge intent and to say that any given inaccurate answer necessarily involves lying. People may simply not remember their behavior accurately.

More people say they voted in a given election than voting records indicate actually cast ballots. In some instances, researchers have actually verified the voting records of people who were interviewed and found that some of them said they voted but did not. Voting is generally considered a socially desirable behavior, like attending church or donating money to charity. Studies suggest these kinds of behaviors are overreported. Similarly, socially undesirable behaviors such as illegal drug use, certain kinds of sexual behavior or driving while intoxicated are underreported.

How can I tell a high-quality poll from a lower-quality one?

Two key aspects to consider are transparency and representation. Pollsters who provide clear, detailed explanations about how the poll was conducted (and by whom) tend to be more accurate than those who do not. For example, reputable pollsters will report the source from which the sample was selected, the mode(s) used for interviewing, question wording, etc. High-quality polls also have procedures to ensure that the poll represents the public, even though response rates are low and some groups are more likely to participate in polls than others. For example, it helps to sample from a database that includes virtually all Americans (e.g., a master list of addresses or phone numbers). It is also critical that the poll uses a statistical adjustment (called “weighting”) to make sure that it aligns with an accurate profile of the public. For example, Pew Research Center polls adjust on variables ranging from age, sex and education to voter registration status and political party affiliation. More general guidelines on high-quality polling are available here.

How can a small sample of 1,000 (or even 10,000) accurately represent the views of 250,000,000+ Americans?

Two main statistical techniques are used to ensure that our surveys are representative of the populations they’re drawn from: random sampling and weighting. Random sampling ensures that each person has a chance of being selected to participate in a survey and that the people selected into a sample are a good mix of various demographics, such as age, race, income and education, just like the general population. However, sample compositions can differ. For example, one sample drawn from a nationally representative list of residential addresses may have a higher percentage of rural dwellers than another sample drawn from the exact same list. To ensure that the samples we draw ultimately resemble the population they are meant to represent, we use weighting techniques in addition to random sampling. These weighting techniques adjust for differences between the demographic profile of the sample and what we know the population profile to be, based on information from institutions such as the U.S. Census Bureau. For more on this topic, check out our Methods 101 video on random sampling.
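To make the idea of weighting concrete, here is a minimal post-stratification sketch in Python. It is not the Center’s actual weighting procedure, and the education shares below are hypothetical numbers chosen purely for illustration:

    # Hypothetical example: weight respondents so the sample's education mix
    # matches a known population benchmark (this is not Pew Research Center's
    # actual weighting procedure; the shares are made up for illustration).
    population_share = {"college": 0.35, "no_college": 0.65}   # e.g., a census benchmark
    sample_share     = {"college": 0.50, "no_college": 0.50}   # sample over-represents graduates

    weights = {group: population_share[group] / sample_share[group]
               for group in population_share}
    print(weights)   # {'college': 0.7, 'no_college': 1.3}

    # Each respondent's answers then count in proportion to their group's weight,
    # so the weighted sample matches the population profile on that variable.

In practice, weighting is done on many variables at once (as noted above, from age, sex and education to voter registration status and party affiliation), but the principle is the same: underrepresented groups count a bit more and overrepresented groups count a bit less.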

Do your surveys include people who are offline?

Yes. For the online ATP panel to be truly nationally representative, people who do not use the internet must be represented on the panel in proportion to their share of the national population. In the past, we did this by providing identified non-internet users with internet-enabled tablets to take their surveys. Now, those who don’t have internet access can take their surveys over the phone with a live interviewer. In the Center’s analyses, these individuals represent the non-internet population.
