
Q&A: After misses in 2016 and 2020, does polling need to be fixed again? What our survey experts say

The 2016 and 2020 election cycles weren’t the best of times for public opinion polls. In 2016, many preelection surveys underestimated support for Donald Trump in key states. And in 2020, most polls overstated Joe Biden’s lead over Trump in the national vote, as well as in several critical states. In response, many polling organizations, along with the American Association for Public Opinion Research (AAPOR), the survey research field’s major professional group, have taken close looks at how election surveys are designed, administered and analyzed.

Pew Research Center is no exception. Today, the Center releases the second of two reports on what the 2020 election means for different aspects of its survey methodology. The first, released in March, examined how the sorts of errors that led most polls to understate Trump’s support might or might not affect non-election polls – especially the issue-focused surveys that are the Center’s bread and butter. Today’s report looks at what we’ve learned about the American Trends Panel (ATP) – the Center’s online survey panel of more than 10,000 randomly selected U.S. adults – including how well it represents the U.S. population as a whole and how it could be improved.


We spoke with the lead authors of the two reports, Director of Survey Research Courtney Kennedy and Senior Survey Advisor Scott Keeter, about their findings. Their responses have been edited for clarity and concision.

Kennedy: Both reports explore the implications of survey samples underrepresenting Republicans. But beyond that, they posed two very different questions. Scott’s piece essentially asked: “Can flaws like those seen in some recent preelection polls lead to wrong conclusions about public opinion on issues?” The answer to that was “no.” This new report, by contrast, focuses on the role and responsibility of pollsters, both to diagnose whether underrepresentation is occurring and to identify ways to address it.

Keeter: Even if a particular problem – in this case, underrepresenting Republicans – doesn’t seriously threaten the validity of our measures of public opinion, we’re obligated to do what we can to fix the problem. Often we can correct imbalances in the composition of a sample through the statistical process of weighting, but it’s much better to solve the problem at its source – especially if we have reason to believe the problem might be getting worse over time. In any case, pollsters should always strive to have their surveys accurately represent Republican, Democratic and other viewpoints.


The Center doesn’t do “horserace”-type preelection polling and hasn’t for several years. Why should we be concerned about errors in the 2016 and 2020 election polls?

Kennedy: It’s true that we don’t predict election outcomes, but we do ask people who they would vote for, and we ask about many topics, like immigration and climate change, that are correlated with presidential vote. So if we see an industry-wide problem in measuring vote preference, this signals the possibility of challenges in measuring related things that we do study. For instance, if recent election-polling problems stem from flawed likely-voter models, then non-election polls may be fine. But if the problem is fewer Republicans (or certain types of Republicans) participating in surveys, that could have implications for us and the field more broadly.

Before going any deeper, let’s define our terms. How do issue polling and election polling differ, both conceptually and practically?

Keeter: They’re certainly related, in that both rely on the same research methods to select samples and interview respondents. And issues play a role in elections, of course, so we often measure opinions about issues in polls that also measure candidate preferences and voting intentions. But they differ in two important ways.

First, election polls typically try to estimate which candidate the respondents support and whether they’ll actually turn out to vote. Issue polls usually don’t need to identify who will vote.

Second, election polls are judged by their accuracy in depicting the margin between the candidates. By contrast, issue polls typically try to characterize the shape and direction of public opinion, which generally can’t be summed up in a single number or margin the way an election poll result can. Often, we want not just an expression of opinion – for example, whether a person believes the earth is warming because of human activity – but also how important they believe the issue is, what factual knowledge they have about it, and how they think the problem might be mitigated.

So given that, how can we assess the accuracy of issue polls, when there’s no ultimate outcome to measure them against like there is for election polls? Put another way, how can the average person tell whether an issue poll’s findings are accurate or not?

Kennedy: We know from various benchmarking studies, where polls are evaluated against known figures like the U.S. smoking rate or the health care coverage rate, that rigorous polls still provide useful and accurate data. We’ve conducted several studies of that nature over the years. Our polling estimates tend to come within a few percentage points of most benchmarks we can measure. And if that’s the case, we can have confidence that a poll’s other findings are valid too.
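
To make the idea of benchmarking concrete, here is a minimal sketch of the arithmetic involved, written in Python. The item names, survey estimates and benchmark values below are invented for illustration; they are not figures from any Pew Research Center study.

```python
# Illustrative benchmarking check: compare a poll's weighted estimates with
# external "ground truth" figures and look at the size of the gaps.
# All numbers below are placeholders, not real data.

poll_estimates = {                 # weighted shares from a hypothetical poll (%)
    "smokes_cigarettes": 14.1,
    "has_health_insurance": 90.3,
    "voted_in_last_election": 63.8,
}

benchmarks = {                     # corresponding figures from large reference surveys (%)
    "smokes_cigarettes": 12.5,
    "has_health_insurance": 91.4,
    "voted_in_last_election": 61.3,
}

errors = {k: poll_estimates[k] - benchmarks[k] for k in benchmarks}
mean_abs_error = sum(abs(e) for e in errors.values()) / len(errors)

for item, err in errors.items():
    print(f"{item:>25}: {err:+.1f} points")
print(f"mean absolute error: {mean_abs_error:.1f} points")
```

In practice, the benchmarks come from large, high-response-rate government surveys, and the comparison is made against the poll’s weighted estimates.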

The analysis in the March report used simulated survey results based on different assumptions about partisan divides among voters and nonvoters. What can we learn from such a simulation?

Keeter: With the simulation, we were trying to find out how different our measures of opinion on issues would be if the survey sample had more Republicans and Trump voters. So, statistically, we added more Republicans and Trump voters to our samples and then looked at how our measures changed. What we found was that, in most cases, opinions on issues weren’t much affected.

This was true for two reasons. First, people don’t fall perfectly into line behind a candidate or party when expressing opinions on issues. What that means is that adding more supporters of a candidate, or more members of that candidate’s party, won’t move the poll’s issue measures by the same amount.

Second, even though we may think the election poll errors in 2020 were large, correcting them actually requires adding relatively few Trump voters or Republicans. And that small adjustment makes even less difference in the issue questions.
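
A rough back-of-the-envelope version of this logic can be written in a few lines of Python. The partisan shares and issue-opinion rates below are made-up assumptions chosen to keep the arithmetic transparent; they are not the figures used in the March report’s simulation.

```python
# Toy illustration of why adding more Republicans to a sample moves issue
# estimates only modestly. All shares are assumptions for arithmetic clarity.

def issue_estimate(rep_share, rep_support, other_support):
    """Overall share agreeing with an issue position, given the partisan mix."""
    return rep_share * rep_support + (1 - rep_share) * other_support

# Suppose Republicans are 46% of the observed sample but "should" be 48%,
# and that 80% of Republicans vs. 30% of everyone else hold a given view.
observed  = issue_estimate(0.46, rep_support=0.80, other_support=0.30)
corrected = issue_estimate(0.48, rep_support=0.80, other_support=0.30)

print(f"observed estimate:  {observed:.1%}")   # 53.0%
print(f"corrected estimate: {corrected:.1%}")  # 54.0%
print(f"shift: {100 * (corrected - observed):.1f} points")
```

Even for a fairly polarized issue – a 50-point gap between the two groups in this toy example – correcting a 2-point shortfall in the Republican share of the sample moves the overall estimate by only about 1 point, and the shift is smaller still for issues where partisans differ less.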

Kennedy: Basically, we ran several different tests and “triangulated” the results. Each individual test showed only a small amount of evidence for underrepresentation, but taken together, we found the evidence quite compelling.

Let’s start at the beginning of the survey process – recruiting the sample. Since 2018, the ATP has used address-based recruitment. Invitations are sent to a random, address-based sample of households selected from the U.S. Postal Service’s database, which means nearly every U.S. adult has a chance of being selected.

What we’ve found is that, in 2020, people living in the country’s most – and least – pro-Trump areas were somewhat less likely than others to join our survey panel. We also noticed a trend in our recruitments: Adults joining our panel in recent years are less Republican than those who joined in earlier years. There are several possible explanations for that, but as we say in the report, the most plausible explanation is increasing resistance among Trump supporters to taking surveys.

We also looked at who has stayed active in our survey panel since 2016 and who has dropped out. We found that a higher share of 2016 Trump voters stopped taking our surveys during the subsequent four years, in comparison with other voters. It’s worth noting, though, that the demographic makeup of 2016 Trump voters basically explains this difference: When we account for voters’ age, race and education level, presidential vote preference doesn’t help predict whether they later decided to leave the panel.
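
One way to run the kind of check described above is a logistic regression of panel attrition on vote preference plus demographic controls. The sketch below, using the statsmodels library, assumes a hypothetical respondent-level file and column names; it is not the Center’s actual code.

```python
# Hedged sketch: does 2016 vote preference still predict dropping out of the
# panel once age, race and education are held constant? File and column names
# are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

panel = pd.read_csv("panelists_2016_2020.csv")  # one row per 2016 panelist (hypothetical file)

# dropped_out: 1 if the panelist stopped taking surveys by 2020, else 0
# trump_2016:  1 if they reported voting for Trump in 2016, else 0
model = smf.logit(
    "dropped_out ~ trump_2016 + C(age_group) + C(race_eth) + C(education)",
    data=panel,
).fit()

print(model.summary())
# If the coefficient on trump_2016 is small and not statistically significant
# once the demographic controls are in the model, vote preference adds little
# to the prediction of attrition - the pattern described in the report.
```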

We don’t have any hard data that speaks to why this is happening. That said, it’s clear that Republicans have relatively low levels of trust in various institutions. The polling field is intimately connected with some of those institutions, particularly the news media, which sponsors a good deal of polling. It’s also the case that President Trump had some strong, often critical views of polls, and sometimes messages like that resonate with supporters.

If Republicans are underrepresented, why can’t you correct for that by simply weighting (or reweighting) the raw data?

Kennedy: The short answer is that we do correct for underrepresentation with weighting. ATP surveys have always been adjusted so that Republicans and Democrats are represented in proportion to their share of the population.


The longer answer is that while weighting can cover a lot of imperfections, it’s not a perfect cure-all. For one thing, there isn’t timely benchmark data for what share of Americans are Republicans or Democrats. The targets we use for weighting are certainly close, but they may not be exactly right. Also, when a pollster relies on weighting to fix something, that tends to make the poll estimates less precise, meaning a wider margin of error. A third limitation of weighting is that it relies on assumptions – the most important one being that the opinions of people who don’t take the survey are just like those of people who do, within the groupings that the poll uses in weighting (things like age, education and gender).

We should be clear: Weighting is a best practice in polling. We don’t put any stock in unweighted public opinion polls. But relying on weighting alone to fix any and all skews that a sample might have can be risky. If a pollster’s weighting doesn’t capture all the relevant ways that the sample differs from the general public, that’s when estimates can be off.
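
To illustrate both the fix and its cost, here is a minimal weighting sketch in Python. The party shares, targets and sample size are invented for illustration, and real ATP weighting adjusts for many more variables than party alone.

```python
# Minimal sketch of cell weighting on party and the precision it costs.
# All shares and targets are made-up assumptions, not actual ATP figures.
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical raw sample: 5,000 respondents with Republicans underrepresented
# (about 24% in the sample vs. an assumed 30% population target).
party = rng.choice(["Rep", "Dem", "Ind"], size=5000, p=[0.24, 0.48, 0.28])
targets = {"Rep": 0.30, "Dem": 0.40, "Ind": 0.30}  # assumed population shares

# Cell weighting: each respondent gets weight = target share / sample share
# for their party group, which forces the weighted partisan mix to the targets.
sample_shares = {g: (party == g).mean() for g in targets}
weights = np.array([targets[g] / sample_shares[g] for g in party])

# The price of the correction: unequal weights inflate the variance.
# Kish's design effect and the effective sample size quantify the loss.
deff = len(weights) * np.sum(weights**2) / np.sum(weights) ** 2
print(f"design effect: {deff:.3f}")
print(f"effective sample size: {len(weights) / deff:.0f} of {len(weights)}")
```

The weighting here matches the partisan mix to the targets by construction, but the correction is only as good as the assumption flagged above: that, within each weighting group, the people who respond hold roughly the same views as the people who don’t.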

Kennedy: We’ve identified five action steps in total. Most are direct changes to our survey panel, and one is an experiment that may lead to direct changes. The direct changes are retiring several thousand demographically overrepresented panelists; weighting to new targets for the partisan balance of Americans; developing new recruitment materials; and empaneling adults who initially prefer taking surveys by mail rather than online. The experiment involves testing an offline response mode – specifically, an option for people to call into a toll-free number and take a recorded survey (what we in the field know as “inbound interactive voice response”).

These steps are designed to increase the representation in our surveys of people who are rather hesitant to take surveys online. Our goal is to make joining and participating in our survey panel just as appealing to rural conservatives as it is to urban progressives – or as close to that ideal as possible.


Do these changes mean that previous survey results were inaccurate or that they should no longer be relied on?

Kennedy: No. Polling practices have always evolved over time in response to changes in society and technology, and they’ll continue to evolve in the future. But that doesn’t invalidate polling data from previous years. And as Scott’s piece showed, the magnitude of errors that we’re dealing with here is small – generally on the order of 1 percentage point or so for major findings.

The American Association for Public Opinion Research is working on a broader examination of the performance of preelection polls in 2020. What more will that report tell us, and could its findings change anything?

Keeter: The AAPOR Task Force, on which I am serving, will provide a detailed description and analysis of the performance of the polls, so that any future discussion of the issue can have a solid base of evidence. But like the 2016 Task Force, this one is also attempting to understand why polls understated Trump’s support. Was the pandemic a factor? Did pollsters have trouble correctly estimating who would vote? Or was it simply the case that polls had an easier time locating and interviewing Biden supporters and Democrats? By working through the various possibilities systematically, the task force is taking something of a Sherlock Holmes approach. As the great detective said, “When you have eliminated the impossible, whatever remains, however improbable, must be the truth.”
