Polls offer cold, hard data… that are open to a lot of interpretation. Context, including where you’re getting poll results, shapes how polls should be analyzed, and any single result may not tell the whole story.
You may have a news source that you trust to interpret polls for you, but what if you want to look at the data yourself?
Reading polling results doesn’t have to feel like such a mysterious process. Once you know what to look out for, polls can empower you to make more informed decisions. Here, we provide some guidelines for assessing and getting the most out of polling results.
Our tips fall into three categories: first, some basics about polls in general; second, what to keep in mind when reading a specific survey; and lastly, how to think about polls in the aggregate.
Sample: In polling, a “sample” refers to a subset of a larger population that is selected to represent the whole population’s views or characteristics. Pollsters use samples because it is often impractical or too expensive to survey an entire population. Instead, by surveying a carefully chosen sample, they can make reasonable inferences about the opinions, behaviors, or demographics of the larger population.
Sample Size: Fortunately, statistics helps us determine how close the responses given by a sample are likely to be to those of the broader population the poll is trying to understand. While proper sampling techniques are critical to obtaining a representative sample, sample size is another factor that determines how precise a poll is. A larger sample size generally leads to more accurate and reliable findings, while a smaller sample size may result in less precise or less representative results. Larger sample sizes are particularly important when looking at subgroups and crosstabs, to ensure that the number of respondents in each response category doesn’t become too small to draw reasonable conclusions.
Margin of Error: The margin of error reported by a pollster accounts for the variability around your estimates, whether that’s a candidate’s level of support or the answer to a public opinion question. It represents the range within which the true population percentage is likely to fall. A smaller margin of error means there is a smaller range within which your estimates will fall if you were to sample repeatedly. Pew Research Center does a great job explaining the margin of error and its role when interpreting results: “A margin of error of plus or minus three percentage points at the 95% confidence level means that if we fielded the same survey 100 times, we would expect the result to be within three percentage points of the true population value 95 of those times.”
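Under the normal approximation commonly used for simple random samples, the margin of error works out to roughly z × √(p(1−p)/n). A minimal Python sketch (the poll figures are hypothetical) shows both how the number is computed and why sample size matters: quadrupling the sample only halves the margin of error.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Margin of error for an estimated proportion p from a simple random
    sample of size n, at the 95% confidence level (z = 1.96)."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical poll of 1,000 respondents finding 50% support:
moe = margin_of_error(0.5, 1000)
print(f"n=1000: +/- {moe * 100:.1f} points")  # roughly +/- 3.1 points

# Precision grows with the square root of n, so quadrupling the sample
# size only cuts the margin of error in half:
moe_small = margin_of_error(0.5, 250)
moe_large = margin_of_error(0.5, 4000)
print(f"n=250: +/- {moe_small * 100:.1f}, n=4000: +/- {moe_large * 100:.1f}")
```

This is why headline polls cluster around a ±3-point margin: pushing much below that requires far larger, far more expensive samples.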
Sample weighting: Weighting is a procedure in which pollsters take account of who did and did not respond to the survey, compare the respondents to what they know about the population, and, if some types of people are underrepresented among the responses, give those responses a little more weight so that the sample better represents the full population. Conversely, if some groups are overrepresented, responses from their members can be down-weighted.
However, sample weighting is also often used to get a representative sample of a population whose characteristics are not precisely known, such as likely voters. Here, the skill of a given pollster in generating weights can vary quite a bit, given the challenges of choosing suitable demographic or behavioral weights. For example, pollsters might use reported voting intention, voter file history, and demographic variables like race, education level, and gender to try to build a representative sample of people who are likely to vote in an upcoming election. In fact, in an experiment where four different pollsters were asked to estimate election results from the same poll, the estimates spread across five points due to subtle differences in how the pollsters identified likely voters.1
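The core weighting idea can be sketched in a few lines. This is a deliberately simplified illustration of post-stratification on a single variable, with invented group shares and support figures; real pollsters typically weight on many variables at once (e.g., via raking).

```python
# Hypothetical example: the sample under-represents young respondents.
population_share = {"18-29": 0.20, "30+": 0.80}  # known population makeup
sample_share = {"18-29": 0.10, "30+": 0.90}      # makeup of the respondents

# Weight = population share / sample share: under-represented groups are
# weighted up, over-represented groups down.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Invented candidate support within each group:
support = {"18-29": 0.30, "30+": 0.60}
unweighted = sum(sample_share[g] * support[g] for g in support)
weighted = sum(sample_share[g] * weights[g] * support[g] for g in support)
print(f"unweighted: {unweighted:.0%}, weighted: {weighted:.0%}")
```

With these made-up numbers, the raw tally overstates the candidate’s support (57% vs. 54%) because the sample skews toward the friendlier age group; weighting corrects for exactly that skew.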
Cross-Tabulations: Some polls provide results by demographics (age, gender, location) or other variables in the data. Analyzing “crosstabs” can give you a more detailed understanding of who supports what. In fact, they are often extremely important for fully understanding the results of the poll. For example, where a topline might say that 51% of all registered voters support a particular candidate, a crosstab by age might tell us that only 20% of 18-29 year olds support the candidate while 75% of registered voters age 65 and up support the candidate. In this case, looking at a crosstab by age provides important insights that the topline on its own obscures. If the results of a poll are presented with toplines but no crosstabs, ask yourself what the crosstabs might reveal and question why they are not being provided to the public.
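A tiny worked example makes the topline-versus-crosstab point concrete. The respondent-level data below is entirely hypothetical, constructed so the two age groups disagree sharply:

```python
from collections import defaultdict

# Hypothetical respondent-level data: (age_group, supports_candidate),
# 100 respondents per group with very different views.
responses = (
    [("18-29", True)] * 20 + [("18-29", False)] * 80
    + [("65+", True)] * 75 + [("65+", False)] * 25
)

# Topline: overall share supporting the candidate.
topline = sum(s for _, s in responses) / len(responses)

# Crosstab: support rate within each age group.
by_group = defaultdict(list)
for group, s in responses:
    by_group[group].append(s)
crosstab = {g: sum(v) / len(v) for g, v in by_group.items()}

print(f"topline: {topline:.0%}")  # 48% overall...
print(crosstab)                   # ...masking a 55-point age gap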
Things to consider when reading a specific poll in a memo or the news
- Who conducted the poll? Not all pollsters are created equal. Keep the poll’s creator in mind as you read, because pollsters’ decisions about sampling techniques, weighting, and coding of results can produce big differences in what they find. Is the pollster a public government organization? A private polling firm? A newspaper? Looking at who conducted the poll may give you clues as to who the poll might be inclined to favor (or oppose). However, keep in mind that who fielded a poll (and their partisan bias, if they have any) is only sometimes immediately apparent, because many polling firms conduct polls for other groups. Moreover, even when a pollster has no partisan bias, their polling can still be consistently inaccurate due to various quality control issues. FiveThirtyEight maintains pollster ratings for most major US pollsters, based on historical polling accuracy and adjusted for things like the type of election, sample size, and the performance of other polls. These ratings can be helpful when assessing the quality of polling sources.
- When was the poll conducted? Remember, polls are snapshots in time. Always look at when a poll was fielded and consider major events just before the survey went out that may have affected the results. Similarly, consider what has happened since the poll came out of the field and how that could have shifted public opinion. For example, early summer 2022 polling foresaw a midterm red wave, but the Supreme Court’s decision to overturn Roe v. Wade gave Democratic voters a boost of urgency, and the wave never materialized.
- How was the poll conducted? Did the poll call voters at home or on their cell phones? Did it use online questionnaires? Did it use a mix of methods? Live caller polling had long been the gold standard, but as internet access expanded, so did the use of online polls. The polling method matters, and a given method’s effect is constantly evolving. For example, in 2012, FiveThirtyEight found that online polls performed slightly better than telephone polling overall. However, in 2016, they discovered that Hillary Clinton’s lead was larger in live interview polls compared to non-live polls. Of course, the method only matters insomuch as it affects the sample’s representativeness, but even polling researchers still lack a clear understanding of the quality of online samples compared to phone-based sampling. Moreover, sample quality can vary from pollster to pollster based on many factors that may or may not be related to the polling method.
- Who was polled? A general U.S. poll typically looks at the general population, registered voters, or likely voters. The group polled can have a significant impact on results (and making comparisons between polls with different target samples can be very tricky). Voters are generally older, whiter, and thus more Republican, and about 40% of Americans are not registered to vote. Thus, the decision to poll a specific group can skew the results due to restrictions (or a lack of restrictions) on the sample.
- What questions were asked and how were they framed? Do not just look at the results when reading a poll; look at how the questions were asked. In 2022, Public Wise conducted a poll experiment in which respondents were asked about their thoughts on President Joe Biden’s decision to nominate a Black woman to the Supreme Court. The survey found, among other things, that how you phrase a question and the context you do (or do not) provide can have an enormous impact on responses. Some polls are not even genuine polls at all: polling experts use the term “push polls” for efforts designed to influence, rather than measure, respondents’ attitudes and beliefs by posing loaded questions that contain negative information about an opposing candidate or issue. Always look at how a question is asked when interpreting results. This can also be key to understanding why different polls may find quite different results on what seems to be the same topic.
- What is the poll trying to estimate? Polls are used for many tasks, such as estimating public opinion on a given issue, predicting election outcomes, or trying to understand whether a specific action or event had an effect on peoples’ views or behaviors. These different uses require distinct levels of precision when it comes to practical applications. The margin of error in a public opinion poll can be larger, without changing our overall conclusions in a meaningful way. For example, a 5% difference in either direction on a question about gun control likely doesn’t change the general conclusion that a large share of the population favors more gun control. But when trying to estimate the outcome of a tight political race where a candidate might win by a few percentage points, it is more important to have a much tighter margin of error.
How to Look at Polling in the Aggregate
In any given election cycle or when you are focused on a specific issue campaign, you will often be flooded with polls and surveys for months on end. This advice focuses on how to think about longer trends and things that can impact the polling ecosystem at large.
Things to consider when reading polls more generally:
- Look at a variety of polls from a variety of sources. A sample of one is not particularly helpful, whether you are conducting a poll or reading them. Looking at a large selection of polls from many different sources will give you a better understanding of the landscape as a whole. Pollsters make different decisions when conducting a poll and interpreting results, as the New York Times demonstrated when it gave four pollsters the same raw data and wound up with four different sets of results.
- Outliers are tricky.
Sometimes, polls produce unusual or extreme results that don’t align with other polls. It’s important to be cautious with such outliers, but the pollster’s reputation for reliability can help you decide how much weight to give them.

In the opposite vein, the phenomenon known as “pollster herding” can also lead you astray. Pollsters are often disinclined to publish polls that deviate too much from what others are putting out, for fear of being wrong. Because of this, pollsters might hold back on publishing outlier polls.

But sometimes, outlier polls are not outliers at all; they are accurate. That was the case in the 2014 Virginia Senate race, when outlier pollsters decided not to publish their findings in the campaign’s final stretch. Conversely, in 2020, Ann Selzer of the Iowa Poll released results in the run-up to the general election that put President Donald Trump 7 points up in Iowa. These results were a departure from the tighter race polls had been predicting in Iowa, and Selzer faced a great deal of blowback for releasing what was considered an “outlier” poll. When all was said and done, Selzer was right: Trump won Iowa by 8 points.
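Averaging across many polls, as the first tip above suggests, dampens the effect of any single outlier. A minimal sketch of a polling average (all poll figures are hypothetical), with sample-size weighting as one common refinement:

```python
# Hypothetical polls of the same race: (candidate_share, sample_size)
polls = [(0.48, 800), (0.51, 1200), (0.46, 600), (0.50, 1000)]

# Simple average treats every poll equally.
simple_avg = sum(share for share, _ in polls) / len(polls)

# Weighting by sample size gives larger (more precise) polls more influence.
total_n = sum(n for _, n in polls)
weighted_avg = sum(share * n for share, n in polls) / total_n

print(f"simple: {simple_avg:.1%}, size-weighted: {weighted_avg:.1%}")
```

Real aggregators go further, adjusting for pollster quality, house effects, and recency, but even a plain average is usually more stable than any individual poll.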
In sum, there is a long list of things to consider when reading a poll. Whether the poll is looking at public opinion in a single state or is being used as part of a larger model to predict a presidential election outcome, it is easy to lose sight of the forest for the trees. These tips will help you make the most of polls this primary and general election season.
1 Sometimes samples will also include oversamples of certain groups to ensure they have enough respondents to do subgroup analyses. When a group is oversampled, it means their share of the sample is larger than the share of the population the sample is supposed to represent.