Briefing: How polls work
Long a staple of American politics, opinion polls no longer just reflect public opinion; they often shape it. But when polls get the results wrong, people ask: Can we trust the numbers?

How long have polls been around?
Polling as we know it today began in 1936, when a young statistician named George Gallup conducted the first poll using statistical modeling. He accurately predicted that Franklin Roosevelt would trounce Alf Landon. For decades after that, the polling business was dominated by Gallup and a few rivals. But the 1990s saw a polling explosion, as virtually every major news organization began conducting polls and treating them as news events equal in importance to the daily maneuverings of the candidates, to say nothing of the issues. Today, virtually all candidates and elected officials commission their own polls to gauge their popularity and whether their messages are working.

How does polling work?
Sampling public opinion, George Gallup once said, is like sampling soup: One spoonful can reflect the taste of the whole pot, if the soup is well-stirred. In other words, it’s all about finding a sample that reflects the larger population. Polling is based on the laws of probability. According to probability theory, it’s not necessary to sample the opinions of all 300 million Americans; a much smaller sample can reflect the larger population—if that sample is truly representative. So in surveying the opinions of the whole country, pollsters have to sample a proportional percentage of men and women; Republicans, Democrats, and independents; rural and urban residents; and so on. That sample group, moreover, has to reach a certain size threshold to be statistically accurate. For national polls, most pollsters use a sample of 1,500 as a rule of thumb. A sample that size will accurately reflect the whole within about 3 percentage points, a variance that statisticians call the margin of error.
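The "about 3 percentage points" figure for a sample of 1,500 follows from the standard sampling-error formula for a proportion. A minimal sketch, assuming a simple random sample and a 95 percent confidence level (the pure formula gives roughly 2.5 points; pollsters typically round up toward 3 to allow for real-world design imperfections):

```python
import math

def margin_of_error(n, z=1.96):
    """Worst-case 95% margin of error for a proportion (p = 0.5)
    from a simple random sample of size n."""
    return z * math.sqrt(0.25 / n)

print(round(margin_of_error(1500) * 100, 1))  # ~2.5 percentage points
```

Note the square root: quadrupling the sample only halves the margin of error, which is why polls much larger than 1,500 "aren't significantly more accurate" for the cost.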

How do pollsters take their samples?
These days, most national polls are conducted by telephone. Poll workers call randomly generated phone numbers, then conduct 15- to 20-minute interviews with those who respond. Most polls try to compile about 1,500 responses, since smaller samples have a larger margin of error and larger ones aren’t significantly more accurate. Pollsters then compare the pool of respondents to the broader population in terms of age, race, gender, and other characteristics. If the match isn’t perfect—it rarely is—pollsters use statistical techniques to “weight” some responses more heavily than others. But there are pitfalls. Results can be distorted by the wording of questions, the order they’re asked, even the interviewer’s tone of voice. That’s why some polling experts argue that news stories on polls should routinely include the full questionnaire, so people can judge whether the questions are biased. In the face of such hazards, says MIT political science professor Stephen Ansolabehere, “I’m perpetually surprised that results aren’t wrong more often.”
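The "weighting" step described above can be sketched in a few lines. This is an illustration only, with made-up numbers: each group's responses are scaled by the ratio of its known population share to its share of the sample, a simplified form of what statisticians call post-stratification.

```python
def poststratification_weights(sample_shares, population_shares):
    """Weight each demographic group so the weighted sample matches
    known population proportions (simplified post-stratification)."""
    return {group: population_shares[group] / sample_shares[group]
            for group in sample_shares}

# Hypothetical poll that over-represents women: 60% of respondents
# versus roughly 51% of the population.
weights = poststratification_weights(
    {"women": 0.60, "men": 0.40},
    {"women": 0.51, "men": 0.49},
)
# Each woman's response then counts 0.85x; each man's counts 1.225x.
```

Real pollsters weight on many characteristics at once (age, race, education, region), but the principle is the same: scale under-represented groups up and over-represented groups down.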

How often are they wrong?
Rarely. In every presidential election since 1980, with the exception of the 2000 Bush-Gore race, national polls have correctly predicted the winner, usually within a couple of points of the eventual tally. But that's not to say that polls are always understood by the public. People tend to brush off the margin of error, but it's crucial. If candidate A is leading candidate B by 55–45, with a margin of error of plus or minus 5 percentage points, the two could actually be tied, or candidate A could be leading 60–40. It's also important to remember that polls provide a snapshot of how voters feel at the moment, not necessarily how they'll vote on Election Day.
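The overlap in that 55–45 example can be made concrete. A quick sketch, treating each candidate's share as a simple interval of plausible values:

```python
def plausible_range(share, margin):
    """Interval of plausible true values for a reported poll share,
    given the poll's margin of error (both in percentage points)."""
    return (share - margin, share + margin)

a = plausible_range(55, 5)  # (50, 60)
b = plausible_range(45, 5)  # (40, 50)

# The intervals meet at 50, so a 50-50 tie is consistent with the poll.
could_be_tied = a[0] <= b[1]  # True
```

So a 10-point "lead" with a 5-point margin of error is the boundary case: anything from a dead heat to a 20-point blowout is consistent with the numbers.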

Do polls influence voter behavior?
Many political scientists say they do—by exerting a subtle form of peer pressure. Experts speak of a so-called bandwagon effect, in which voters flock to the candidate with a healthy lead in the polls because they want to pick a winner. On the flip side, there’s the underdog effect, when voters switch to the trailing candidate out of sympathy; that may have played a part in Hillary Clinton’s surprise win in New Hampshire. Then there’s the boomerang effect, when people are so sure that their favored candidate will win that they don’t bother to vote. Historians say the boomerang effect helped President Truman defeat Thomas Dewey in 1948; with polls showing Dewey holding an insurmountable lead, many Republicans stayed home and Truman snatched an upset victory.

Are polls bad for democracy?
They certainly have a downside. Polls help turn elections into proverbial “horse races,” in which more attention is paid to who’s ahead or behind than to candidates’ leadership qualities and ideas. And when one candidate has a big lead close to the election, as Bill Clinton did against Bob Dole in 1996, voters can lose interest and stay home on Election Day. Only 49 percent of eligible voters showed up that year—the lowest turnout since 1924. The polls “dampened voters’ interest and participation by announcing that the presidential contest was really no contest at all,” said political scientist Everett Ladd. But pollsters say their work satisfies a natural curiosity about what other people are thinking, while helping to identify the priorities of the electorate. As for those who complain that polls are inaccurate, pollsters don’t take it personally. “People always think there’s something wrong with the polls,” said pollster Micheline Blum, “if they don’t agree with them.”


 
