Nothing makes the political world lose its gravity as much as gyrations in polls during the final few weeks of a presidential race.

But if we let social scientists serve as a psychological balm for our collective panic, we find that much of that volatility can be accounted for by errors that humans make, by methodological challenges inherent to polling, and by differences in how likely certain groups of voters are to respond to pollsters when their candidates suffer through a bad news cycle.

It seems logical that Hillary Clinton would suffer in the polls if a negative event, like the FBI director's letter to Congress about her emails, dominates news coverage. It is not logical. It is, in fact, a classic example of a post hoc ergo propter hoc fallacy. For one thing, there is no actual proof that a large enough cohort of undecided voters is going to shift their preferences at this late a date because they're reminded of Clinton's email scandals.

For another, this type of explanation filters out everything else that's going on, including slower-developing but more plausible drivers of voter preferences. It also ignores a quite obvious and well-studied phenomenon in polling: News events change the inclination of committed but less partisan voters to respond to pollsters. Simply put: If your candidate has a bad news cycle, you're likely to be less enthusiastic about your vote, less likely to respond to a pollster, and less likely to be honest with a pollster about your partisan or ideological affinities.

And because the media will cover polls that depart from the mean, bad news cycles can be artificially prolonged, and the event that triggered the news cycle can suddenly acquire a salience that it doesn't actually have.

This is what's known as "differential non-response," and it's one of the biggest issues that survey researchers have to deal with this close to an election. It turns out that, for a variety of reasons, it's much easier to get certain demographic groups to respond to a pollster's telephone call, regardless of the circumstances. If you're a white woman over the age of 50, you are more likely to respond to a pollster than if you are black, or Hispanic, or a millennial. Pollsters try to control for this in their weighting, but the subsamples being weighted are often so small that tiny shifts in who responds compound the error.
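One way to see why heavily upweighting a small subsample compounds error is Kish's effective-sample-size formula, which measures how much weighting shrinks a sample's real statistical power. The sketch below uses entirely hypothetical numbers (the sample sizes and a 4x weight are illustrative assumptions, not figures from any real poll):

```python
import math

def effective_n(weights):
    """Kish's effective sample size: (sum of weights)^2 / (sum of squared weights)."""
    return sum(weights) ** 2 / sum(w * w for w in weights)

def margin_of_error(n_eff):
    """95% margin of error for a proportion near 50%, in percentage points."""
    return 1.96 * math.sqrt(0.25 / n_eff) * 100

# Hypothetical 1,000-person sample: 950 easy-to-reach respondents at
# weight 1.0, plus 50 hard-to-reach respondents upweighted 4x so that
# their group matches its share of the electorate.
weights = [1.0] * 950 + [4.0] * 50

n_eff = effective_n(weights)
print(round(n_eff))                     # → 756, well under the nominal 1,000
print(round(margin_of_error(1000), 1))  # nominal margin of error: ~3.1 points
print(round(margin_of_error(n_eff), 1)) # effective margin of error: ~3.6 points
```

In this toy setup, upweighting just 5 percent of the respondents knocks roughly a quarter off the sample's effective size, which is why a handful of hard-to-reach respondents can move a published topline more than the stated margin of error suggests.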

Second: There are fewer pollsters conducting fewer state polls than before, which means there is more variation in the averages of those polls. Small fluctuations — essentially random fluctuations — seem larger than they are. Sean Trende, the senior elections analyst at Real Clear Politics, notes that "if the polls are coming slowly, where we only have three to five polls in an average at a time, the swings are going to look wild." FiveThirtyEight's Harry Enten found this troubling statistic: "From [a] month before election to nine days before it, we had 80 live interview polls in 2012 in 10 states closest to national vote. In 2016? Thirty-six."

Add to these layers of uncertainty two more facts:

1. Over time, the percentage of Americans who say they would participate in polls has declined significantly, from nearly one in three in 1997 to one in 10 in 2016, according to Pew.

2. Half of American households don't even have landlines. The younger you are, the less likely you are to have ever used a telephone with a cord attached to it.

Again, pollsters try to correct for these skews by weighting and by finding other ways to sample voters. But every such adjustment creates additional room for error, quite apart from the survey's statistical "margin of error."

To be clear: The best evidence we have right now is that, as Mark Blumenthal of SurveyMonkey says, virtually all of those likely to vote have made up their minds by now.

Those polls point to a narrow, but solid, Clinton victory.