Facebook can't protect voters from their own gullibility
"The whole thing is just a big distraction for the country," President Trump's son-in-law and senior adviser, Jared Kushner, said Tuesday of investigations into Russian meddling in the 2016 election. "You look at what Russia did — buying some Facebook ads to try and sow dissent. And it's a terrible thing, but I think the investigation and all the speculation that's happened over the past two years has had a much harsher impact on our democracy."
As Kushner no doubt knows, “buying some Facebook ads” is not all the Russian government did. The social media operations alone were far more complex and varied, to say nothing of the hacking of Democratic officials' email accounts. Kushner's convenient oversimplification deserves the pushback it's received.
But his comments do raise an underexamined issue: how to deal with informal election interference that works, essentially, because of our failure to see through it. How do we defend against election tampering that runs on our own gullibility and ignorance?
The investigations Kushner decried have been valuable for establishing what happened, but their findings also highlight law enforcement's very limited means to right these wrongs. Think about what Russian social media operatives actually did: In addition to the ads, they created ordinary accounts, groups, and pages to simply interact with voters. Much of their reach was organic, meaning they didn't need to pay for advertising to get clicks, likes, and shares.
Millions of Americans followed and propagated the Russians' content because they decided it was worth that attention. (Special Counsel Robert Mueller's report notes an estimated 126 million people on Facebook and 1.4 million people on Twitter were reached by posts from Russian Internet Research Agency-controlled accounts, though that reach certainly does not guarantee effective influence.) In some cases, Americans even attended real-life rallies and meet-ups organized online by Russian accounts, and they went because they wanted to go.
And here's the rub where redress and prophylaxis are concerned: None of that is illegal, nor is there an obvious way to ban any of it without thoroughly shredding the First Amendment's protections of speech, press, and assembly.
This isn't like stuffing a physical ballot box, compromising online voting, or hacking political opponents' emails. Posting fake news articles or misleading political memes on the internet is not a crime — and with good reason. Obviously the world in general and American politics in particular would be better if no one were ever wrong on the internet, but that is not a circumstance state regulation can or should attempt to produce. (I suppose some sort of libel prosecution might be attempted for content which spreads falsehoods about specific individuals, but American defamation laws are deliberately protective of free speech, especially when government officials and matters of public concern are involved. That many fake news sites include a satire disclaimer and that memes are generally seen as an unserious medium combine to make this sort of solution even less probable.)
So maybe social media companies will be able to proactively identify and shut down the accounts of would-be election meddlers before they can develop much influence. And maybe Washington will be able to coerce state actors, like Russia, into leaving our elections alone. But there's only so much protection and punishment possible. This sort of social media meddling will never go away entirely.
Unless it just stops working. Would anyone bother making fake social media accounts to dupe voters if voters couldn't be duped? Would anyone spend their days crafting false stories and lying memes that hit just the right balance of outrage and plausibility if no one shared or believed them?
The good news is most people already avoid spreading false content online. A study published in Science Advances in January found only 8.5 percent of Facebook users shared one or more fake news stories in 2016. Even if the share rate were twice that on Facebook or other social platforms like Twitter, that would still mean the vast majority of Americans are never actively disseminating any of this stuff online. Of course, not everyone who is influenced by such content will share it, and that broader, less visible effect is difficult, if not impossible, to measure, especially because of the subtle mechanism in action. The meddler's work is designed to play into our biases, confirm our fears, inflame our emotions, and shift our views — all without our noticing any of this is happening.
I don't have any panacea to offer against that influence, though digital literacy and a self-interrogating skepticism would go a long way toward defanging this 2016-style interference.
The Science Advances study found age to be the single demographic factor consistently linked to sharing fake news: "More than one in 10, or 11.3 percent, of people over age 65 shared links from a fake news site, while only 3 percent of those age 18 to 29 did so," the study authors explained in The Washington Post. "These gaps between old and young hold up even after controlling for partisanship and ideology. No other demographic characteristic we examined — gender, income, education — had any consistent relationship with the likelihood of sharing fake news."
It's possible that "digital native" generations will also be more easily fooled by fake news as they age, the researchers allowed, but I suspect their alternative hypothesis — that age is functioning as a proxy for digital illiteracy — is closer to the truth. Either way, developing the skills to identify false information on the internet has officially become part of what it means to be a responsible voter.
Skepticism, including scrutiny of our own feelings and assumptions, is necessary for all demographics. Research shows manipulative, false content is crafted to evoke more dramatic emotions in us than true news stories typically elicit. Those stronger reactions prompt higher rates of online engagement, which in turn attracts broader attention. When a story or image we encounter on social media "produces feelings of amazement, anxiety, shock, and repulsion," as fake news is wont to do, our habit should be to notice that response in ourselves and be skeptical of the content's claim.
Even taking a few seconds to google additional sources may be enough to debunk many lies. And if we don't have the time, wherewithal, or discipline to practice such skepticism consistently, removing all political content from our social media feeds is a good alternative. The news — the real news — will still be available, whenever you want to read it, from sources far more trustworthy than some random (and maybe Russian) page whose grainy, half-literate memes your aunt just loves to share.
Russia's 2016 election meddling wasn't just Facebook ads (and accounts and groups and pages and so on), but it was those, too. This type of interference almost certainly can't be stopped by law enforcement, but its influence is something any and all of us can negate.