The slippery problem of policing hate on the internet

Who gets to decide what constitutes hate?


When my septuagenarian father first started to use the internet a decade ago, he was amazed at the wealth of content, the sheer endlessness of it — until he read the comments. "Anyone can just say anything they want to?" he asked me incredulously after scrolling underneath a YouTube video. I explained to him that this was both the up- and downside of the new medium: In reducing the barriers to having a platform, inevitably, things both brilliant and awful would make their way there. He understood, but remained disheartened.

For a time, the giants that came to dominate the digital world — the Googles, Facebooks, and Twitters — allowed and even encouraged this neutral approach to content. But no longer, it seems. This week, after the horrific events in Charlottesville in which a counter-protester was killed by someone with white nationalist links, tech companies began cutting off their services to some racist websites. Now we are deeper into the murky waters of corporations policing speech — and with that, it seems likely we will have to confront the specter of regulating the internet.

It began with GoDaddy, which provides the domain names we know as web addresses. After a post on the white supremacist website The Daily Stormer criticizing Charlottesville victim Heather Heyer drew a surge of criticism, the domain registrar cut off the site's web address. When The Daily Stormer tried to move its domain to Google, Google quickly blocked it as well. Other companies followed: Apple stopped allowing Apple Pay to work on white supremacist sites, and PayPal cut off payments to them too.


This represents a marked change. Tech has always taken a hands-off approach to content. "We are the free speech wing of the free speech party" was for years the line at Twitter. Facebook still allows numerous racist groups to operate on its site. And Google filters from its search results only content that is outright illegal. But this sometimes commendable commitment to neutrality has had consequences. It has allowed hatred and harassment to fester on Twitter, Facebook, and elsewhere. It has, of course, also allowed the alt-right and white supremacists to flourish online, buoyed as they now are by a sympathetic American president.

There is thus something heartening about tech's intervention. After years of sitting too comfortably on the sidelines, here, at an important moment in history, these companies appear to be stepping up against unquestionably evil ideologies. And because companies like Google and Facebook constitute a kind of modern infrastructure for social relations and media consumption, it can feel good, as an observer, to see them help cut off oxygen to the pernicious and insidious viewpoints now so plainly and terrifyingly in view.

But the ambivalence of these moves was crystallized when Cloudflare, a company that acts as a buffer protecting sites from hacking and denial-of-service attacks, also cut off its services to The Daily Stormer — not because of a strict content violation, but because white supremacists had begun claiming that Cloudflare supported their views. In a memo, CEO Matthew Prince, who has long maintained a content-neutral approach despite despising the views of some sites, admitted his move was essentially arbitrary and convenient: Cloudflare kicked off The Daily Stormer so it could get out from under a cloud, and only then could the real discussion about how to regulate content begin.

And the problem is a sticky one. After all, Google, GoDaddy, et al. are private companies, and their reactions can be both moral and driven by PR. Each said it acted because of violations of its terms of service — despite the fact that racist sites like Stormfront have gone largely untroubled for years. The problem, as Will Oremus at Slate pointed out, is that when the president of the United States himself can appear to conflate white supremacists with leftist counter-protesters, such ad hoc rationalizations become dangerous, subject to the ebb and flow of popular attention and to dubious definitions and understandings.

Making matters worse, relying on the goodwill of tech companies is risky given the culture of tech itself. As seen in the rise of Facebook backer Peter Thiel, and the strong support given to now-infamous "Google Manifesto" author James Damore, there is a strong libertarian, reactionary streak in Silicon Valley. Given the vast amounts of wealth concentrated in an industry that also knows how to scale up quickly, it is not hard to imagine an alternative infrastructure of domain registrars, content delivery networks, or other parts of the internet's backbone being funded by sympathetic parties and used to disseminate hatred across the web. Indeed, just this week it emerged that Pax Dickinson, a hard-right tech personality, was willing to help The Daily Stormer find a new host.

This is the perennial tension of market-based solutions to social problems: They are arbitrary, driven by sentiment rather than principle, and can go wrong very quickly.

Yet surprisingly, it was just this issue that Cloudflare's Prince cleverly highlighted in his memo. He also outlined the intricate nature of internet infrastructure, made up as it is not just of domains or servers, but of a whole host of technologies responsible for routing, hosting, platforms, backups, and more. Each of those parts serves a different function and must be treated differently. And it is on such details that the discussion should focus. After all, there may be a case for regulating how easy it is to find certain sites, but not for the far more drastic step of regulating what content can appear online at all.

But it is just that murkiness that raises the specter that elicits terror in most techies and users alike: not just self-regulation by tech companies, but regulation by the state. It is an immensely difficult issue, not only because of the obvious free speech concerns, but also because government tends to lag behind an industry that moves so fast that companies dominant now may be defunct in a decade. Given how arbitrary and self-serving the reactions have been, though, it is perhaps time to admit we cannot rely on the goodwill of technology companies alone — and instead accept that, even in tech, we need the push-pull tension between public and private that eventually comes to every central facet of our society.

After all, at this moment in history, reactionary, racist forces appear to be threatening to build and coalesce. What is at stake on this now-ubiquitous medium is not merely, say, the offended sensibilities of my elderly father — questions about what can or should be said online — but rather the kind of social infrastructure and culture we want to build for our children.

Navneet Alang

Navneet Alang is a technology and culture writer based out of Toronto. His work has appeared in The Atlantic, New Republic, Globe and Mail, and Hazlitt.