Clarence Thomas and the enigma of social media
America owes Clarence Thomas an expression of gratitude.
In a 12-page concurrence to an opinion dismissing a case growing out of Donald Trump's decision to block certain people from reading his Twitter account while president, Thomas has raised a series of sweeping questions about the nature of social media and how we treat it legally and philosophically. His answers to these questions are mostly wrong. But that doesn't mean his arguments should be dismissed. On the contrary, we're in his debt for helping to clarify just how novel social media really is — and how imperfectly our thinking and existing law apply to it and the dilemmas it poses to our body politic.
Since the shock of the 2016 election and the realization that online disinformation and radicalization played a significant role in its outcome, both Washington and Silicon Valley have increasingly acknowledged the need for digital platforms to more actively monitor and control what is posted and promoted on them. This conviction intensified in the run-up to the 2020 election, and it reached a fever pitch during the turbulence of the weeks that followed, which culminated in the insurrectionary violence on Capitol Hill on Jan. 6 and the subsequent decision by Twitter and Facebook to ban the outgoing president's social media accounts.
Put in slightly different terms, the emerging consensus seems to be that social-media companies need to think of themselves as editors making millions upon millions of decisions every day about what to accept for publication. This process, in the argot of the moment, is content moderation. Section 230 (of Title 47 of the U.S. Code) has long distinguished online platforms from publishers, so that online companies aren't legally liable, with a few exceptions, for what third parties post on their sites. It also protects these companies from legal liability for removing third-party material deemed offensive or obscene. What's happening now is that the terms of offensiveness and obscenity are being greatly expanded to include much broader categories of speech and expression, including fake news, lies, hate, gaslighting, and incitement, with companies like Twitter, Facebook, Google, and Amazon empowered to make the myriad judgment calls about how, when, and against whom to apply the restrictions.
That's where Clarence Thomas comes in.
In his concurrence, Thomas acknowledges that, in addition to the special immunity Section 230 confers on websites and digital platforms, private enterprises in general are typically presumed to possess the freedom (grounded in the First Amendment) to disassociate from forms of expression they disapprove of. The government normally couldn't force social media companies to publish ideas or host people they don't want to be associated with — any more than it could force a newspaper or magazine to publish specific ideas or individuals against its will.
Yet there are important exceptions to this rule — and Thomas suggests that there is a strong case for thinking that social media platforms should be treated as exceptions. That's primarily because of their extraordinary size and power, and because of the crucially important role they've come to play in our public life. As Thomas points out, social media companies "provide avenues for historically unprecedented amounts of speech, including speech by government actors."
This "enormous control over speech," according to Thomas, turns these private enterprises into something very much like political gatekeepers, defining what can be said and who can say it on a national level. This is a power orders of magnitude greater than what The New York Times enjoys when it decides whether or not to publish an op-ed in its pages and on its website, or when Fox News decides whether or not to invite a guest onto one of its prime-time programs. A more fitting analogy would be to a company that provides every citizen with a microphone for use as the precondition of full citizenship and political participation — and then selectively exercises the power to shut it off for certain people or groups when the company decides such an act is warranted.
We saw one example of this power being exercised in the weeks leading up to the 2020 election, when Twitter and then Facebook blocked The New York Post from tweeting in order to keep it from promoting a murkily sourced hit job on Hunter Biden, the troubled son of the Democratic nominee for president. We saw another example when, in the wake of the Jan. 6 insurrection, several social media companies banned Trump from tweeting through the final weeks of his presidency. (This restriction on the former president remains in effect today, three months later.)
In these cases, social media companies are exercising powers that, when it comes to controlling political speech, far outstrip those possessed by The New York Post or even the president of the United States. Like many liberals and progressives, I very much enjoyed not having to endure a stream of Trump tweets filled with incendiary provocation and lies about election fraud through his administration's final days, and I think muting him through that post-insurrection period, as we prepared to inaugurate his successor, was very good for the country. Yet the decision to do so was made entirely by a small handful of private companies acting with effectively no political oversight or democratic accountability.
That those on my side of the country's main political divide were happy about the exercise of this enormous power in these particular cases shouldn't lead us to turn a blind eye to the ominous implications. Who exactly is running the political show in this country (and the world)? And what are the limits of their powers?
In his concurrence, Thomas suggests a number of different possibilities for how we might begin to think about social media companies, all of them pointing in the same general direction. In some passages, Thomas claims that such enterprises resemble businesses that provide essential public services — like a pipeline, network, or utility. What such companies have in common is that they are constrained in various ways by onerous government regulations because their activities directly impact the public interest. But in other paragraphs of the concurrence, Thomas indicates that he thinks social-media companies more resemble hotels or restaurants, businesses that are expected to treat all comers equally, without discrimination. Legally speaking, the first group of businesses is called common carriers. The second is places of public accommodation.
I find all of Thomas' analogies quite strained. (I'll explain why in a moment.) But I think it's possible to tease out a metaphor from his various comments about social-media companies to capture how he views them and their role in our politics. He sees Twitter, Facebook, Google, and Amazon as akin to gigantic public billboards on which the vast majority of Americans regularly communicate, share information, conduct commerce, and express political opinions. On that model, efforts at content moderation amount to the billboard owners taking it upon themselves to decide who can post on them and what they can say when they do — which is to give those owners government-like power to determine the rules of the political game. Instead of permitting these owners this level of power over our public life, Thomas suggests regulating the billboards on the model of anti-discrimination law, with no one banned and everyone welcome to compete in overlapping marketplaces of information, commerce, ideas, and opinions.
There are a slew of problems with this hyper-libertarian vision of online life, but I want to focus on just one — which is that social media platforms are very different from gigantic billboards (and pipelines and utilities and hotels and restaurants). What we see on the billboards isn't some neutral reflection of what users are posting on them. What we see is a product of the interaction between what people post and the complex, proprietary algorithms devised and controlled by the companies.
It's not even possible to say that each platform is equivalent to a single billboard. On the contrary, each of us sees a distinct billboard that's custom-tailored (curated) for us, with the precise content, placement, and rank ordering determined by our past interactions with the platform and the algorithm's best effort at anticipating our wants, desires, hopes, and needs.
To object to the prospect of more intentional (and intentionally political) content moderation is to wake up to a problem quite late. Twitter, Facebook, and the other social media companies could vow tomorrow never to deplatform another user or delete another politically controversial post, and they would still be engaged in massive amounts of content moderation, adjustment, and manipulation of what we see when we visit their websites. Our news feeds, searches, scrolling, and potential purchases would still be curated especially for each of us with an eye to keeping us maximally engaged and clicking.
The problem with Thomas' analysis is that it's not radical enough. He's right to recognize the political danger posed by social media — and right to note the many ways in which the megabusinesses in this sector are categorically different from other kinds of companies — but he fails to grasp the true character and parameters of the threat. These companies aren't simply enormous common carriers, and they can't just be treated as places of public accommodation on a national scale. They are different and new, posing distinct and novel challenges to democracies around the world, and we are going to need new kinds of laws and regulations to deal with them.
Whether we determine that the right response demands the development of new tech-based forms of regulation, or we ultimately decide the companies need to be broken up, Clarence Thomas deserves credit for using his position as one of nine justices on the Supreme Court to raise pointed and fruitful questions about one of the most pressing problems of our time.