Late in 2017, writer James Bridle noticed something strange about YouTube. Someone or something was creating odd, disturbing, and downright bizarre videos aimed at kids. After clips of Peppa Pig or some random nursery rhyme, suggestions were popping up for videos that were weird knockoffs or sometimes even violent.
Many of these clips have since been purged, but the heart of the problem remains: the YouTube algorithm. It's designed at its core to keep suggesting similar videos, a tendency that keeps sending viewers further and further down the rabbit hole.
YouTube's troubles only seem to multiply. A string of videos exploiting children has surfaced, and networks of pedophiles can be found in the comments sections. In response, companies as large as Nestlé and Disney have pulled ads from the platform, making video creators fear for their livelihoods. YouTube has become a home to the alt-right, as well as to anti-vax content, flat-earth conspiracy theories and more. The site is still massively popular and increasingly a core part of contemporary culture, but it's worth asking: Can YouTube be saved from itself?
When YouTube was purchased by Google in 2006, it was the start of "Web 2.0," the era of user-generated content and social media. Since then, YouTube has entrenched itself firmly in public consciousness. YouTubers like Philip DeFranco and Lilly Singh are legitimate stars unto themselves, and their style of vlogging and speaking to the camera has come to dominate web video. The platform has approximately two billion monthly active users, a number matched in scope only by Facebook.
In that sense, YouTube is emblematic of the issues affecting digital platforms and, by extension, societies in general. Like all ad-supported businesses, it is by necessity designed to maximize engagement and views. This has had all sorts of knock-on effects, even beyond the obvious bent toward inflammatory content. As but one example, star Lilly Singh had to take a mental health break because of the pressure of constantly needing to produce content to remain relevant to the site's algorithm.
And just as with Facebook, solutions are difficult because of the sheer scale involved. When the videos watched per day number in the billions, and hundreds of thousands of hours of video are uploaded every day, policing by humans alone is essentially impossible. As a result, YouTube must rely predominantly on AI tweaks, which are imperfect at best, occasionally produce unexpected results of their own, and are in turn gamed by creators.
What's more, because of the bias toward particular sorts of content — the controversial, the counterintuitive, the rebellious — an odd dynamic emerges whereby, for example, anti-vax content gets popular but scientifically rigorous content can only emerge as a response to it. Put more simply, no one is going to go viral with a video titled "Here are 5 reasons we know the Earth is round."
The situation is thus an exemplar of how we are all struggling with the internet. It is at once vital and deeply flawed, expansive yet limiting, inescapable yet at times genuinely harmful. What, then, are the options when platforms like YouTube have already worked their way into culture and their scale makes them resistant to disruption?
It can be tempting to suggest that the market will take its course. Enormous brands like MySpace and Nokia have evaporated in a matter of years, and disruption is the norm in tech. But when one looks at Facebook and Twitter — each continually plagued by problems with privacy, harassment, misleading content, and bad actors — it's hard to have faith in that passive approach.
At the same time, regulation is not a clear and simple thing. While we now seem well past the point at which some sort of regulation of tech giants became necessary, both the enormous revenue generated by Silicon Valley and its centrality to the economy in general mean a light hand will be required.
Rather, what is becoming increasingly difficult to avoid is that the debate between market solutions and regulation obscures the real issue: scale itself. The sheer size of these platforms is at the root of so many problems — how hard it is to track or counter harmful uses, how expensive it is to hire human moderators to review content, and how quickly content changes and is put to differing ends.
There's something unnerving about this phenomenon, and it's not just weird videos on YouTube. It's that in a very short amount of time, a few companies have amassed so many users that they have almost exhausted their addressable markets. It feels historic, a change that scholars will write about decades from now. But at the cusp of this radical upheaval in culture, perhaps the thing we need to consider is that tech is simply too big. Why not break these giants apart?