Social media: Time to get rid of ‘trending’?
YouTube’s list of “Trending” videos, often the first thing visitors see, typically features film trailers, buzzy late-night TV segments, and funny clips, said John Herrman in The New York Times. But for a few hours last week, the platform’s No. 1 trending video was far uglier, falsely claiming a teenage survivor of the Parkland school shooting was a “crisis actor” coached by anti-gun activists. Thanks to the site’s promotion, the video, made by an Idaho man with fewer than 1,000 followers, accumulated more than 200,000 views before it was removed. This wasn’t the first time that YouTube had served “as an accomplice in the rapid spread” of nasty conspiracy theories. After last year’s gun massacres in Las Vegas and Texas, the site’s algorithm gave prime placement to videos packed with “discredited and unproven information” supposedly showing the attacks had been staged. Facebook’s and Twitter’s trending lists were also awash last week in unfounded conspiracy theories involving the Parkland teens, said Craig Timberg in The Washington Post. The sites’ continued “weakness in detecting misinformation” shows that some of the richest companies in the world “are losing against people pushing content rife with untruths.”
It’s clearly “time to end trending,” said Brian Feldman in NYMag.com. Although Twitter, YouTube, and Facebook all prominently feature trending lists, “none of them has a public or transparent definition—let alone a common one” for what should be on them. “Trending” is not the same as “popular”; it would more accurately be described as “popular, in some relative, technically defined way.” YouTube’s algorithm, for instance, selects videos based in part on audience growth and how old the video is, but “with no eye toward accuracy or quality.” Facebook’s and Twitter’s algorithms are even more opaque. The system is clearly broken, said Issie Lapowsky in Wired.com. Because trending tools seek out topics that are getting lots of conversation, they tend to “naturally drive the public consciousness toward topics of outrage.” The result is that the companies help spread misinformation, and then hide behind the notion that it was simply the algorithm’s fault.
“Why can everyone spot fake news but the tech companies?” asked Charlie Warzel in BuzzFeed.com. Whenever these firms stumble through a national breaking news event and fail to spot hoaxes and untruths, they promise to do better next time. Yet they’ve proved themselves incompetent or unwilling “over and over and over again.” Google and Facebook are “wildly profitable” and employ some of the world’s best minds. Why not create a team to monitor for “clearly misleading conspiratorial content” during breaking news? “There is no digital replacement for educated humans who are dedicated to sorting and verifying what stories get passed along,” said Joshua Topolsky in TheOutline.com. We’ve tried it the tech companies’ way. “It’s not working.”