Platforms: Curbing the technology of hate
The massacre at two New Zealand mosques last week was a first, said Kevin Roose in The New York Times: “an internet-native mass shooting.” The accused gunman, Brenton Tarrant, broadcast the killings live on Facebook, with video designed to pander to the internet’s white supremacist subcultures. It was shared on all the major internet platforms, and in the hours after the shooting, not only Facebook but also YouTube, Twitter, and Reddit scrambled to take it down. The shooter’s actions suggest an acute awareness of his audience—he even paused in his broadcast to say “Subscribe to PewDiePie,” a reference to a popular YouTube personality. The heinous acts were carried out with the knowledge that the platforms “create and reinforce extremist beliefs”: Their algorithms “steer users toward edgier content,” their policies on hate speech are weak, and they’ve barely addressed how to remove graphic videos. Extremists are exploiting this with increasing skill, said Joan Donovan in The Atlantic. The New Zealand attacker “knew that others would be recording and archiving” his video so that it could be re-uploaded after each removal. In the first 24 hours after the attack, “Facebook alone removed 1.5 million postings of the video” and was still working around the clock days later.
Facebook did exactly what it’s designed to do, said Peter Kafka in Recode. That is, “allow humans to share whatever they want, whenever they want, to as many people as they want.” Of course, Facebook Live was never intended for white supremacists. But the company built its tools to make uploading effortless, and the platform is “fundamentally built” to spread content with little friction. Facebook’s recently announced shift toward private communication “wouldn’t prevent that stuff from going up,” and encryption might make such content even harder for Facebook to police. Facebook, Twitter, and Google (which owns YouTube) have invested heavily in artificial intelligence “designed to detect violence,” said Jon Emont in The Wall Street Journal. Unfortunately, it’s nearly impossible for AI to determine “which videos cross the line.” It has difficulty recognizing a person holding a gun, for instance, because there are many different types of guns and many stances for holding them. “Computers also struggle to distinguish real violence from fictional films.”
Actually, there is technology that can flag “obvious indicators” of extremism and prevent hate speech from spreading, said Ben Goodale in The New Zealand Herald (New Zealand). After all, if I searched online for a suitcase, smart ad software would target me instantly. These platforms compile digital profiles of what “I’m interested in—my age range, gender, hobbies, reading preferences, sporting affiliations, you name it.” All trackable, all within seconds. So why can’t the platforms pick up “the obvious indicators” of fanaticism? Simply saying “we can’t help it” or “that’s not our job” is no longer acceptable, said Margaret Sullivan in The Washington Post. Facebook pours “tremendous resources and ingenuity” into maximizing clicks and advertising revenue, yet it relies on low-paid moderators and faulty algorithms to control content. But major news companies have grappled with such questions for centuries, and “editorial judgment” from the platforms is not merely possible. “It’s necessary.” ■