Social media: Can anyone stop the hate?
Online hate is boiling over into real-world violence, said Andy Rosen in The Boston Globe. Robert Bowers, accused of murdering 11 people at a Pittsburgh synagogue last month, left a stream of anti-Semitic messages on Gab, a social network favored by far-right extremists. Cesar Sayoc, the Florida man who allegedly mailed bombs to leading Democrats, had been reported for making death threats on Twitter. The incidents highlight the biggest challenge facing social media firms: “What to do about the threats and abuse that pollute their platforms.” Facebook and Twitter have tried to use algorithms to crack down on online vitriol, but those efforts have merely “highlighted the limitations of today’s technology.” So far, “algorithms have proved no match for the nuance of human language.” Facebook has hired 7,500 humans to moderate content, but the difficulty for these employees is deciding “what is acceptable and what is not.” What one person views as demeaning, another may see as political speech that’s worth protecting. Platforms with strict rules also run the risk of driving hateful users to fringe services such as Gab, where it’s harder for society “to track the threat or reckon with it.”
Tech companies have had some success in tackling hate speech and extremism, said Patrick Tucker in DefenseOne.com. Back in 2014, the problem was “content from extremists of a different sort: violent jihadist groups such as ISIS.” Facebook began employing contractors to track jihadist content in extremist chat rooms so that they’d be ready to censor the material when it appeared on the platform. Intelligence sources also tell Facebook, “in as close to real time as possible, when bad content is being released,” says Erin Marie Saltman, a counterterrorism expert at the company. Social media firms must adopt similar tactics for domestic extremism. Many tech companies pay bounties to programmers who find bugs in their code, said Ina Fried in Axios.com. Why not do the same for users who report hate speech? Tech needs to devote the same energy to “minimizing hate and harassment” as it does to boosting profits.
There’s a chance that in more-developed countries “things will stabilize,” said Ryan Broderick in BuzzFeedNews.com. Wealthier consumers in those nations now increasingly get their news from reliable sources located behind paywalls, while others still make do with “algorithmically served memes, poorly aggregated news articles, and YouTube videos.” In the developing world, though, the outlook is darker: Social media inundates users with anti-Muslim videos in Myanmar and Hindu nationalist propaganda in India. Worryingly, things could get worse everywhere, said Joan Solsman in CNET.com. Deep fakes—manipulated videos that can “turn almost anybody into an audiovisual puppet”—haven’t yet surfaced in the U.S., but it’s only a matter of time before they do. “Ask an expert about escaping fake news in your social feed and you’ll get a bleak response: You can’t.” ■