How are social networks tackling hate speech?

AI programmes and human content reviewers crack down on offensive comments online

(Image credit: Zach Gibson/AFP/Getty Images)

Hate speech is not a new phenomenon on social networks, but Saturday's far-right rally in Charlottesville, Virginia, which left one protester dead, has prompted companies to take action against offensive groups online.

Facebook


Following the violent events in Charlottesville on Saturday, Engadget says Facebook has "shut down numerous hate groups in the wake of the attacks", including the "event page for the Unite the Right march that conducted the violence".

The social media giant uses a combination of artificial intelligence (AI) and human content reviewers to find hate speech, removing, according to The Verge, "66,000 hate speech posts per week". But The Verge adds that Facebook "relies heavily" on users reporting content as offensive or hateful.

In May, Facebook added 3,000 people to its "team of content reviewers", says TechCrunch, bringing the total to 7,500. The move followed several global "content moderation scandals", including the use of Facebook Live "to broadcast murder and suicide".

But Wired says there have been situations where the company's "algorithmic and human reviewers" have labelled comments or posts as offensive without considering the context. For instance, the website says some words "shouted as slurs" are sometimes "reclaimed" by groups "as a means of self-expression."

Twitter

Twitter's hateful conduct policy says the service does not "allow accounts whose primary purpose is inciting harm towards others" on the basis of characteristics such as race or sexual orientation.

The Independent reports that in recent months the site has introduced more systems and tools to detect and remove hate speech, as well as improving the process by which users manually report offensive material.

But the social media site has been in a "fair bit of hot water in recent months regarding a perceived lack of action in the wake of perceived threats", says TechCrunch, leading to an activist spraying "hate language on the streets outside the company's Berlin headquarters."

In the wake of Saturday's clash, the site says the account of the far-right website The Daily Stormer has been taken down, although "Trump's tweets that teeter on the edge of threatening nuclear war" appear to fall in line with the company's policy.

Reddit

The discussion site Reddit has also cracked down on hate speech. Engadget says the website has likewise "shut down numerous hate groups in the wake of the attacks".

Among the groups removed from the social media site was the subreddit /r/Physical_Removal, a page Engadget says "hoped that people in anti-hate subreddits and at CNN would be killed, supported concentration camps and even wrote poems about killing."

"We are very clear in our site terms of service that posting content that incites violence will get users banned from Reddit," Reddit told CNET.

Google

While Google isn't exclusively a social network, the tech giant plays a key role in directing internet traffic and the social apps that users can access.

Since the clash in Charlottesville on Saturday, TechCrunch says the firm has removed the "conservative social network" Gab from its Play Store as it had become a "haven" for users banned from mainstream platforms.

Google says it does not support "content that promotes or condones violence against individuals or groups" based on certain criteria, adding that it depends "heavily upon users to let us know about content that may violate our policies".

But TechCrunch says "it's not clear what specifically Gab did that warranted its being kicked off the store", as the app is a chatroom and doesn't appear to actively promote hate speech. The website says "there's plenty of hate speech on Twitter and YouTube", but these are still available to download despite this week's "crackdown" on offensive content.

According to The Verge, Gab has "never been approved for placement on Apple's App Store."

What are others doing?

One of the most notable cases of the hate speech clampdown after Saturday's events is the domain name registrar GoDaddy evicting The Daily Stormer's website from its service, says The Register.

The website says activists told the registrar that the site had made "extraordinarily vulgar and disparaging remarks" about the victim of the Charlottesville attack, Heather Heyer. GoDaddy gave The Daily Stormer 24 hours "to move the domain to another provider".

YouTube is also expected to "institute stricter guidelines with regard to hate speech", reports TechCrunch. This could see more videos being removed after users mark them as offensive, even if there's nothing illegal in the content.