How are social networks tackling hate speech?
AI programmes and human content reviewers crack down on offensive comments online
Hate speech is not a new problem for social networks, but Saturday's far-right rally in Charlottesville, Virginia, which left one protester dead, has prompted companies to take action against offensive groups online.
Here's how the biggest social media groups are responding to hate speech, as well as their plans to prevent offensive content from spreading:
Following the violent events in Charlottesville on Saturday, Engadget says Facebook has "shut down numerous hate groups in the wake of the attacks", including the event page for the Unite the Right march at which the violence took place.
The social media giant uses a combination of artificial intelligence (AI) and human content reviewers to find hate speech, an approach The Verge says removes around 66,000 hate speech posts per week. But the site adds that Facebook still "relies heavily" on users reporting content as offensive or hateful.
In May, Facebook added 3,000 people to its "team of content reviewers", says TechCrunch, bringing the total to 7,500. The move followed several global "content moderation scandals", including the use of Facebook Live "to broadcast murder and suicide".
But Wired says there have been situations where the company's "algorithmic and human reviewers" have labelled comments or posts as offensive without considering the context. For instance, the website says some words "shouted as slurs" are sometimes "reclaimed" by groups "as a means of self-expression."
Twitter's hateful conduct policy says the service does not "allow accounts whose primary purpose is inciting harm towards others" on the basis of areas such as race or sexual orientation.
The Independent reports that in recent months the site has introduced more systems and tools to detect and remove hate speech, as well as improving the process by which users manually report offensive material.
But the social media site has been in a "fair bit of hot water in recent months regarding a perceived lack of action in the wake of perceived threats", says TechCrunch. The criticism led one activist to spray "hate language on the streets outside the company's Berlin headquarters".
In the wake of Saturday's clash, the website says the account for the far-right site The Daily Stormer has been taken down, although "Trump's tweets that teeter on the edge of threatening nuclear war" apparently remain within the company's policy.
The discussion forum Reddit has also cracked down on hate speech; Engadget says the site has likewise "shut down numerous hate groups in the wake of the attacks".
Among the groups removed was the subreddit /r/Physical_Removal, a community whose members, Engadget says, "hoped that people in anti-hate subreddits and at CNN would be killed, supported concentration camps and even wrote poems about killing".
"We are very clear in our site terms of service that posting content that incites violence will get users banned from Reddit," Reddit told Cnet.
While Google is not a social network itself, the tech giant plays a key role in directing internet traffic and in determining which social apps users can access.
Since the clash in Charlottesville on Saturday, TechCrunch says the firm has removed the "conservative social network" Gab from its Play Store as it had become a "haven" for users banned from mainstream platforms.
Google says it does not support "content that promotes or condones violence against individuals or groups" based on certain criteria, adding that it depends "heavily upon users to let us know about content that may violate our policies".
But TechCrunch says "it's not clear what specifically Gab did that warranted its being kicked off the store", as the app is a chatroom and doesn't appear to actively promote hate speech. The website says "there's plenty of hate speech on Twitter and YouTube", but these are still available to download despite this week's "crackdown" on offensive content.
According to The Verge, Gab has "never been approved for placement on Apple's App Store."
What are others doing?
One notable case in the hate speech clampdown after Saturday's events was the domain registrar GoDaddy evicting The Daily Stormer's website from its service, says The Register.
The website says activists told the registrar that the group had made "extraordinarily vulgar and disparaging remarks" about the victim of the Charlottesville attack, Heather Heyer. GoDaddy gave The Daily Stormer 24 hours "to move the domain to another provider".
YouTube is also expected to "institute stricter guidelines with regard to hate speech", reports TechCrunch. This could see more videos being removed after users mark them as offensive, even if there's nothing illegal in the content.