Google bows to pressure to tackle extremist content

Internet giant introduces new measures to identify and remove terror-related material on YouTube


Google has unveiled new measures to identify and remove extremist material after coming under increasing political pressure to do more to tackle radical content online.

Plans include increasing the use of technology to help identify extremist and terrorism-related videos, hiring more independent experts for YouTube's Trusted Flagger programme, taking a tougher stance on videos that do not clearly violate Google policies and expanding YouTube's role in counter-radicalisation efforts.

This means the tech giant will, for example, take a tougher position on videos containing supremacist or inflammatory religious content, "even if they do not clearly violate its policies", says Reuters. Such videos will be preceded by a warning, will not carry advertising, and will not be recommended or eligible for user endorsements.


Laying out the new measures in an opinion piece for the Financial Times, Google general counsel Kent Walker said the internet giant was working with "government, law enforcement and civil society groups to tackle the problem of violent extremism online".

He added: "The uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now."

Governments have pressed Google and social media firms to do more to remove militant content and hate speech following a wave of terrorist activity in Germany, France and the UK.

Last week, Facebook said it had ramped up the use of artificial intelligence such as image matching and language understanding to identify and remove content quickly.

However, despite increasing political pressure over extremism, "Google is evidently hoping to retain its torch-bearing stance as a supporter of free speech by continuing to host controversial hate speech on its platform, just in a way that means it can't be directly accused of providing violent individuals with a revenue stream", says TechCrunch.
