Facebook to expand artificial intelligence to help prevent suicide

The software works by identifying phrases and other clues a user posts on the site that could suggest they are suicidal


Facebook is expanding its automated suicide-prevention efforts worldwide.

The software, which originally rolled out in the US in March, “works by identifying phrases and other clues a user posts on the site that could suggest they are suicidal,” says Reuters. If the software determines that they are, “the user will be sent resources that can help them cope - such as the information for a telephone helpline,” the news agency adds.
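Facebook has not published how its detection system actually works, but the description above — matching phrases in a post and, on a positive match, surfacing helpline resources — can be illustrated with a deliberately simplified sketch. The phrase list, function name and threshold logic below are invented for illustration and bear no relation to Facebook's real model, which reportedly uses machine learning rather than simple keyword matching.

```python
from typing import Optional

# Purely illustrative: a toy phrase-matching check, not Facebook's method.
# The phrases and the helpline text are invented placeholders.
RISK_PHRASES = [
    "no reason to live",
    "want to end it",
    "can't go on",
]

HELPLINE_RESOURCE = (
    "If you need support, information for a telephone helpline is available."
)


def assess_post(text: str) -> Optional[str]:
    """Return helpline resources if the post matches a risk phrase, else None."""
    lowered = text.lower()
    if any(phrase in lowered for phrase in RISK_PHRASES):
        return HELPLINE_RESOURCE
    return None
```

In practice a system like the one described would score posts probabilistically and route high-confidence cases to human reviewers, rather than relying on an exact phrase list.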

Guy Rosen, Facebook’s vice president for product management, said the company was beginning to roll out the software outside the United States because tests there had been successful.

During the past month, he said, first responders checked on people more than 100 times after Facebook software detected suicidal intent.

“There have been cases where the first-responder has arrived and the person is still broadcasting,” said Rosen.

Facebook said it tries to have specialist employees available at any hour to call authorities in local languages.

“Speed really matters. We have to get help to people in real time,” Rosen said.

The idea of Facebook “proactively scanning the content of people’s posts could trigger some dystopian fears about how else the technology could be applied,” says TechCrunch.

When questioned, Rosen declined to say how Facebook would avoid the software being used to scan for political dissent or petty crime, replying only: “We have an opportunity to help here so we’re going to invest in that.”

Facebook’s chief security officer Alex Stamos did respond to the concerns, however, with what TechCrunch describes as “a heartening tweet signaling that Facebook does take seriously responsible use of AI”.

“With all the fear about how AI may be harmful in the future, it's good to remind ourselves how AI is actually helping save people's lives today,” the company’s CEO Mark Zuckerberg wrote separately in a post on the social network.

“Between Facebook's role in the 2016 election and that it has come under fire for experimenting with whether or not gaming your News Feed can alter your mood, the company needs to work on repairing its image these days,” says Engadget.

“Stories like this can help, but until there are more successes than unfortunate happenstances the social network needs to keep at it.”