Facebook is using artificial intelligence to scan posts for suicide and self-harm warning signs

Facebook preventing suicide.
(Image credit: Sean Gallup/Getty Images)

Facebook is rolling out software Wednesday that scans users' posts to identify language indicating suicidal or harmful thoughts, BuzzFeed News reports. In cases where indicative language is found, the software alerts Facebook's community team for review and can send a message with suicide-prevention resources to the flagged user, including options such as contacting a helpline or a friend.

The decision to implement the software follows a number of suicides that have been broadcast on Facebook Live over the past several months. Facebook says its program is better at recognizing the warning signs of suicide and self-harm than human reporters are. "The AI is actually more accurate than the reports that we get from people that are flagged as suicide and self-injury," product manager Vanessa Callison-Burch told BuzzFeed News. "The people who have posted that content [that AI reports] are more likely to be sent resources of support versus people reporting to us."

Jeva Lange

Jeva Lange was the executive editor at TheWeek.com. She formerly served as The Week's deputy editor and culture critic. She is also a contributor to Screen Slate, and her writing has appeared in The New York Daily News, The Awl, Vice, and Gothamist, among other publications. Jeva lives in New York City.