The future is now
Facebook is rolling out software Wednesday that scans users' posts for language indicating suicidal or self-harming thoughts, BuzzFeed News reports. When such language is found, the software alerts Facebook's community team for review and can send the flagged user a message with suicide-prevention resources, including options to contact a helpline or a friend.
The decision to implement the software follows a number of suicides that have been broadcast on Facebook Live over the past several months. Facebook says its program is better at recognizing the warning signs of suicide and self-harm than human reporters are. "The AI is actually more accurate than the reports that we get from people that are flagged as suicide and self-injury," product manager Vanessa Callison-Burchold told BuzzFeed News. "The people who have posted that content [that AI reports] are more likely to be sent resources of support versus people reporting to us."
The AI alerts Facebook only in situations that are "very likely to be urgent," Callison-Burchold added. Facebook has also made "suicide or self-injury" a more prominent option for users when reporting a post or video. "In suicide prevention, sometimes timing is everything," explained Dr. John Draper, a project director for the National Suicide Prevention Lifeline, which has partnered with Facebook.
"There is this opportunity here for people to reach out and provide support for that person they're seeing, and for that person who is using [Facebook Live] to receive this support from their family and friends who may be watching," Facebook researcher Jennifer Guadagno told BuzzFeed News. "In this way, Live becomes a lifeline."