Canada to deploy AI that can identify suicidal thoughts

Programme will scan social media pages of 160,000 people

The new system will analyse online posts to pinpoint potential suicide risk areas in the country (Image credit: Getty photos)

The Canadian government is launching a prototype artificial intelligence (AI) programme this month to “research and predict” suicide risks in the country.

Kenton White, chief scientist of the AI company behind the project, told Vice News that scanning social media platforms provides a more accurate sample than online surveys, whose response rates have dropped in recent years.

“We take everyone from a particular region and we look for patterns in how they’re talking,” White said.

According to a contract document for the pilot programme, reported by Engadget, the AI system scans for several categories of suicidal behaviour, ranging from self-harm to suicide attempts.

The government will use the data to assess which areas of Canada “might see an increase in suicidal behaviour”, according to the document.

This can then be used to “make sure more mental health resources are in the right places when needed”, it adds.

It’s not the first time AI has been used to identify and prevent suicidal behaviour.

In October, the journal Nature Human Behaviour reported that a team of US researchers had developed an AI programme that could recognise suicidal thoughts by analysing MRI brain scans.

The system was able to identify suicidal thoughts with a reported accuracy of 91%. However, the study’s sample of just 34 participants was criticised by Wired as being too small to accurately reflect the system’s potential for the “broader population”.
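The sample-size criticism can be made concrete with a rough calculation. Assuming the 91% figure corresponds to about 31 of 34 correct classifications (an assumption for illustration; the study's exact counts are not given here), a standard Wilson score interval shows how wide the uncertainty around that accuracy really is:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    margin = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - margin, centre + margin

# 91% accuracy on 34 participants ~ 31 of 34 correct (illustrative assumption)
lo, hi = wilson_ci(31, 34)
print(f"95% CI: {lo:.0%} to {hi:.0%}")  # roughly 77% to 97%
```

With so few participants, the plausible range for the true accuracy stretches from the high 70s to the high 90s, which is why critics argued the headline figure may not hold for the broader population.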