Canada to deploy AI that can identify suicidal thoughts
Programme will scan social media pages of 160,000 people
The Canadian government is launching a prototype artificial intelligence (AI) programme this month to “research and predict” suicide risks in the country.
The Canadian government partnered with AI firm Advanced Symbolics to develop the system, which aims to identify behavioural patterns associated with suicidal thoughts by scanning a total of 160,000 social media pages, reports Gizmodo.
The AI company’s chief scientist, Kenton White, told Vice News that scanning social media platforms for information provides a more accurate sample than using online surveys, which have seen a drop in response rates in recent years.
“We take everyone from a particular region and we look for patterns in how they’re talking,” White said.
According to a contract document for the pilot programme, reported by Engadget, the AI system scans for several categories of suicidal behaviour, ranging from self-harm to suicide attempts.
The government will use the data to assess which areas of Canada “might see an increase in suicidal behaviour”, the website says.
This can then be used to “make sure more mental health resources are in the right places when needed”, the site adds.
It’s not the first time AI has been used to identify and prevent suicidal behaviour.
In October, the journal Nature Human Behaviour reported that a team of US researchers had developed an AI programme that could recognise suicidal thoughts by analysing MRI brain scans.
The system was able to identify suicidal thoughts with a reported accuracy of 91%. However, the study’s sample of just 34 participants was criticised by Wired as being too small to accurately reflect the system’s potential for the “broader population”.