Canada to deploy AI that can identify suicidal thoughts
Programme will scan social media pages of 160,000 people

The Canadian government is launching a prototype artificial intelligence (AI) programme this month to “research and predict” suicide risks in the country.
The Canadian government partnered with AI firm Advanced Symbolics to develop the system, which aims to identify behavioural patterns associated with suicidal thoughts by scanning a total of 160,000 social media pages, reports Gizmodo.
The AI company’s chief scientist, Kenton White, told Vice News that scanning social media platforms for information provides a more accurate sample than using online surveys, which have seen a drop in response rates in recent years.
“We take everyone from a particular region and we look for patterns in how they’re talking,” White said.
According to a contract document for the pilot programme, reported by Engadget, the AI system scans for several categories of suicidal behaviour, ranging from self-harm to suicide attempts.
The government will use the data to assess which areas of Canada “might see an increase in suicidal behaviour”, the website says.
This can then be used to “make sure more mental health resources are in the right places when needed”, the site adds.
It’s not the first time AI has been used to identify and prevent suicidal behaviour.
In October, the journal Nature Human Behaviour reported that a team of US researchers had developed an AI programme that could recognise suicidal thoughts by analysing MRI brain scans.
The system was able to identify suicidal thoughts with a reported accuracy of 91%. However, the study’s sample of just 34 participants was criticised by Wired as being too small to accurately reflect the system’s potential for the “broader population”.