Canada to deploy AI that can identify suicidal thoughts
Programme will scan social media pages of 160,000 people
The Canadian government is launching a prototype artificial intelligence (AI) programme this month to “research and predict” suicide risks in the country.
The Canadian government partnered with AI firm Advanced Symbolics to develop the system, which aims to identify behavioural patterns associated with suicidal thoughts by scanning a total of 160,000 social media pages, reports Gizmodo.
The AI company’s chief scientist, Kenton White, told Vice News that scanning social media platforms for information provides a more accurate sample than using online surveys, which have seen a drop in response rates in recent years.
“We take everyone from a particular region and we look for patterns in how they’re talking,” White said.
According to a contract document for the pilot programme, reported by Engadget, the AI system scans for several categories of suicidal behaviour, ranging from self-harm to attempts to commit suicide.
The government will use the data to assess which areas of Canada “might see an increase in suicidal behaviour”, the website says.
This can then be used to “make sure more mental health resources are in the right places when needed”, the site adds.
It’s not the first time AI has been used to identify and prevent suicidal behaviour.
In October, the journal Nature Human Behaviour reported that a team of US researchers had developed an AI programme that could recognise suicidal thoughts by analysing MRI brain scans.
The system was able to identify suicidal thoughts with a reported accuracy of 91%. However, the study’s sample of just 34 participants was criticised by Wired as too small to accurately reflect the system’s potential for the “broader population”.