Does big tech monetise adolescent pain?
Father of Molly Russell, 14, said she had been sucked into a ‘vortex’ of ever darker material

“It’s the rueful half-smile that breaks my heart,” said Judith Woods in The Daily Telegraph. Ambushed by a camera-wielding parent outside their home, a younger child would have beamed or scowled.
But Molly Russell offers that “oh-go-on-then-if-you-must” expression that “every self-aware teen gives her soppy dad”. It is an “intimation of adulthood” – the willingness to make “brief accommodations” to spare other people’s feelings, and get on with the day.
But Molly will not reach adulthood. In November 2017, the 14-year-old was found dead in her bedroom, having spent months viewing online content linked to depression, anxiety, self-harm and suicide. Some of it romanticised self-harm. Much of it was so graphic, bleak and violent that a psychiatrist told the inquest into Molly’s death that he’d been “unable to sleep well” for weeks after seeing it.
A ‘vortex’ of dark material
Not only had a child been able to access this distressing material, but many of the posts had been served to Molly, unasked, by Instagram's and Pinterest's "recommendation engines", said John Naughton in The Observer. Thus the teenager – described by her father as a "positive" young woman, who seemed to be having only "normal teenage mood swings" – had been sucked into a "vortex" of ever darker material.
At the inquest, an executive from Meta, Instagram’s owner, denied that the 2,100 depression, self-harm or suicide-linked posts that Molly had liked or saved in the six months before her death were unsafe for children, and said that it was good for them to be able to share their feelings. But in what may be a world first, north London’s senior coroner ruled last week that Molly had died from an act of self-harm while suffering from depression – and “the negative effects of online content”.
Potential impact of Online Safety Bill
There are some who hope that the Government’s Online Safety Bill will make the internet safer for our children, said Hugo Rifkind in The Times. But this legislation keeps being delayed because trying to ban harmful content leads ministers into a “thicket of debate about free speech and censorship”. It’s complex; it’s also a red herring.
Teenagers have always sought out dark material, and found it. The difference now is the manner in which it is delivered to them. To read one sad post may be cathartic – it’s being bombarded with them that is dangerous. This is what we should tackle: not the content itself, but Big Tech’s “relentless monetising” of people’s absorption in content about self-harm, or “anything else”. “It’s a system, a design, a business model.” And the tech giants can fix it, if we make them.