Google predicts when people will die ‘with 95% accuracy’
Huge benefits for the medical profession - but will patients be willing to give up even more data to the tech giant?

Google has claimed that its new artificial intelligence technology can predict with 95% accuracy when people will die.
In a paper published in npj Digital Medicine, a Nature partner journal, the company’s Medical Brain team detailed how it is using a new type of artificial intelligence algorithm to make predictions about the likelihood of death among patients in two separate hospitals.
For predicting patient mortality, Google’s Medical Brain was 95% accurate in the first hospital and 93% accurate in the second.
It works by analysing a patient’s data, such as their age, ethnicity and gender. This information is then combined with hospital information, such as prior diagnoses, current vital signs and any lab results, reports The Sun.
But according to Bloomberg, what impressed medical experts most “was Google’s ability to sift through data previously out of reach: notes buried in PDFs or scribbled on old charts. The neural net gobbled up all this unruly information then spat out predictions. And it did it far faster and more accurately than existing techniques.”
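The paper itself describes a deep-learning system trained on complete electronic health records, but the general idea of blending structured fields with free-text notes can be illustrated with a much simpler toy model. The sketch below is purely illustrative and is not Google’s method: the field names, the made-up data and the scikit-learn logistic-regression approach are all assumptions chosen for brevity.

```python
# Illustrative sketch only: a toy mortality-risk classifier that combines
# structured fields (age, vitals, lab values) with free-text clinical notes.
# This is NOT Google's model; all field names and data below are invented.

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical toy dataset: one row per hospital admission.
records = pd.DataFrame({
    "age": [67, 45, 81, 59],
    "heart_rate": [110, 72, 95, 88],
    "lactate": [4.2, 1.1, 2.8, 1.5],
    "notes": [
        "septic shock, intubated in ICU",
        "elective knee surgery, recovering well",
        "CHF exacerbation, worsening renal function",
        "pneumonia, responding to antibiotics",
    ],
})
died = [1, 0, 1, 0]  # outcome label (toy data)

# Scale the numeric columns, turn the notes into TF-IDF features,
# then fit a simple logistic-regression risk model on the combined matrix.
features = ColumnTransformer([
    ("numeric", StandardScaler(), ["age", "heart_rate", "lactate"]),
    ("text", TfidfVectorizer(), "notes"),
])
model = Pipeline([("features", features), ("classifier", LogisticRegression())])
model.fit(records, died)

# Predicted probability of in-hospital mortality for a new admission.
new_patient = pd.DataFrame({
    "age": [73], "heart_rate": [104], "lactate": [3.9],
    "notes": ["admitted from ED with sepsis, hypotensive"],
})
print(model.predict_proba(new_patient)[0, 1])
```

The point of the Bloomberg description quoted above is that Google’s system dispenses with much of this kind of manual feature selection, ingesting the “unruly” record, notes included, directly.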
It is not the first time Google has made inroads into the medical industry. Its DeepMind subsidiary, considered by some experts to lead the way in AI research, “courted controversy” in 2016 after it was revealed it had been given access to 1.6 million medical records of NHS patients at three hospitals, reports The Independent.
Yet despite concerns that the search giant could be given access to even more data, the latest findings suggest Google could have a potentially life-saving impact on its 1.17 billion users worldwide.
The Medical Brain team said: “These models outperformed traditional, clinically used predictive models in all cases. We believe that this approach can be used to create accurate and scalable predictions for a variety of clinical scenarios.”
“For medical facilities bogged down in bureaucratic red tape, Google’s software is a godsend,” says Vanity Fair. “Not only can it predict when a patient may die, but it can also estimate how long someone might stay in a hospital, or the chance they’ll be readmitted”.
But the magazine also offers a word of warning, saying that “for patients, giving a tech giant like Google access to sensitive medical information may have unintended consequences”.