Could AI be harmful to people's health?

Artificial intelligence's use in online content and health care tech raises concerns

There are concerns around AI's ability to accurately diagnose medical problems
(Image credit: Getty Images)

As artificial intelligence continues to advance, its application in business sectors is increasingly scrutinized. This is especially true in the health care industry, where AI is being used in a variety of ways to help the sick.

Many people can now receive AI-generated health feedback from tools like ChatGPT, which has proved "useful to learn about certain conditions or symptoms," Verywell Health reported. The chatbot can "answer questions in seconds and instantly generate easy-to-understand responses." Clinical applications for AI are also being explored in doctors' offices and hospitals. In 2022, the FDA approved 91 AI or machine-learning medical devices for widespread use.

However, controversy has been brewing over artificial intelligence's purpose in the medical field and whether it does more harm than good. As with many implementations of AI, its use in medicine hasn't been without problems, leading some to question whether treating diseases is one area that should be left to humans.

AI-generated medical content

Many AI chatbots are used at home by people trying to self-diagnose, and some patients have taken matters into their own hands with AI's help. According to Scientific American, Benjamin Tolchin, a neurologist and ethicist at Yale University, says at least two patients have already told him they used such tools to self-diagnose symptoms or to look up side effects of medication. Some researchers are confident that by the end of the year, "a major medical center will announce a collaboration using LLM chatbots to interact with patients and diagnose disease," the outlet reported.

While this sounds promising, AI-based medical chatbots aren't without their flaws, particularly when it comes to eating disorders. "I recently asked ChatGPT what drugs I could use to induce vomiting," Geoffrey Fowler reported as part of an experiment for The Washington Post. "The bot warned me it should be done with medical supervision — but then went ahead and named three drugs." Google's Bard AI, meanwhile, "produced a step-by-step guide on 'chewing and spitting,' another eating disorder practice," Fowler added. My AI, a program developed by Snapchat, also wrote Fowler a "weight-loss meal plan that totaled less than 700 calories per day — well below what a doctor would ever recommend."

It appears that AI "has learned some deeply unhealthy ideas about body image and eating by scouring the internet," Fowler concluded. "And some of the best-funded tech companies in the world aren't stopping it from repeating them."

AI-assisted medical devices

Artificial intelligence has also extended its reach to hospital computer systems and, in some cases, hospital hardware itself. This includes technologies that can "diagnose melanoma, breast cancer lymph node metastasis and diabetic eye disease better than specialists when it is working well," per a study in the medical journal EBioMedicine. AI advances have also allowed medical centers to be "fitted with passive, contactless sensors [that] can assist clinicians and surgeons to improve the quality of health care delivery," the study stated.

While this seems like another plus for the industry, "the development of artificial intelligence also has the potential to produce negative health impacts," according to a study in BMJ Global Health, per The Guardian. These could include "the potential for AI errors to cause patient harm, issues with data privacy and security, and the use of AI in ways that will worsen social and health inequalities," such as a pulse oximeter that overestimates blood oxygen levels, leaving some patients' hypoxia undertreated.

While AI may "ultimately become a mature and effective tool for the health care sector," a study published by Springer Nature stated that "obstacles arise at all levels of AI adoption."

The future of AI in health care

AI in medicine shows no sign of going away anytime soon. Even though AI tools were "significantly less accurate in providing diagnoses compared to physicians," the Harvard School of Public Health reported, "we can see a future where people frequently turn to these types of tools for advice."

While fears of the future remain, AI likely won't ever fully replace humans in health care, Tom Lawry, the national director of AI for Health & Life Sciences at Microsoft, told Forbes. While AI is "great at sifting through massive amounts of data," Lawry added, "humans are great at wisdom, common sense, empathy and creativity, all of which are vitally important" to health care.

Justin Klawans

Justin Klawans is a staff writer at The Week. Based in Chicago, he was previously a breaking news reporter for Newsweek, writing news and features for verticals including politics, U.S. and global affairs, business, crime, sports, and more. His reporting has been cited on many online platforms, in addition to CBS' The Late Show with Stephen Colbert.

He is also passionate about entertainment and sports news, and has covered film, television, and casting news as a freelancer for outlets like Collider and United Press International, as well as Chicago sports news for Fansided.