
I’m a medical oncologist. Here’s why A.I. isn’t going to cure cancer.

As a cancer physician, I obtain an ever-increasing amount of data on my patients, along with an ever-expanding menu of cancer therapies. This is, as the saying goes, a good problem to have. But the data management oncologists must do after hours (because there isn’t enough time in the clinic day) to keep up with the deluge contributes to burnout.

Many things keep oncologists up at night, but chief among them is the constant second-guessing about our patients: “Could I have done something more, or differently? Did I miss something?”

Some think artificial intelligence could help solve both of these problems. Many forward-thinking leaders envision a time when an A.I. tool will analyze a precision medicine report for every patient. This would, in theory, not only improve patient outcomes but also decrease physician burnout rates.

If A.I. could, indeed, make us better physicians, I, for one, would welcome it.

But as a rural U.S. community oncologist for the past decade, I see daily the challenges of an under-resourced system, as well as the disparities in care that can occur in our own country between urban and rural regions. We already know that the unequal application of precision medicine may widen cancer disparities in under-represented groups.

Combine this with insurance companies increasingly putting profits over patients, and with the strain the past three years of the COVID-19 pandemic have placed on the system, and I worry about how A.I. might be misused—not for the best outcome for the patient, but for the best result for Big Medicine.

This is, sadly, already coming to fruition, as detailed in a March 13, 2023, STAT News article: “Denied by A.I.: How Medicare Advantage plans use algorithms to cut off care for seniors in need.”

In this chilling exposé, the reporters detail the results of their investigation into A.I.-driven denials of medical care coverage to seniors on Medicare Advantage. According to the article, “Behind the scenes, insurers are using unregulated predictive algorithms, under the guise of scientific rigor, to pinpoint the precise moment when they can plausibly cut off payment for an older patient’s treatment…often delaying treatment of seriously ill patients who are neither aware of the algorithms, nor able to question their calculations.” [emphasis added]

It brings to mind the hype of a decade ago, when there was much excitement about IBM’s Watson (a machine-learning A.I.) and its potential to help oncologists sort data to better define treatments. But, as detailed in a January 2022 article in Slate, IBM’s Watson failed to deliver and was essentially “sold for scraps.”

A 2018 article from the WSJ explains how Watson didn’t add to the care plans oncologists had already recommended and was, in some cases, inaccurate. “Watson can be tripped up by a lack of data in rare or recurring cancers, and treatments are evolving faster than Watson’s human trainers can update the system…No published research shows Watson improving patient outcomes.” [emphasis added]

Similarly, no published research has shown that the Medicare Advantage algorithms improve patient outcomes. But insurance companies are using them anyway, and as the STAT News article explains, while A.I. models designed to detect disease or suggest treatment are evaluated by the Food and Drug Administration, “tools used by insurers in deciding whether those treatments should be paid for are not subjected to the same scrutiny, even though they also influence the care of the nation’s sickest patients.” [emphasis added]

If no research shows actual benefits to patient care, one might logically ask why big tech companies remain so attracted to applying A.I. in healthcare.

In the Slate article, technology correspondent Casey Ross explains:

“It’s one of the biggest parts of our economy. It’s a three trillion business that has legacy technology infrastructure that should be embarrassing. Tech companies are drawn to audacious challenges like this, and ones where they can make—if they’re successful—a ton of money.” [emphasis added]

As a clinical oncologist, I spend hours on the phone with insurance companies that, sight unseen, deny cancer treatments to my patients. I have never failed to overturn a denial. I emphasize this because it demonstrates how the insurance companies’ iron-fisted “prior authorization” requirements are not—and have never been—about proper oversight of care, but about denying care to avoid paying for it.

Perhaps the most disturbing quote from the recent STAT News article on the Medicare Advantage A.I.-based denials is this:

“In a six-page report, the algorithm boils down patients, and their unknowable journey through health care, into a tidy series of numbers and graphs.”

Oncologists live, breathe, and work in that unknowable space, journeying with our patients through often arduous attempts at life-saving treatments. Our patients are so much more than their data. They’re human beings, each a precious life worth saving. Trying to reduce a human life to an algorithm is not a worthy—or beneficial—goal.

Originally published April 4, 2023, on KevinMD.com
