How current-day applications of AI inspired my book

My debut novel, The Algorithm Will See You Now, is a speculative medical thriller set in near-future Seattle, where the implementation of artificial intelligence algorithms to guide—and limit—healthcare turns out to be, in the end, subject to its human creators’ flaws.

The book’s publication on March 2, 2023, coincided with the explosion of AI in the news. Did I have a crystal ball to predict that, after years of rejections by industry professionals who said the concept wasn’t “sellable,” I would finally find a publisher in the year everyone’s talking about AI? Lol, no.

Here’s how it evolved:

I started the first draft of the book almost seven years ago, after the failure of IBM’s Watson Health. In my day job, I’m a hematologist/oncologist (a specialist in blood and cancer medicine). During the 2010s, there was a lot of excitement about IBM’s Watson (a machine-learning AI) playing a role in helping oncologists sort data and test results to guide treatment for our patients. But in the mid-2010s, it failed spectacularly.

Shortly after that, I had the idea for the novel when I read about some of the mistakes AI tools were making (like the misclassification of photos on Google). I immediately thought of the failed Watson Health project. My concern, which became my book’s premise, was: what if we did one day achieve the goal of an advanced medical AI, only for it to be flawed at a very deep level? Mix that with the increasing corporatization of healthcare in the U.S., and my story was born. It’s a classic trope of the science fiction thriller, I suppose: how much of the fault lies in the technology, and how much of the responsibility lies with humanity?

In the past few months alone, two major stories have exposed how Big Healthcare uses AI to put profits over patients.

First, the March 13, 2023, STAT News article: Denied by A.I.: How Medicare Advantage plans use algorithms to cut off care for seniors in need.

In this chilling exposé, the reporters detail the results of their investigation into A.I.-driven denials of medical care coverage to seniors on Medicare Advantage. According to the article, “Behind the scenes, insurers are using unregulated predictive algorithms, under the guise of scientific rigor, to pinpoint the precise moment when they can plausibly cut off payment for an older patient’s treatment…often delaying treatment of seriously ill patients who are neither aware of the algorithms, nor able to question their calculations.” (Emphasis added).

Second was the March 25, 2023, ProPublica article: How Cigna Saves Millions by Having Its Doctors Reject Claims Without Reading Them. This investigative report details how a major insurance company, Cigna, instructed its employed physicians to deny coverage of treatments and services without ever reviewing the medical record.

According to the article, “Medical directors do not see any patient records or put their medical judgment to use, said former company employees familiar with the system. Instead, a computer does the work. A Cigna algorithm flags mismatches between diagnoses and what the company considers acceptable tests and procedures for those ailments. Company doctors then sign off on the denials in batches, according to interviews with former employees who spoke on condition of anonymity.” (Emphasis added).

I have a medical degree, and before that, a B.S. in biochemistry. I’m not a computer scientist. But as an oncologist of twenty years, I have a keen sense of empathy, observation, and medical intuition. I’ve spent ten of those twenty years in rural practice in the U.S., where, too often, patients don’t make it to me until it’s too late. Lack of insurance and under-insurance play a prominent role in this.

I became a writer out of a driving need to explore the fundamental question of whether healthcare is a human right. (Spoiler: I believe it is).

So why the heck are we giving corporate medicine and insurance companies the power to force physicians to practice in a way that denies some people that right?

That’s the burning question we must ask—and answer—before adding AI into our already broken system.

Published in Books, healthcare and tech