
Algorithm Anarchist – week of February 9, 2023

A weekly roundup of AI algorithms in the news

Greetings readers! In my upcoming speculative thriller, The Algorithm Will See You Now, there’s a podcaster who dubs herself the “Algorithm Anarchist.” She’s trying to get the world to see the truth about Big Medicine conglomerate “PRIMA” — Prognostic Intelligent Medical Algorithms (but no spoilers).

So I thought it might be fun to start a weekly blogpost rounding up interesting articles on AI algorithms in the news in our current reality. What a fun timeline we’re in right now!


1. Nuns versus algorithms

From deadline.com: Mrs. Davis’ Drama Series Gets Release Date At Peacock; Betty Gilpin Is A Nun On A Mission In First Look Photos

From the website:

“Mrs. Davis is described as ‘an exploration of faith versus technology — an epic battle of biblical and binary proportions.’ Gilpin plays Simone, a nun who goes to battle against an all-powerful Artificial Intelligence known as ‘Mrs. Davis.’”

They pretty much had me at “nun versus algorithm,” but I can’t help noticing that the “evil” AI is coded as female (I’m bored of this trope already). It is interesting, though, that they use the now-becoming-old-fashioned title “Mrs.” for the AI (I mean, do Millennials and Gen Z even use this anymore?)


2. The Ghosts Behind AI

From The Lead, The Ghosts Behind AI is a dive into the “microworkers” behind the deep learning that fuels AI like the recently popular ChatGPT.

From the article:

“While most of us might have never met a microworker, an estimated 20 million people do this type of work, with the majority located in the Global South. Anthropologist Mary Gray and computational social scientist Siddarth Suri term this invisible and exploited labour “ghost work”, due to these workers operating behind closed doors. Microworkers complete short data tasks which are hosted in digital platforms, who sit between the contractor and the microworker. Contractors tend to be the tech companies we know and love – Facebook, Google, Amazon – and the microworkers are often invisibilised workers who depend on these tiny tasks to earn a living.”

An insightful read into the exploitation and marginalization of these microworkers at the hands of capitalism:

“The Big Tech empires that surround us put microworkers on the lowest rung, devaluing their labour and erasing their work. Popular representations of AI as some autonomous and omnipotent machine are inaccurate – artificial intelligence gets its intelligence from the labour of armies of exploited workers across the globe.”


3. Systemic bias and healthcare AI algorithms

From Wired: Health Care Bias Is Dangerous. But So Are ‘Fairness’ Algorithms

For those who want to dive a little deeper into how systemic and structural biases end up being reflected in healthcare algorithms. The authors work through a hypothetical example of how an algorithm could end up incorrectly predicting a lower risk of lung cancer in Black patients than in white patients:

“Our imaginary system, similar to real world examples, suffers from a performance gap between Black and white patients. Specifically, the system has lower recall for Black patients, meaning it routinely underestimates their risk of cancer and incorrectly classifies patients as ‘low risk’ who are actually at ‘high risk’ of developing lung cancer in the future.”
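The “lower recall” the authors describe is easy to make concrete: recall is just the fraction of truly high-risk patients the system actually flags. Here’s a toy calculation — every patient and number below is invented for illustration, not taken from the authors’ system:

```python
def recall(y_true, y_pred):
    """Recall = true positives / (true positives + false negatives)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return tp / (tp + fn) if (tp + fn) else 0.0

# 1 = actually develops lung cancer ("high risk"), 0 = does not.
# Both invented groups have four high-risk patients out of eight.
white_true = [1, 1, 1, 1, 0, 0, 0, 0]
white_pred = [1, 1, 1, 0, 0, 0, 0, 0]  # model misses 1 of 4 high-risk patients

black_true = [1, 1, 1, 1, 0, 0, 0, 0]
black_pred = [1, 1, 0, 0, 0, 0, 0, 0]  # model misses 2 of 4 high-risk patients

print(recall(white_true, white_pred))  # 0.75
print(recall(black_true, black_pred))  # 0.5
```

Equal true risk in both groups, but the lower recall for the second group means twice as many high-risk patients are waved through as “low risk” — exactly the performance gap the quote is describing.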

But this isn’t just hypothetical. As the authors point out at the beginning of the article, there are many areas where this kind of harm has already occurred:

“Yet racially biased medical devices, for example, caused delayed treatment for darker-skinned patients during the Covid-19 pandemic because pulse oximeters overestimated blood oxygen levels in minorities. Similarly, lung and skin cancer detection technologies are known to be less accurate for darker-skinned people, meaning they more frequently fail to flag cancers in patients, delaying access to life-saving care. Patient triage systems regularly underestimate the need for care in minority ethnic patients. One such system, for example, was shown to regularly underestimate the severity of illness in Black patients because it used health care costs as a proxy for illness while failing to account for unequal access to care, and thus unequal costs, across the population. The same bias can also be observed along gender lines. Female patients are disproportionately misdiagnosed for heart disease, and receive insufficient or incorrect treatment.”
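The cost-as-proxy failure mode the quote mentions can be sketched in a few lines. The patients, numbers, and scoring rule here are entirely made up for illustration — the point is only to show how equal sickness plus unequal access to care produces unequal “need” scores:

```python
def triage_score_by_cost(annual_cost_usd):
    # Naive proxy: assume higher past spending implies greater medical need.
    return annual_cost_usd / 10_000

# Two hypothetical patients with identical true illness burden,
# but different access to care and therefore different spending.
patient_a = {"true_illness_burden": 0.8, "annual_cost_usd": 8_000}  # good access
patient_b = {"true_illness_burden": 0.8, "annual_cost_usd": 3_000}  # poor access

score_a = triage_score_by_cost(patient_a["annual_cost_usd"])  # 0.8
score_b = triage_score_by_cost(patient_b["annual_cost_usd"])  # 0.3
```

Equally sick, but patient B spent less (because they could access less care), so the cost-based score deprioritizes them — the proxy bakes the existing inequity right into the algorithm.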


4. AI algorithms are sexist

From The Guardian, ‘There is no standard’: investigation finds AI algorithms objectify women’s bodies.

“Guardian exclusive: AI tools rate photos of women as more sexually suggestive than those of men, especially if nipples, pregnant bellies or exercise is involved. This story was produced in partnership with the Pulitzer Center’s AI Accountability Network.”

There’s been lots of talk of “shadowbanning” in…er…certain places. Well, to no surprise of any woman on the planet, gender bias in AI algorithms has led to shadowbanning of women’s bodies.

“Two Guardian journalists used the AI tools to analyze hundreds of photos of men and women in underwear, working out, using medical tests with partial nudity and found evidence that the AI tags photos of women in everyday situations as sexually suggestive. They also rate pictures of women as more “racy” or sexually suggestive than comparable pictures of men. As a result, the social media companies that leverage these or similar algorithms have suppressed the reach of countless images featuring women’s bodies, and hurt female-led businesses – further amplifying societal disparities.”

But apparently it’s surprising to male researchers, as per this quote from the article:

“‘This is just wild,’ said Leon Derczynski, a professor of computer science at the IT University of Copenhagen, who specializes in online harm. ‘Objectification of women seems deeply embedded in the system.'”

Quelle surprise. Cue biggest eyeroll so far today.


5. Yes, AI algorithms are already being used in your healthcare

From HealthExec, FDA has now cleared more than 500 healthcare AI algorithms.

Many early readers of my book suggested to me that setting it in 2035 was unrealistic. I refer them to this list of 500 algorithms already being used in your healthcare today.

“The first AI algorithm was cleared by the FDA in 1995, and fewer than 50 algorithms were approved over the next 18 years. However, the numbers have increased rapidly in the past decade, and more than half of algorithms on the U.S. market were cleared between 2019 and 2022––more than 300 apps in just four years. Last October, the FDA approved 178 new AI and machine learning (ML) systems. That number is expected to grow rapidly into the future, the FDA has said.”

Much of the use is in imaging, using pattern-recognition and pattern-matching to help radiologists identify critical lesions.

“Most of the radiology applications focus on specific subspecialty imaging, such as brain, breast, cardiac, lung and stroke imaging.”

But here’s the most interesting (and important) sentence in this article:

“AI algorithms do not require FDA clearance if they do not directly impact clinical care, and this type of use is also seeing rapid proliferation across healthcare.”

So what are some nonclinical uses of AI algorithms in healthcare that might be implemented?

From the article:

“Non-clinical AI is often used in IT systems to sort through the vast amounts of patient data to pull out relevant pieces of information for specific patient encounters…It’s also used to search for population health traits. This data can target some patients for additional care or resources, streamline diagnosis or workflow or aid in data mining or identifying patterns in data, which can be very difficult for humans to do easily.”

Hmmm, I’m all for algorithms helping to streamline my work as a physician, but see #3 above…


That’s all for this week! I’ll try to put out this roundup every Thursday. See you next week!


If you’re as fascinated as I am by all of this, check out my novel that explores AI algorithms in healthcare through a fictional lens, and in what thriller author Rob Hart (The Paradox Hotel) called a “clever, ripping thriller.”

Published in healthcare and tech