Science Corner: Healthcare for All

photo of physician measuring a patient's blood pressure

Blood pressure checks are standard care; deciding who to prioritize for more complex treatment is not straightforward. (Photo by rawpixel)

Amidst all the conversations about how best to pay for healthcare in the United States, we also need to discuss how to best treat patients for those dollars. As with so many areas of our lives, algorithms trained via machine learning are becoming a part of the treatment process. Machine learning techniques look for patterns in data, even to the point of finding patterns their programmers did not expect or know about. Sadly, one common pattern in many data sets is racial bias. Often when an algorithm is meant to replicate human decision making, it winds up replicating the biases of the humans as well. A recent study revealed exactly this sort of racial bias in an algorithm used to prioritize healthcare delivery: black patients were receiving less care than white patients with equivalent needs.

In this case, prior healthcare costs were used as a proxy for need. All else being equal, patients who have already incurred higher costs for the year are more likely to have more serious health needs today, possibly because of underlying chronic conditions or risk factors. On that basis, prior costs make sense to include in an assessment of which patients to refer for additional care or resources. But rarely is all else equal. In this case, the black patients and white patients differed in their prior healthcare utilization and treatments. This led to an overall trend where a black patient with a given condition had incurred less cost than a white patient with the same condition, even though their actual needs might have been the same. Whether because of differences in utilization, treatment, access to care, or other causes, prior cost turned out to be an unreliable measure of current risk or need.
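For readers who like to see the mechanism concretely, here is a minimal toy simulation of the proxy problem. This is purely illustrative and not the study's actual model: it assumes two groups with identical distributions of true health need, with one group incurring systematically lower costs for the same need (the numbers and the 0.8 cost gap are invented for the example). Ranking patients by the cost proxy then under-refers that group, even though their needs are the same.

```python
import random

random.seed(0)

def simulate(group, n=1000):
    """Generate toy patients with a true need and an observed cost.

    Both groups draw need from the same distribution; group "B"
    incurs lower cost for the same need (a hypothetical 20% gap).
    """
    patients = []
    for _ in range(n):
        need = random.gauss(50, 10)                  # true health need
        cost_factor = 1.0 if group == "A" else 0.8   # assumed cost gap
        cost = need * cost_factor + random.gauss(0, 5)
        patients.append({"group": group, "need": need, "cost": cost})
    return patients

patients = simulate("A") + simulate("B")

# Rank by the proxy (prior cost) and refer the top 20% for extra care.
patients.sort(key=lambda p: p["cost"], reverse=True)
referred = patients[: len(patients) // 5]

share_b = sum(p["group"] == "B" for p in referred) / len(referred)
print(f"Group B share of referrals: {share_b:.0%}")
# Well under the 50% that equal need would warrant.
```

The point of the sketch is that no one told the ranking to treat the groups differently; the disparity falls out of an apparently neutral proxy sitting on top of an unequal cost structure.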

From the sound of it, the company providing the algorithm has been fairly open to revising its software to be less biased, albeit while maintaining that there was no problem in the first place. Hopefully a genuine resolution will be achieved that brings more equity to healthcare delivery going forward.

Of course, this problem is not restricted to just healthcare algorithms. It has emerged in a variety of settings, including law enforcement. NPR recently covered biases in methods used to identify potential spies. (H/T to Tom Grosh for the link.) The article also has a good summary of the general problem.

And since the biased data training those algorithms comes from biased human decisions, we’ve got work to do beyond just improving our machine learning practices. For example, folks are making an effort to highlight the scientific contributions of a more representative array of scholars. I’m not sure how I missed it last year when she was inducted into the Air Force Space and Missile Pioneers Hall of Fame, but I came across the story of Gladys West recently. She developed computer algorithms for processing satellite data that became the foundation for GPS technology. This seemed like as good an opportunity as any to make sure you didn’t miss her story either.


You also don’t want to miss the opportunity to participate in a video conference with Bob Kaita, Dave Vosberg, Ciara Reyes-Ton and Hannah Eagleson tomorrow (10/31/19) at 12pm EDT. More details on how to participate here.


Andy Walsh

Andy has worn many hats in his life. He knows this is a dreadfully clichéd notion, but since it is also literally true he uses it anyway. Among his current metaphorical hats: husband of one wife, father of two elementary school students, reader of science fiction and science fact, enthusiast of contemporary symphonic music, and chief science officer. Previous metaphorical hats include: comp bio postdoc, molecular biology grad student, InterVarsity chapter president (that one came with a literal hat), music store clerk, house painter, and mosquito trapper. Among his more unique literal hats: British bobby, captain's hats (of varying levels of authenticity) of several specific vessels, a deerstalker from 221B Baker St, and a railroad engineer's cap. His monthly Science in Review is drawn from his weekly Science Corner posts -- Wednesdays, 8am (Eastern) on the Emerging Scholars Network Blog. His book Faith across the Multiverse is available from Hendrickson.
