These Algorithms Look at X-Rays—and Somehow Detect Your Race
By Tom Simonite, Wired | 08.05.2021
A study raises new concerns that AI will exacerbate disparities in health care. One issue? The study’s authors aren’t sure what cues the algorithms use.
Millions of dollars are being spent to develop artificial intelligence software that reads X-rays and other medical scans in hopes it can spot things doctors look for but sometimes miss, such as lung cancers. A new study reports that these algorithms can also see something doctors don’t look for on such scans: a patient’s race.
The study authors and other medical AI experts say the results make it more crucial than ever to check that health algorithms perform fairly on people with different racial identities. Complicating that task: The authors themselves aren’t sure what cues the algorithms they created use to predict a person’s race.
Evidence that algorithms can read race from a person’s medical scans emerged from tests on five types of imagery used in radiology research, including chest and hand x-rays and mammograms. The images included patients who identified as Black, white, and Asian. For each type of scan, the researchers trained algorithms using images labeled with a patient’s self-reported race. Then they challenged the algorithms to predict...
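The workflow the researchers describe is standard supervised classification: train a model on scans labeled with self-reported race, then ask it to predict the label for scans it has not seen. The sketch below illustrates that pattern only, it is not the study’s code, and the data, feature dimensions, and nearest-centroid classifier are all stand-in assumptions chosen to keep the example self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data (hypothetical): 300 "scans" flattened to 16-dimensional
# feature vectors, each labeled with one of 3 self-reported categories.
X = rng.normal(size=(300, 16))
y = rng.integers(0, 3, size=300)

# Hold out the last 100 examples for evaluation, as the study does with
# its test set (split scheme here is illustrative, not the paper's).
X_train, y_train = X[:200], y[:200]
X_test, y_test = X[200:], y[200:]

# "Train": compute one centroid per label class from the labeled scans.
centroids = np.stack([X_train[y_train == c].mean(axis=0) for c in range(3)])

# "Challenge": predict the label of each held-out scan by nearest centroid.
dists = np.linalg.norm(X_test[:, None, :] - centroids[None, :, :], axis=2)
preds = dists.argmin(axis=1)
```

A real radiology model would use a deep convolutional network on pixel data rather than a nearest-centroid rule, but the train-on-labels, predict-on-held-out-scans loop is the same.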