The Bill & Melinda Gates Foundation has long been criticised for championing the trend of socially reductive, ‘magic bullet’ technical ‘solutions’ to the complex, historically shaped, politically conflicted problems at the root of global health inequities.1–5 Its August 9th announcement of a new US$5 million, 48-project funding push6 for ‘artificial intelligence (AI) large language models (LLM) in low-income and middle-income countries to improve the livelihood and well-being of communities globally’ is set to continue this hegemonic global health trend. And however many problems ‘magic bullets’ may solve, they are still bullets, capable of wounding and causing harm.
There are at least three reasons to believe that the unfettered introduction of these tools into already fragile and fragmented healthcare delivery systems risks doing far more harm than good.
We are not Luddites. New tools of technology, biomedicine, scientific knowledge and population care have often made life better and safer for those with access to, and control over, their use.7 LLMs and AI, however, will not be so equity-advancing, despite the Gates Foundation’s overheated...