The notoriety of the Tuskegee syphilis study is unparalleled in the field of bioethics. Last week marked the 42nd anniversary of the horrific experiment’s termination, and many people took the opportunity to recall Tuskegee and examine its relevance to the treatment of human research subjects today.
Half a century ago, what the US Public Health Service did in Tuskegee was considered acceptable medical practice. Its researchers willingly endangered the lives of hundreds of African American men in rural Alabama, leading them to believe that they were being treated for “bad blood.” The men could have been treated for the syphilis they actually had: penicillin became the standard cure in the mid-1940s, while the study was still underway.
But in the name of improving scientific understanding of the disease, all relevant information and treatment were purposefully kept from them. They were unknowing participants in a 40-year-long medical study designed to observe the natural progression of untreated syphilis and the full extent of its toll on black bodies.
Wives and children of the men contracted the disease, and numerous people died, but it was not until there was a leak to the press in 1972 that the study finally came to an end.
According to Alexander Cockburn in Whiteout: The CIA, Drugs, and the Press, “the lead researchers remained unapologetic.” Dr. John Heller, the Director of the Public Health Service's Division of Venereal Diseases, said, “For the most part, doctors and civil servants simply did their job. Some merely followed orders, others worked for the glory of science.”
Anyone familiar with the twentieth century’s record of other medical experimentation horrors will recognize this sentiment. In many ways, Tuskegee is just one among many examples of how easy it is for good people to believe they are doing good science. It also demonstrates that it can take decades before those in power will see, or say, otherwise.
That day finally did come for the Tuskegee experiment, and one of its legacies is the Belmont Report, which lays out fundamental principles for safeguarding human research subjects. These include protecting the autonomy of all people and treating them with honesty and respect; following the philosophy of “do no harm” in order to minimize potential risks; and ensuring that procedures are non-exploitative.
Despite the history of grave abuses in medical research, the notion of objective science never quite seems to go away. Twitter users talked about the danger of this in a great discussion thread on the day of the Tuskegee anniversary:
“myths of objectivity and value-free science are not only false but also harmful”
“We inadvertently reinforce the erroneous idea that policies tht arent explicitly detrimental must not b harmful at all”
“Science in and of itself is not an inherently noble value or cause. Applying #bioethics allows what we do to be noble”
A common thread throughout the discussion was the need for more work to ensure that people of color and other vulnerable communities are not exploited in medical experiments. In other words, we need to hone our historical perspective, but we also need to open our eyes and see what is happening all around us, right now, even though we tend to think that “now we know better.”
But do we? A report by Carl Elliott published late last month details the extent to which the pharmaceutical industry routinely tests new drugs on people who are homeless or mentally ill. Companies are well aware that many people in these circumstances will tolerate a great deal for very little compensation. Elliott uncovered accounts of people starting to take addictive drugs just to qualify for a particular study; of drug study recruiters approaching residents right outside a homeless shelter; and of negligent care in an incident that proved fatal.
Elliott points out a troublesome trend that has taken place since Tuskegee:
Not long ago, such offers would have been considered unethical. Paying any volunteer was seen as problematic, even more so if the subjects were poor, uninsured, and compromised by illness. Payment, it was argued, might tempt vulnerable subjects to risk their health. As trials have moved into the private sector, this ethical calculus has changed.
In the 1970s, after a series of notorious research abuses, legislators pushed for a central federal agency with the power to protect human research subjects. The medical research establishment fought this idea, however, and when the National Research Act was passed in 1974 a very different alternative followed: a patchwork system of small ethics committees known as Institutional Review Boards. The boards were originally located in hospitals and medical schools, but clinical research has since moved into the private sector. Many are now for-profit companies that review studies in exchange for a fee.
With ethics now determined on a case-by-case basis, often in a setting rife with conflicts of interest, and utterly opaque because companies regard clinical trials as commercial secrets, is it any wonder that lack of trust is widespread, especially among vulnerable communities?
This is not a problem of ignorance.
On the contrary, how could anyone who is paying attention trust our current for-profit patchwork system to ensure that medical experiments are undertaken in an ethical and just way?