On Meta-Research and the STAP Fiasco
Posted by Pete Shanks on July 7th, 2014
[Image: a partially disputed figure from the STAP paper]
On July 2nd, Nature announced the retraction of the two high-profile papers (1, 2) published in January that described what came to be known as STAP cells, as well as a related commentary. The Editorial announcing the retractions describes the underlying process:
Between them, the two papers seemed to demonstrate that a physical perturbation could do what had previously been achieved only by genetic manipulation: transform adult cells into pluripotent stem cells able to differentiate into almost any other cell type. The acronym STAP (stimulus-triggered acquisition of pluripotency) became instantly famous.
The errors, some of them identified during the institutional misconduct investigation (pdfs, 1 & 2) and others by the authors of the papers themselves (the retractions are combined), include several misrepresentations of figures, sloppy handling of data that may have been deliberate, reuse of material from an earlier thesis that used a different process, switched samples, and plagiarism (which might have been due to a simple omission of citation). In the court of public opinion, however, the crucial fact is that no one has been able to duplicate the results.
The official retractions are linked from the papers (it is Nature's policy to annotate rather than delete retracted material). Three articles by David Cyranoski (1, 2, 3) in Nature's News section, which is editorially independent, provide more details, and Paul Knoepfler has compiled a linked timeline covering the five-month controversy. (See also Science, The New York Times, Time, etc.) Knoepfler also obtained answers to half a dozen questions he posed to the journal, which provide some more detail about the process that, clearly, failed.
The two principal scientists, however, maintain that their work is fundamentally sound. From the retractions (signed by the papers' eight and eleven co-authors, respectively):
We apologize for the mistakes included in the Article and Letter. These multiple errors impair the credibility of the study as a whole and we are unable to say without doubt whether the STAP-SC phenomenon is real. Ongoing studies are investigating this phenomenon afresh, but given the extensive nature of the errors currently found, we consider it appropriate to retract both papers.
Charles Vacanti, who first had the idea, issued a statement saying that "he still believed the concept would be proven right." Haruko Obokata, who developed it, "will attempt to recreate the widely-trumpeted findings" under video surveillance over the next five months.
Other scientists don't give them much of a chance, according to the Boston Globe. Rudolf Jaenisch considers the question "finally settled" and criticizes Harvard for its "deafening" official silence. Yoshiki Sasai, one of the co-authors of both papers, says that "it has become increasingly difficult to call the STAP phenomenon even a promising hypothesis." Harvard's Leonard Zon is even blunter:
"I don't think there's any shred of hope for these cells."
This is a rapid about-face for Nature, which has taken deserved criticism for missing some of the issues that post-publication review revealed. Most such retractions used to take much longer: the Science retraction of Hwang Woo-suk's two human stem-cell papers took almost two years from the publication of the first, though the second was only eight months old. Social media made a difference, as Knoepfler has noted, and we can expect similarly speedy scrutiny in the future.
For it's almost certain that something like this will happen again. Nature and Science, and presumably other journals, are tightening their review processes, but it remains true that, as a Washington Post survey of mishaps and worse put it:
Science is open to error, misinterpretation and even fraud
Indeed, almost a decade ago John Ioannidis published what became the most-accessed article in the history of the Public Library of Science, with over a million hits:
Why Most Published Research Findings Are False
Now, he hopes to do something about that. In April, Ioannidis and Steven Goodman launched a new center at Stanford:
Scholars at the Meta-Research Innovation Center, or METRICS, will focus on conducting research about research.
The effort is timely, and Ioannidis seems to be both smart and appropriately cynical, according to The Economist:
Dr Ioannidis plans to run tests on the methods of meta-research itself, to make sure he and his colleagues do not fall foul of the very criticisms they make of others. "I don't want", he says, "to take for granted any type of meta-research is ideal and efficient and nice. I don't want to promise that we can change the world, although this is probably what everybody has to promise to get funded nowadays."
Previously on Biopolitical Times:
Advancing the Disability Rights Perspective on Bioethics Issues
Posted by Marcy Darnovsky on June 26th, 2014
The first-ever Disability Rights Leadership Institute on Bioethics (DRLIB for short) brought together about 65 U.S. disability rights advocates to discuss a wide range of issues. The Institute, held in April in Arlington, VA, included presentations – and vibrant discussions – on:
- Withholding Medical Treatment, Diane Coleman (Not Dead Yet)
- Assisted Suicide Laws, Marilyn Golden (Disability Rights Education & Defense Fund)
- Keynote address, Liz Carr (comedian and BBC drama series actor; UK Not Dead Yet activist)
- International Perspectives in Europe and Canada on Assisted Suicide and Euthanasia, Nic Steenhout (Vivre dans la Dignité) and Amy Hasbrouck (Toujours Vivant / Not Dead Yet Canada)
- Key Issues in Reproductive Technologies, Marcy Darnovsky (Center for Genetics and Society) and Silvia Yee (Disability Rights Education & Defense Fund)
- Wrongful Birth/Wrongful Life Torts, Samantha Crane (Autistic Self Advocacy Network)
The speakers’ presentations, PowerPoint slides, and recommended readings are now available online at the DRLIB website, and lead organizers Diane Coleman and Marilyn Golden have each written a description of and reflection on the event.
Previously on Biopolitical Times:
Quantified and Analyzed, Before the First Breath
Posted by Jessica Cussins on June 26th, 2014
[Image: Razib Khan with his baby boy]
Razib Khan, a PhD student at UC Davis studying the evolutionary genetics of cats, admits that he has “an obsession with genetics.” Two years ago, he sent 23andMe a genetic sample of his 2-month-old daughter so that she could be “easily slotted into the bigger genomic family photo album.”
At the time he predicted that “in the very near future, parents will be able to avail themselves of precise and accurate genomes of their future child in utero.” And just last month he declared, “The future is here, deal with it.”
So it is.
When Khan’s wife became pregnant with a boy, they didn’t waste any time. A biopsy of tissue taken from her placenta was sent off for testing for chromosomal abnormalities. The test showed that the boy was healthy, but Khan wanted to know everything.
After some difficulty, he obtained the original sample from the lab that had tested it and, using his own lab’s high-speed sequencing machine (usually reserved for plants and animals), sequenced the fetus’ entire genome. Using free online software, he was then able to identify 7,000 “genetic variants of interest.” Apparently there was nothing much to worry about: Khan reported, “It’s mostly pretty boring. So that is good.”
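To give a sense of what "free online software" does at this step, here is a toy sketch of variant filtering over a minimal VCF-like input. This is not Khan's actual pipeline; the annotation tag and thresholds are hypothetical, and real workflows use dedicated tools such as annotation servers rather than hand-rolled parsers.

```python
# Toy sketch of variant filtering, NOT an actual clinical pipeline.
# The "NONSYN" annotation tag used below is a hypothetical stand-in
# for the functional annotations real tools attach to variants.

def parse_vcf_line(line: str) -> dict:
    """Parse the fixed columns of one VCF data line into a dict."""
    chrom, pos, vid, ref, alt, qual, flt, info = line.rstrip("\n").split("\t")[:8]
    return {"chrom": chrom, "pos": int(pos), "ref": ref, "alt": alt,
            "filter": flt, "info": info}

def variants_of_interest(lines):
    """Keep PASS variants whose INFO field flags a non-synonymous change."""
    out = []
    for line in lines:
        if line.startswith("#"):   # skip VCF header lines
            continue
        v = parse_vcf_line(line)
        if v["filter"] == "PASS" and "NONSYN" in v["info"]:
            out.append(v)
    return out

demo = [
    "##fileformat=VCFv4.2",
    "#CHROM\tPOS\tID\tREF\tALT\tQUAL\tFILTER\tINFO",
    "1\t12345\t.\tA\tG\t60\tPASS\tNONSYN",
    "1\t22222\t.\tC\tT\t10\tLowQual\tNONSYN",
    "2\t33333\t.\tG\tA\t55\tPASS\tSYN",
]
hits = variants_of_interest(demo)
print(len(hits))  # 1
```

A whole-genome run applies filters like these to millions of raw calls, which is how a list as short as 7,000 "variants of interest" emerges.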
And so his son was born in California earlier this month, becoming the first known healthy baby in the US to have had his entire genome sequenced before birth.
What will it be like to grow up with 7,000 “genetic variants of interest”? At what age will he be told about which? Will he be treated differently because of any of them? Or encouraged to develop specific skills or behaviors? The limited guidelines available for dealing with the genetic testing of children have already been flouted.
Although Khan told MIT Technology Review that sequencing his son “was more cool than practical,” he also “did it to show where technology is headed.” Is this really what most parents will want?
Khan is blunt about the rationale for extensive sequencing in utero – parents will still have a choice about whether to carry out the pregnancy. And he freely acknowledges that this technology throws us headlong into “the second age of eugenics.” But he believes that “the ability to select for quantitative traits” is “a major goal.” And though he regrets that perfection may still be far off, he notes that whole genome sequencing allows one to “select for mutational load” and exclaims,
The marketing pitch for this writes itself: imagine you, but bright of mind, and beautiful of face!
When a technology is so directly imbued with the values that motivated recent human atrocities, what are the avenues for responsible integration? The question of which lives are worthy of existence is one that, in my mind, should never be uncontroversial.
Kevin Mitchell, an Associate Professor in Developmental Neurobiology in Ireland, discussed some of these issues in a blog post last year and concluded,
In the meantime, before we go proposing scientifically impractical and morally questionable extreme measures, we have a proven and powerful tool to make people smarter: education.
But Khan isn’t buying it. His newest blog post? Reading to Newborns Is Probably Useless.
Previously on Biopolitical Times:
Implications of Genetic Diversity in Mexico
Posted by Pete Shanks on June 25th, 2014
The category Latino is a valid cultural artifact, and often self-identified. But it's not really a race in any modern sense of the term, and the genetic evidence surely shows that it is far too broad a grouping to be scientifically appropriate without serious qualification. Yet it is used, even in some current peer-reviewed papers.
One that does not use the term is an article published in Science this month on the genetics of Mexico. The country's population is large and ethnically, linguistically, geographically, economically and culturally diverse. It is also genetically complex, and this article by a large and distinguished team of scientists provides new details. It also suggests some important implications for genomic research and likely for personalized medicine in general:
The genetics of Mexico recapitulates Native American substructure and affects biomedical traits
The study included 511 Native Mexican individuals from 20 indigenous groups, and 500 mestizo (mixed-race) individuals from ten states; nearly a million SNPs were analyzed for each. The variation was striking. From the abstract:
Some groups were as differentiated as Europeans are from East Asians. Pre-Columbian genetic substructure is recapitulated in the indigenous ancestry of admixed mestizo individuals across the country. Furthermore, two independently phenotyped cohorts of Mexicans and Mexican Americans showed a significant association between subcontinental ancestry and lung function.
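Differentiation claims like "as differentiated as Europeans are from East Asians" are typically quantified with a fixation index (Fst) computed from allele frequencies. As a minimal sketch, here is Nei's Fst for a single biallelic SNP in two populations; this illustrates the standard statistic, not necessarily the exact estimator the Science paper used.

```python
# Minimal sketch: Nei's Fst for one biallelic SNP in two populations.
# Illustrative only; genome-wide estimates average over many SNPs and
# use sample-size-corrected estimators.

def fst_two_pops(p1: float, p2: float) -> float:
    """Nei's Fst for allele frequencies p1, p2 in two equally
    weighted populations. 0 = identical, 1 = fixed differences."""
    p_bar = (p1 + p2) / 2
    h_total = 2 * p_bar * (1 - p_bar)                       # pooled heterozygosity
    h_within = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2  # mean within-population
    return 0.0 if h_total == 0 else (h_total - h_within) / h_total

print(fst_two_pops(0.5, 0.5))  # 0.0 (no differentiation)
print(fst_two_pops(0.1, 0.9))  # strong differentiation
```

Averaged across nearly a million SNPs per individual, as in this study, such statistics let the authors compare indigenous Mexican groups with each other and with continental reference populations.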
The first implication for research is clearly that a lot more samples are needed. If this much variation was hidden in Mexico, how much may there be in pockets of Europe and Asia, let alone Africa? Somehow "the" human genome seems more elusive than ever.
That, in turn, carries implications for personalized medicine, as well as for the apparently hard-to-kill concept of genetic race. Consider another paper published this month (there is a bit of overlap among the authors) in JAMA:
Association of a Low-Frequency Variant in HNF1A With Type 2 Diabetes in a Latino Population
This is another substantial study, and it did tease out a rare allele associated with an increased risk for diabetes. However, the allele "was observed in 0.36% of participants without type 2 diabetes and 2.1% of participants with it." In other words, even with a roughly five-fold increase in risk, about 98% of patients do not carry the variant and remain unaccounted for. Indeed, this may be an example of "geneticizing disease," as Michael Montoya discussed in his 2011 book Making the Mexican Diabetic.
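The two percentages quoted from the JAMA paper are enough to check this arithmetic directly. A quick calculation (using only those two figures; the paper's own odds ratio may differ slightly from this simple frequency ratio):

```python
# Back-of-the-envelope check of the numbers quoted from the JAMA paper:
# the variant was seen in 0.36% of participants without type 2 diabetes
# and in 2.1% of participants with it.
freq_controls = 0.0036  # carrier frequency, participants without T2D
freq_cases = 0.021      # carrier frequency, participants with T2D

enrichment = freq_cases / freq_controls  # simple frequency ratio
unexplained = 1 - freq_cases             # share of cases NOT carrying it

print(round(enrichment, 1))  # 5.8 — roughly five- to six-fold enrichment
print(f"{unexplained:.1%}")  # 97.9% — nearly all cases lack the variant
```

So the headline association is real, but it bears on only a small fraction of people with the disease.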
It's worth noting, as co-author Karol Estrada points out at the Genomes Unzipped blog, that
The variant … was not found in publicly available genetic databases, including 1000 Genomes, Exome Sequencing Project, and dbSNP. Therefore, we would have missed this variant even if we had used the latest genotyping array technology and imputed (i.e., inferred the presence of) variants that were not directly genotyped.
That's yet another argument for more extensive genomic research, in particular (as Estrada stresses) among non-European populations. But the ellipsis hides this apparently positive statement:
The variant was found only in people who live in Mexico or the southern U.S. and identify as Latino.
Culturally, they probably do so identify. But is it really appropriate to turn that sociopolitical category into what seems to be used as a genetic category? Even culturally, the variant may be associated with a sub-population (in which it may perhaps be significantly more common); the article suggests that all 52 carriers have "at least 1 segment of inferred Native American ancestry." It seems that the use of Latino is sloppy, at best.
Still, there may be thousands of people who have that allele, and they may have particular treatment needs. That would certainly be an appropriate use of genetic analysis in personalized medicine. But the practical difficulties remain substantial. Just for a start, and setting aside privacy and related issues: who do you test, how do you test them, who pays? Will insurance companies cover a 1-in-50 shot? And what about those with the allele but no symptoms?
Eventually, genomic analysis is likely to make an important contribution to routine medical treatment. But clearly there is quite some distance to go.
And we would all be wise to avoid the tendentious use of imprecise terms.