Slate, August 14, 2012
As a young biologist, Elizabeth Iorns did what all young biologists do: She looked around for something interesting to investigate. Having earned a Ph.D. in cancer biology in 2007, she was intrigued by a paper that appeared the following year in Nature. Biologists at the University of California-Berkeley linked a gene called SATB1 to cancer. They found that it becomes unusually active in cancer cells and that switching it on in ordinary cells made them cancerous. The flip side proved true, too: Shutting down SATB1 in cancer cells returned them to normal. The results raised the exciting possibility that SATB1 could open a path to a cure for cancer. So Iorns decided to build on the research.
There was just one problem. As her first step, Iorns tried to replicate the original study. She couldn’t. Boosting SATB1 didn’t make cells cancerous, and shutting it down didn’t make the cancer cells normal again.
For some years now, scientists have gotten increasingly worried about replication failures. In one recent example, NASA made a headline-grabbing announcement in 2010 that scientists had found bacteria that could live on arsenic—a finding that would require biology textbooks to be rewritten. At the time, many experts condemned the paper as a poor piece of science that shouldn’t have been published. This July, two teams of scientists reported that they couldn’t replicate the results.
Nobody was harmed by believing that a species of bacteria in a California lake could feed on arsenic. But there are lives on the line when scientists like Iorns can’t replicate a medical study. Nor is Iorns’ experience a fluke. C. Glenn Begley, who spent a decade in charge of global cancer research at the biotech giant Amgen, recently dispatched 100 Amgen scientists to replicate 53 landmark experiments in cancer—the kind of experiments that lead pharmaceutical companies to sink millions of dollars into turning the results into a drug. In March, Begley published the results: The Amgen scientists failed to replicate 47 of the 53.
Outright fraud probably accounts for a small fraction of such failures. In other cases, scientists may unconsciously ignore their own negative evidence and focus on the findings that provide a positive result. They may set up their experiments poorly. They may have gotten positive results thanks simply to chance.
There’s nothing wrong with being wrong in science. Science is supposed to move forward as scientists test out one another’s ideas and results. But 21st-century science struggles to live up to this ideal. Scientific journals prize flashy, original papers (in part because journalists like me write about them). A disappointing follow-up simply doesn’t have the same cachet.
After her own rough experience with replication, Iorns went on to become an assistant professor at the University of Miami. Last year she also became an entrepreneur, starting up a firm called Science Exchange that brings together scientists with companies that can perform the services they need—everything from sequencing DNA to producing a genetically engineered mouse. And today she’s using Science Exchange to launch a service called the Reproducibility Initiative. If it works, it could be a strong medicine for what ails science these days.
Here’s how it is supposed to work. Let’s say you have found a drug that shrinks tumors. You write up your results, which are sexy enough to get into Nature or some other big-name journal. You also send the Reproducibility Initiative the details of your experiment and request that someone reproduce it. A board of advisers matches you up with a company that has the experience and technology to do the job. You pay for the work—Iorns estimates the bill for replication will be about 10 percent of the original research costs—and the company reports back whether it got the same results.
Why would you do this? For one thing, you’ll get a second paper out of the experience, and scientists are judged in part by the number of papers on their CVs. Replication studies are often hard to publish: Iorns had to send her SATB1 paper to a number of journals before getting it accepted, even though it showed that pursuing SATB1 for a cancer cure would be a waste of time. The journal PLoS ONE has agreed to publish any study that comes out of the Reproducibility Initiative.
A number of other journals have also agreed to add a badge to all papers that have been replicated through the Reproducibility Initiative. Think of it as a scientific Good Housekeeping seal of approval. It shows that you care enough about the science—and are confident enough in your own research—to have someone else see if it holds up.
If the project takes off, funding agencies might start requiring such validation tests. That way they’d avoid wasting grant money following up on findings that don’t withstand an independent check. The badge could also become important for commercial development of scientific research. If you want to license your cancer research to a pharmaceutical company, you may have to show your badge as evidence that the investment won’t evaporate.
Iorns has gotten some big names on board as advisers, such as John Ioannidis of Stanford University, author of the game-changing 2005 paper, “Why Most Published Research Findings Are False.” It will be fascinating to see how Iorns’ team fares. They mostly come from the world of biomedical research, and it’s easiest to see how their strategy could work in that community. Iorns thinks that her strategy could work outside that world as well. The arsenic life researchers could have hired an outside firm to culture the bacteria and sequence its DNA, for example.
The Reproducibility Initiative will probably fare best when the science involved can be easily duplicated with standard pieces of technology. The more cutting-edge the research, the fewer people will be able to replicate it. In June, Jay Shendure of the University of Washington and his colleagues sequenced the entire genome of an 18-week fetus based only on blood from the mother and saliva from the father. It’s a tour de force of DNA isolation and computer analysis—one that only a few labs in the world today could manage.
Other scientific fields could use replication studies, too. Experimental psychology—which grabs lots of headlines about the secret workings of the mind—is in particularly dire need. As journalist Ed Yong has reported, psychologists often goose up the statistical significance of their research to get it published, while failed replication studies regularly get rejected by psychology journals. In January, some psychologists tried to reverse this trend by setting up a website called PsychFileDrawer, where researchers can post unpublished attempts to replicate psychology studies. So far it has attracted only 13 entries.
At the moment, the Reproducibility Initiative can’t help the mess in psychology: There are no psychologists at Science Exchange ready to offer their services. But perhaps that, too, might change. Maybe the Reproducibility Initiative—and efforts like it—can transcend these logistical limits. Iorns and her colleagues are trying to reprogram the incentives in science. Right now, most of the incentives to take extra care rather than rush to publish sit on the stick side of the carrot-and-stick equation—first and foremost, the fear of having a paper retracted.
“If you are retracted, it’s career-breaking, and you’re a fraudulent scientist. It’s very negative,” Iorns says. “We said, ‘Why don’t we reward scientists who use high-quality data?’ Eventually the culture shifts from just funding originality, and instead we shift to rewarding things that are really true.”
Copyright 2012 Slate. Reprinted with permission.