Scientific researchers push for sharing data and greater transparency in experiments
Diederik Stapel, a professor of social psychology in the Netherlands, had been a rock-star scientist - regularly appearing on television and publishing in top journals. Among his striking discoveries was that people exposed to litter and abandoned objects are more likely to be bigoted.
And yet there was often something odd about his research. When students asked to see his data, he couldn't produce it readily. Colleagues would sometimes look at his data and think: It's beautiful. Too beautiful. Most scientists have messy data, contradictory data, incomplete data, ambiguous data. This data was too good to be true.
Then, in late 2011, Stapel admitted that he'd been fabricating data for many years.
The Stapel case was an outlier, an extreme example of scientific fraud. But this and several other high-profile cases of misconduct resonated in the scientific community because of a much broader, more pernicious problem: Too often, experimental results can't be reproduced.
That doesn't mean the results are fraudulent or even wrong. But in science, a result is supposed to be verifiable by a subsequent experiment. An irreproducible result is inherently squishy.
And so a movement is afoot, and it is rapidly building momentum. Top-tier journals, such as Science and Nature, have announced new guidelines for the research they publish.
"We need to go back to basics," said Ritu Dhand, the editorial director of the Nature group of journals. "We need to train our students over what is okay and what is not okay, and not assume that they know."
The pharmaceutical companies are part of this movement. Big Pharma has massive amounts of money at stake and wants to see more rigorous pre-clinical results from outside laboratories. The academic laboratories act as lead-generators for companies that make drugs and put them into clinical trials. Too often these leads turn out to be dead ends.
Some pharmaceutical companies are willing to share data with each other, a major change in policy in a competitive business. "It's really been amazing the last 18 months, the movement of more and more companies getting in line with the philosophy of enhanced data sharing," says Jeff Helterbrand, Global Head of Biometrics for Roche in South San Francisco.
Scientific errors get a lot of publicity, but these embarrassing cases often demonstrate science at its self-correcting best.
Consider "cold fusion": In 1989, two scientists claimed to have achieved nuclear fusion at room temperature, previously considered impossible. It was a bombshell announcement - but no one else could replicate their work. Cold fusion didn't take off because mainstream scientists realised it wasn't real.
A more recent case involved "arsenic life". In 2010 a paper in Science suggested that a bacterium in Mono Lake, California, used arsenic instead of phosphorus in its genetic code and represented a new form of life. Rosemary Redfield, a scientist, cast doubt on the conclusion, and other researchers couldn't replicate the finding. The consensus is that it was a misinterpretation.
In early 2014, the scientific world was rocked by a scandal in Japan. A young scientist, Haruko Obokata, claimed to have found evidence for a phenomenon called Stap, for Stimulus-Triggered Acquisition of Pluripotency - a way to manipulate ordinary cells to turn them into stem cells capable of growing into a variety of tissues.
But no one else could reproduce the experiment. An investigation found Obokata guilty of misconduct and she later resigned from her institute. The journal Nature retracted the Stap papers, and then the case took a horrific turn in August, when Obokata's mentor, the highly respected scientist Yoshiki Sasai, hanged himself.
Some veteran scientists have sounded a cautious note.
"Look, science is complicated, because the world is complicated," says Eric Lander, head of the Broad Institute of MIT and Harvard and co-chair of the President's Council of Advisors on Science and Technology.