How to tell when scientific 'breakthroughs' are bogus
If a discovery or study sounds too good to be true, it often is, and hard scrutiny is necessary to separate fact from fiction
On March 23, 1989, the Financial Times announced: "Nuclear fusion in a test tube". Seven years later, CNN ran a story titled "Ancient meteorite may point to life on Mars", complete with the sub-heading "Biggest discovery in the history of science". In each case, the reports were part of a flurry of worldwide media attention, of the kind normally reserved for royal births and movie-star shenanigans rather than scientists' announcements.
Alas, despite the hoopla, neither report proved to be true. Claims of deuterium (heavy hydrogen) atoms fusing together in a tabletop experiment have been widely discredited. Likewise, the scientific community is highly sceptical that tiny structures in a meteorite from Mars are fossilised micro-organisms.
These cases show that even with science reports, it's worth remembering the adage: If it seems too good to be true, it probably is.
I came across another spectacular-if-true situation some years ago, when magazine editors suggested I might write about a Chinese report of DNA being recovered from a dinosaur fossil. Especially in the wake of the movie Jurassic Park, this could make for a huge story.
"That's impossible!" I announced - or words to that effect, based on some knowledge of fossilisation and organic molecules. It was indeed another case of researchers being fanciful.
Yet issues with published science are rarely so clear-cut. At one end of the spectrum are subtle issues, which only other scientists may spot - as may be the case with experiments by Gregor Johann Mendel, a 19th-century monk now known as the "Father of Genetics".
Mendel grew peas, noting how traits passed from parent plants to their offspring. He had a firm grasp of the principles involved, and this may have influenced his results, which statisticians have since argued fit his expected ratios too closely to be plausible.
Mendel's notebooks were burned, but the notes of Robert Millikan have been scrutinised over a similar controversy. Millikan performed an "oil-drop experiment" to determine the charge of an electron, and won the 1923 Nobel Prize for Physics. He has been accused of performing "cosmetic surgery" on his data, rejecting observations so his overall result would appear more accurate.
But at least Mendel and Millikan aimed for valid science. Occasionally, there are cases of downright fraud. One of the most famous was the Piltdown Man skull, which was made public in 1912, and hailed as a "missing link" between apes and humans. In 1953 it was proved to be a forgery.
No one knows just who created Piltdown Man. But the perpetrator of a recent famous fraud is well known: Dr Hwang Woo-suk, a South Korean researcher once hailed as a national hero.
In 2005, he led a team that claimed to have created stem cell lines from cloned human embryos, using cells obtained from 11 people. This came soon after Hwang claimed to have achieved the first cloning of a human embryo. After a colleague said the research was faked, a panel investigated and announced: "This is a serious wrongdoing that has damaged the foundation of science."
Though the full mix of reasons for the scandal was unclear - with enthusiastic government support playing a role - one was surely the fact that the scientific world seeks novel results. This in turn means there is rarely any reward for verifying someone else's work. That is a pity, as inaccurate results slip through even supposedly rigorous peer-review processes.
In an article noting that two pharmaceutical companies had tried to reproduce 110 studies, and achieved the same results in less than a fifth of them, The Economist bleakly commented, "There are errors in a lot more of the scientific papers being published, written about and acted on than anyone would normally suppose, or like to think."
I'm not so pessimistic about the overall situation, though this may be due to my background in "hard science". If science is to make the headlines, results must be unusual, which in turn makes them inherently less likely to be true.
So if you hope to spot dud science in the media, bear in mind cautionary tales like cold fusion, and carefully check for the extraordinary evidence required to substantiate extraordinary claims.
Other signs to look for include whether the research appeared in a reputable journal such as Nature, or in one with low barriers to publication.
Ask: Does the experimental data really support the conclusion drawn? Also, importantly: Where did the research funding come from? Results may be skewed to favour the funder's interests, as with research sponsored by the tobacco industry.
Here in Hong Kong, I believe the "he who pays the piper calls the tune" principle applies to environmental impact assessments, which are paid for by would-be developers. Indeed, when conducting a bird survey for an EIA, I told the proponent that he owned an area of outstanding biodiversity, and was wryly amused when birds were excluded from the next round of surveys.
Another point that may seem obvious is whether the experiment had a large enough sample size. I had long believed that drinking coffee or tea is dehydrating, as it seemed to be established common knowledge. Yet I was recently surprised to find this was based mainly on a 1920s study involving just three men, and more recent assessments suggest that drinking moderate amounts of coffee or tea hydrates you just as much as water.
Phew. I'll drink - tea - to that!