Medical research into brain disorders using animals is often biased: study

Researchers say brain disorder therapies first tested on animals often fail in human trials

PUBLISHED : Sunday, 21 July, 2013, 12:00am
UPDATED : Sunday, 21 July, 2013, 1:58am


Medical research that uses animals to test therapies for human brain disorders is often biased, reporting positive results for treatments that then fail in human trials, US researchers said.

The findings by Stanford University researchers may help explain why many treatments that appear to work in animals do not succeed in humans. Such bias also wastes money and could expose patients in clinical trials to harm, said the study in PLOS Biology.

Researchers examined 160 previously published meta-analyses of 1,411 animal studies on potential treatments for multiple sclerosis, stroke, Parkinson's disease, Alzheimer's disease and spinal cord injury, covering more than 4,000 animals in total.

Just eight of the meta-analyses showed strong, statistically significant associations supported by data from more than 500 animals.

Only two seemed to lead to "convincing" data in randomised controlled trials in humans, the study said. The rest showed a range of problems, from poor study design and small sample sizes to an overarching tendency to publish only studies in which positive effects could be reported.

Statistically, just 919 of the studies could be expected to show positive results, but almost twice as many - 1,719 - reported them.
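
The comparison above is, in essence, an excess-significance check: sum each study's statistical power to get the number of positive results you would expect if every study were reported faithfully, then compare that with the count actually published. A minimal sketch of the arithmetic, assuming the figures quoted in the article (the function names and the illustrative power values are not taken from the paper):

```python
def expected_positives(study_powers):
    """Expected number of statistically significant results, assuming
    each study detects its (real) effect with the given power."""
    return sum(study_powers)

def excess_ratio(observed_pos, expected_pos):
    """Ratio of reported positives to the expected count; values well
    above 1.0 point to a bias toward positive findings."""
    return observed_pos / expected_pos

# Illustrative: three hypothetical studies with powers 0.8, 0.5 and 0.6
# would be expected to yield about 1.9 positive results between them.
print(expected_positives([0.8, 0.5, 0.6]))

# Figures quoted in the article: 1,719 reported positives against an
# expected 919.
print(round(excess_ratio(1719, 919), 2))  # -> 1.87
```

On these numbers, nearly 1.9 times as many positive results were reported as the underlying effects would predict, which is the gap the researchers attribute to bias.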

"The literature of animal studies on neurological disorders is probably subject to considerable bias," the paper concluded. "Biases in animal experiments may result in biologically inert or even harmful substances being taken forward to clinical trials, thus exposing patients to unnecessary risk and wasting scarce research funds."

Animal studies make up a "considerable portion" of biomedical literature, with some five million papers in the medical PubMed database, it said.

While animal research exists to test safety and efficacy before new treatments are tried in humans, most interventions fail when they reach human clinical trials, the researchers said. They said the bias probably originated when scientists conducting the studies chose a way of analysing the data that appeared to give a better result.

Solutions may include stricter guidelines for study design and analysis, pre-registration of studies so that results must be published whether positive or negative, and making raw data available to other scientists.