Politics : PRESIDENT GEORGE W. BUSH

To: DuckTapeSunroof who wrote (699103), 9/1/2005 12:59:17 PM
From: goldworldnet
 
Most scientific papers are probably wrong

Kurt Kleiner - NewScientist.com news service
02:00 30 August 2005

newscientist.com

Most published scientific research papers are wrong, according to a new analysis. Assuming that the new paper is itself correct, problems with experimental and statistical methods mean that there is less than a 50% chance that the results of any randomly chosen scientific paper are true.

John Ioannidis, an epidemiologist at the University of Ioannina School of Medicine in Greece, says that small sample sizes, poor study design, researcher bias, selective reporting, and other problems combine to make most research findings false. But even large, well-designed studies are not always right, meaning that scientists and the public have to be wary of reported findings.

"We should accept that most research findings will be refuted. Some will be replicated and validated. The replication process is more important than the first discovery," Ioannidis says.

In the paper, Ioannidis does not show that any particular findings are false. Instead, he shows statistically how the many obstacles to getting research findings right combine to make most published research wrong.

Massaged conclusions
Traditionally a study is said to be "statistically significant" if the odds are only 1 in 20 that the result could be pure chance. But in a complicated field where there are many potential hypotheses to sift through - such as whether a particular gene influences a particular disease - it is easy to reach false conclusions using this standard. If you test 20 false hypotheses, one of them is likely to show up as true, on average.
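To make the one-in-20 arithmetic concrete, here is a minimal simulation (not from the article or from Ioannidis's paper): it runs 20 two-sample t-tests on data where no real effect exists by construction and counts how many cross the traditional p < 0.05 threshold. The sample sizes, random seed, and use of SciPy's t-test are illustrative assumptions.

# Simulate 20 "studies" of a hypothesis that is false by construction
# and count how many reach p < 0.05 purely by chance.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha = 0.05
false_positives = 0

for _ in range(20):
    # Both groups are drawn from the same distribution, so any observed
    # difference is noise.
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:
        false_positives += 1

print(f"significant results among 20 true-null tests: {false_positives}")
# Expected count is 20 * 0.05 = 1, matching the "one in 20" intuition above.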

Odds get even worse for studies that are too small, studies that find small effects (for example, a drug that works for only 10% of patients), or studies where the protocol and endpoints are poorly defined, allowing researchers to massage their conclusions after the fact.
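One way to see how low prior plausibility and weak statistical power combine against a finding is the standard positive-predictive-value calculation: of all the results that come out "significant", what share reflect a real effect? The sketch below follows that textbook logic rather than the paper's exact notation, and every number in it is an illustrative assumption, not a figure from Ioannidis's analysis.

# Share of "significant" findings that are actually true, given the prior
# probability that the hypothesis is real, the study's power, and the
# significance threshold. All inputs below are illustrative assumptions.

def prob_finding_is_true(prior, power, alpha=0.05):
    true_positives = prior * power          # real effects correctly detected
    false_positives = (1 - prior) * alpha   # null effects crossing the threshold
    return true_positives / (true_positives + false_positives)

# A well-powered study of a fairly plausible hypothesis:
print(prob_finding_is_true(prior=0.5, power=0.8))    # about 0.94

# An exploratory field where only 1 in 20 candidate hypotheses is real
# and studies are underpowered:
print(prob_finding_is_true(prior=0.05, power=0.2))   # about 0.17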

Surprisingly, Ioannidis says another predictor of false findings is if a field is "hot", with many teams feeling pressure to beat the others to statistically significant findings.

But Solomon Snyder, senior editor at the Proceedings of the National Academy of Sciences, and a neuroscientist at Johns Hopkins Medical School in Baltimore, US, says most working scientists understand the limitations of published research.

"When I read the literature, I'm not reading it to find proof like a textbook. I'm reading to get ideas. So even if something is wrong with the paper, if they have the kernel of a novel idea, that's something to think about," he says.

Journal reference: Public Library of Science Medicine (DOI: 10.1371/journal.pmed.0020124)

* * *