Academic Research is a Dud by Brian Simpson
For some time now a small group of scientists has been raising sophisticated doubts about the soundness of virtually all statistically based research in the social sciences and medicine. The now classic paper is John Ioannidis, “Why Most Published Research Findings Are False,” PLoS Medicine 2(8) (2005).
The problems are many, ranging from the misuse and abuse of P-value significance tests to systematic biases arising from small, unrepresentative samples.
Yet another addition to this literature is A. D. Higginson and M. R. Munafò, “Current Incentives for Scientists Lead to Underpowered Studies with Erroneous Conclusions,” PLoS Biology, November 10, 2016. Building on the Ioannidis-inspired research, these scientists argue that current funding mechanisms reward small, quick studies that are likely to produce false results.
Using an ecological analysis, they show that “fitness,” that is, academic success, is maximized in this way rather than by careful, large studies that take more time and resources.
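The statistical mechanism behind this claim can be illustrated with a minimal simulation. The sketch below is not from the Higginson and Munafò paper; the sample sizes, the share of hypotheses that are truly real, and the effect size are all illustrative assumptions. It runs many simulated z-test studies and compares what share of “significant” findings are actually false when studies are small versus large:

```python
import math
import random

random.seed(0)

def false_discovery_share(n_per_study, n_studies=20000,
                          prior_true=0.1, effect=0.3):
    """Simulate one-sample z-tests with known SD = 1.

    A fraction `prior_true` of tested hypotheses have a real effect
    of size `effect`; the rest are null. Returns the share of
    statistically significant results that are false positives.
    """
    crit = 1.96  # two-sided test at alpha = 0.05
    true_pos = false_pos = 0
    for _ in range(n_studies):
        is_real = random.random() < prior_true
        mu = effect if is_real else 0.0
        # The sample mean of n draws from N(mu, 1) is N(mu, 1/n).
        sample_mean = random.gauss(mu, 1 / math.sqrt(n_per_study))
        z = sample_mean * math.sqrt(n_per_study)
        if abs(z) > crit:
            if is_real:
                true_pos += 1
            else:
                false_pos += 1
    significant = true_pos + false_pos
    return false_pos / significant if significant else 0.0

small = false_discovery_share(n_per_study=20)   # underpowered study
large = false_discovery_share(n_per_study=200)  # well-powered study
print(f"false-discovery share, n=20:  {small:.2f}")
print(f"false-discovery share, n=200: {large:.2f}")
```

Both study sizes use the same 5 percent significance threshold, yet the small studies yield a far higher proportion of false findings among their “discoveries,” because low power means few real effects are detected while the stream of false positives stays constant.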
The problem is not merely academic. Roughly 50 percent of findings in medical research cannot be replicated in human trials, which puts a large question mark over the use of some Big Pharma drugs.