2017 ESA Annual Meeting (August 6–11)

PS 88-237 - Assessing the reliability of the empirical literature in ecology and evolution

Friday, August 11, 2017
Exhibit Hall, Oregon Convention Center
Timothy H. Parker, Biology, Whitman College, Walla Walla, WA
Background/Question/Methods

Effective scientific progress requires a reliable scientific literature. For the scientific literature to be reliable, researchers must report methods and findings in an unbiased fashion. This is not a controversial idea, but lack of transparency appears to be common, and many disciplines suffer from an unreliable or biased empirical literature. What is the status of the empirical literature in ecology? I synthesized evidence from a diverse set of sources in ecology and evolutionary biology to assess the extent to which bias appears problematic in these disciplines.

Results/Conclusions

Empirical evidence and theory combine to suggest that bias is likely common in the ecology and evolutionary biology literature. I show that incomplete transparency regarding results is often evident across diverse samples of the ecology and evolutionary biology literature (in up to 85% of papers in some samples, and in over 40% of all effects in others). Worryingly, there are sound reasons to expect that many other cases of insufficient transparency are not so readily observed, so rates of insufficient reporting may be much higher. I also bring together evidence that small sample sizes, and the resulting low statistical power, are common in ecology and evolution, and evidence that the rate of statistically significant results we would expect given this low power is much lower than the rate of statistical significance actually reported in this literature. This elevated rate of statistical significance is a further strong indication of bias. Small samples not only produce low statistical power; they also often generate inflated effect size estimates. This is particularly worrying in light of evidence that these inflated effect sizes frequently end up in high-impact journals, biasing perceptions of the typical strength of biological effects. Finally, I present graphical models illustrating how low statistical power, the testing of unlikely hypotheses, and insufficient transparency of results are expected to create high rates of false positive results in the literature. Taken together, these diverse sources of evidence suggest that many published conclusions are unreliable and, rather than contributing to scientific progress, are hindering it by leading other researchers (and their research funding) down blind alleys. Fortunately, there is growing recognition of these problems, as well as a host of ideas for reducing bias. Individual researchers can take important steps to reduce bias in their own work, but journals, funding bodies, and universities are particularly well positioned to promote transparency and reduce bias.
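The abstract does not reproduce the underlying graphical models, but their core logic can be sketched with standard false-discovery arithmetic: if a fraction pi of tested hypotheses is true, tests are run at significance level alpha with power (1 - beta), and only significant results are reported, the expected share of false positives among reported results is alpha(1 - pi) / [alpha(1 - pi) + (1 - beta) * pi]. The Monte Carlo sketch below is a hypothetical illustration of that logic, not the poster's analysis; every parameter value is an assumption chosen for the example. With pi = 0.10, alpha = 0.05, and 15 subjects per group (power of roughly 12% for an assumed true effect of d = 0.3), the formula gives about 0.045 / 0.057 ≈ 0.79, i.e., roughly four of five "published" significant results would be false positives; the simulation reproduces this and also exhibits the small-sample effect size inflation the abstract describes.

```python
# Illustrative sketch (not taken from the poster): how low power,
# unlikely hypotheses, and reporting only significant results combine
# to produce a high false-positive rate and inflated effect sizes.
# All parameter values here are assumptions chosen for illustration.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

prior_true = 0.10    # assumed share of tested hypotheses that are true
true_d = 0.3         # assumed true effect (Cohen's d) when an effect exists
n_per_group = 15     # small per-group sample size, giving low power
alpha = 0.05
n_studies = 20_000   # hypothetical studies

pub_true = pub_false = 0
pub_abs_d = []

for _ in range(n_studies):
    effect_is_real = rng.random() < prior_true
    d = true_d if effect_is_real else 0.0
    a = rng.normal(0.0, 1.0, n_per_group)   # control group
    b = rng.normal(d, 1.0, n_per_group)     # treatment group
    _, p = stats.ttest_ind(b, a)
    if p < alpha:                           # only significant results are "published"
        pub_true += effect_is_real
        pub_false += not effect_is_real
        pooled_sd = np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        pub_abs_d.append(abs(b.mean() - a.mean()) / pooled_sd)

fdp = pub_false / (pub_true + pub_false)
print(f"False positives among 'published' significant results: {fdp:.0%}")
print(f"Mean 'published' |d|: {np.mean(pub_abs_d):.2f} vs. assumed true d = {true_d}")
```

Under these assumptions, an estimate must reach roughly |d| ≈ 0.75 to clear the significance threshold at n = 15 per group, so the estimates that survive selective reporting average well above the assumed true effect of 0.3; this is the same winner's-curse inflation the abstract links to high-impact journals.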