2017 ESA Annual Meeting (August 6 -- 11)

PS 88-235 - How common and acceptable are questionable research practices in ecology and evolution?

Friday, August 11, 2017
Exhibit Hall, Oregon Convention Center
Fiona Fidler1, Tim Parker2, Shinichi Nakagawa3, Ashley Barnett4, Hannah Fraser4 and Steve Kambouris4, (1)School of BioSciences, University of Melbourne, Melbourne, Australia, (2)Whitman College, (3)The University of New South Wales, (4)The University of Melbourne
Background/Question/Methods

Questionable Research Practices (QRPs) fall short of scientific misconduct, and are certainly not fraud, but they are arguably undesirable. Examples include reporting only the conditions that produced a statistically significant result, stopping data collection once a desired result is reached, excluding data points because doing so helps achieve the desired result, and reporting an unexpected finding as having been predicted from the start. These practices can seriously inflate the false positive rate in the literature and lead to irreproducible results.
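The inflation of the false positive rate can be illustrated with a small simulation (not part of the survey itself; sample sizes and the single "optional look" are illustrative assumptions). Here, data are drawn from a true null, and, mimicking the practice of collecting more data after checking significance, a second batch is collected and the test repeated whenever the first test misses the threshold:

```python
# Sketch: simulating "optional stopping" under a true null hypothesis.
# Assumed setup (illustrative): z-test with known sigma = 1, alpha = 0.05,
# one extra look after adding a second batch of data.
import random
import math

def z_p_value(sample):
    """Two-sided p-value for H0: mean = 0 with known sigma = 1 (z-test)."""
    n = len(sample)
    z = (sum(sample) / n) * math.sqrt(n)
    # Two-sided p-value via the standard normal CDF (math.erf).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

def run_once(rng, n1=20, n2=20, alpha=0.05):
    """Test after n1 points; if not significant, add n2 more and test again."""
    sample = [rng.gauss(0, 1) for _ in range(n1)]
    if z_p_value(sample) <= alpha:
        return True          # "significant" on the first look
    sample += [rng.gauss(0, 1) for _ in range(n2)]
    return z_p_value(sample) <= alpha   # second look after more data

rng = random.Random(1)
trials = 10_000
false_pos = sum(run_once(rng) for _ in range(trials)) / trials
print(f"False-positive rate with one optional look: {false_pos:.3f}")
```

Although each individual test uses a nominal 0.05 threshold, the extra data-dependent look pushes the overall false positive rate noticeably above 0.05, which is the distortion the abstract refers to.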

We are undertaking a survey of researchers in ecology and evolution who have published in leading journals. The survey covers 10 QRPs, asking researchers whether they have engaged in each practice, whether others they know have, and whether they consider the practice acceptable. We also offer space for open-ended defences of each practice. We currently have ~400 responses and expect up to 600 by the August ESA meeting.

Results/Conclusions
Our preliminary results follow. Each QRP we asked about is listed below, followed by the percentage of respondents who said they had used the practice at least once and the percentage who said it was acceptable at least some of the time.

1) Not reporting studies or variables that failed to reach statistical significance (e.g. p ≤ 0.05) or some other desired statistical threshold. 65% / 66%
2) Not reporting covariates that failed to reach statistical significance (e.g. p ≤ 0.05) or some other desired statistical threshold. 44% / 55%
3) Reporting an unexpected finding or a result from exploratory analysis as having been predicted from the start. 48% / 50%
4) Reporting a set of statistical models as the complete tested set when other candidate models were also tested. 52% / 61%
5) Rounding off a p-value or other quantity to meet a pre-specified threshold (e.g., reporting p = 0.054 as p = 0.05 or p = 0.013 as p = 0.01). 27% / 21%
6) Deciding to exclude data points after first checking the impact on statistical significance (e.g. p ≤ 0.05) or some other desired statistical threshold. 24% / 33%
7) Collecting more data for a study after first inspecting whether the results are statistically significant (e.g. p ≤ 0.05). 38% / 82%
8) Changing to another type of statistical analysis after the analysis initially chosen failed to reach statistical significance (e.g. p ≤ 0.05) or some other desired statistical threshold. 52% / 64%