Thursday, August 10, 2017: 4:00 PM
B110-111, Oregon Convention Center
Background/Question/Methods: Benchmark papers in conservation tend to address trends or patterns at global scales and to produce maps or figures that deliver striking conclusions. However, the data behind these large-scale analyses are inevitably sequestered in supplemental materials and are often very hard to find. In some cases the foundation is not data at all but expert opinion. We used Web of Science to identify the ten most cited conservation papers published in NATURE and SCIENCE between 2000 and 2017, and then examined in detail the foundational data underlying those papers.
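For readers wanting to reproduce the ranking step, the sketch below shows one way to pull the most cited records from a Web of Science export. It is a minimal illustration, not the authors' actual workflow: the file name and the column labels ("TI" for title, "TC" for times cited) are assumptions about a tab-delimited export and may need to be adjusted to match the fields in a given download.

```python
# Minimal sketch: rank an exported Web of Science record set by citation count.
# Assumes a tab-delimited export with columns "TI" (title) and "TC" (times cited);
# the file name and column labels are illustrative, not taken from the abstract.
import csv


def top_cited(path, n=10):
    """Return the n most cited records from a tab-delimited export."""
    with open(path, newline="", encoding="utf-8-sig") as fh:
        records = list(csv.DictReader(fh, delimiter="\t"))

    def cites(rec):
        # Treat missing or malformed citation counts as zero.
        try:
            return int(rec.get("TC", 0))
        except (TypeError, ValueError):
            return 0

    return sorted(records, key=cites, reverse=True)[:n]


if __name__ == "__main__":
    # Hypothetical export covering conservation papers in Nature and Science, 2000-2017.
    for rec in top_cited("wos_conservation_nature_science_2000_2017.txt"):
        print(rec.get("TC"), rec.get("TI"))
```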
Results/Conclusions: In over half of the ten most cited papers, the key “data” were based on judgements, indices, or models and were hard to relate back to actual observations. In a third of the cases it would be difficult to return to the data and assess whether the conclusions were warranted. For the global analyses, patterns at the global scale could well be discordant with those at local scales. We ask to what extent this trend diminishes the pressure to collect original data, or risks yielding erroneous conclusions that cannot be tested or refuted.