2017 ESA Annual Meeting (August 6 -- 11)

COS 41-10 - Descriptive vs. predictive: Complex data and models in ecology require rigorous evaluations

Tuesday, August 8, 2017: 11:10 AM
B113, Oregon Convention Center
Volker Bahn, Department of Biological Sciences, Wright State University, Dayton, OH
Background/Question/Methods

Science is advanced by creating useful models, whether qualitative or quantitative, and usefulness is determined by model evaluation. A common error is to equate goodness-of-fit with explanatory or predictive power. When models are overfit and are not tested on independent data, evaluations become overly optimistic and lead to erroneous conclusions about model usefulness. The standard solution is to hold out data for independent evaluation. However, ecological data often carry dependence structures, such as spatial autocorrelation, that invalidate a random hold-out. Incorrect evaluations can also lead to erroneous variable and model selection, particularly when latent variables are either directly correlated with included variables or share a common dependence structure with them. I elucidate the consequences of improper model evaluation for error estimates and variable selection in a simulation study and demonstrate techniques for rigorous evaluation. Because the true error is typically unknown for empirical data, these principles must first be investigated in simulated data, which need to be complex enough to allow for overfit models, latent variables, dependence structures, and intercorrelations. The simulations draw on the field of distribution modeling for realistically complex ecological systems and a set of commonly applied analytical methods.
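
As a minimal illustration of why a random hold-out can mislead with spatially autocorrelated data, the following Python sketch simulates a gridded response driven by one measured predictor and one unmeasured (latent) driver, alongside a spatially structured but functionally unrelated variable, and compares a random hold-out with a spatially blocked hold-out. It is a toy example under assumed settings (grid size, smoothing scale, random-forest learner), not the simulation model used in this study.

```python
# Minimal, self-contained sketch of the evaluation problem described above.
# NOT the simulation from the talk: grid size, smoothing scale, effect sizes,
# and the random-forest learner are all illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(1)
side = 40                                    # 40 x 40 grid of "sites"

def sa_field(sigma=5):
    """White noise smoothed into a standardized, spatially autocorrelated field."""
    f = gaussian_filter(rng.normal(size=(side, side)), sigma)
    return (f - f.mean()) / f.std()

x_true  = sa_field()                         # functionally related predictor
x_noise = sa_field()                         # unrelated, but spatially structured
latent  = sa_field()                         # unmeasured (latent) driver
y = 2.0 * x_true + 1.5 * latent + rng.normal(0.0, 0.5, (side, side))

X  = np.column_stack([x_true.ravel(), x_noise.ravel()])
yv = y.ravel()
col = np.tile(np.arange(side), side)         # grid column of each raveled cell

model = RandomForestRegressor(n_estimators=200, random_state=0)

# Random hold-out: train and test cells are spatially interleaved, so each test
# cell has near-identical neighbours in the training set.
idx = rng.permutation(len(yv))
tr, te = idx[: len(yv) // 2], idx[len(yv) // 2 :]
rmse_random = np.sqrt(mean_squared_error(
    yv[te], model.fit(X[tr], yv[tr]).predict(X[te])))

# Spatially blocked hold-out: train on the left half of the grid, test on the right.
tr, te = np.where(col < side // 2)[0], np.where(col >= side // 2)[0]
rmse_blocked = np.sqrt(mean_squared_error(
    yv[te], model.fit(X[tr], yv[tr]).predict(X[te])))

print(f"RMSE, random hold-out : {rmse_random:.2f}")   # tends to look optimistic
print(f"RMSE, blocked hold-out: {rmse_blocked:.2f}")  # closer to transferable error
```

Because neighboring cells share nearly identical predictor and latent values, the random hold-out rewards reproducing the training region's spatial pattern, whereas the blocked hold-out approximates prediction to genuinely new locations.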

Results/Conclusions

I found that standard evaluation methods such as resubstitution, random hold-out, or random cross-validation led to substantial underestimation of error. In addition, strategies that use such evaluations for variable or model selection systematically favored well-fitting models that predicted poorly and relied on functionally unrelated variables, in particular when latent variables were correlated with functionally unrelated variables through shared dependence structures such as spatial autocorrelation. While a shared dependence structure does not automatically make unrelated variables correlated, it increases the probability of chance correlations. In summary, a failure to identify overfit models and to consider the effects of latent variables and shared dependence structures leads to overly optimistic model evaluations, the selection of variables that are descriptive rather than predictive, and the impression of a functional connection where there is none. Advancement in ecology, a science with complex data and models, depends on a thorough differentiation between descriptive models and explanatory/predictive models, which can only be achieved by evaluation on independent data.
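
To illustrate the point about shared dependence structures, the following sketch (again an assumption-laden toy, not an analysis from this study) repeatedly simulates pairs of functionally unrelated variables, with and without spatial autocorrelation imposed by smoothing, and compares the resulting distributions of chance correlations.

```python
# Toy Monte Carlo sketch: spatial autocorrelation does not create correlation
# between unrelated variables, but it widens the distribution of chance
# correlations because smoothing shrinks the effective sample size.
# Grid size, smoothing scale, and replicate count are illustrative assumptions.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(2)
side, reps, sigma = 40, 500, 5

def chance_correlation(smooth):
    """Correlation between two independently generated fields."""
    a = rng.normal(size=(side, side))
    b = rng.normal(size=(side, side))
    if smooth:                                # impose a shared dependence structure
        a, b = gaussian_filter(a, sigma), gaussian_filter(b, sigma)
    return np.corrcoef(a.ravel(), b.ravel())[0, 1]

r_iid    = np.array([chance_correlation(False) for _ in range(reps)])
r_smooth = np.array([chance_correlation(True) for _ in range(reps)])

# Both distributions are centred on zero, but strong chance correlations become
# far more likely once both fields are spatially autocorrelated.
for label, r in (("iid fields     ", r_iid), ("smoothed fields", r_smooth)):
    print(f"{label}: sd(r) = {r.std():.3f}, P(|r| > 0.3) = {np.mean(np.abs(r) > 0.3):.2f}")
```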