2022 ESA Annual Meeting (August 14 - 19)

COS 22-5 Multimodel community forecasts of vegetation phenology: Results from year 1 of the NEON Forecasting Challenge

4:30 PM-4:45 PM
513A
Michael C. Dietze, Boston University; Kathryn I. Wheeler, Boston University; Min Chen, University of Wisconsin; Raphaela E. Floreani Buzbee, University of California, Berkeley; Ben R. Goldstein, UC Berkeley; Jessica S. Guo, University of Arizona; Dalei Hao, Pacific Northwest National Lab; Mira Kelly-Fair, Boston University; David LeBauer, University of Arizona; Haoran Liu, University of Wisconsin; Chris M. Jones, North Carolina State University Center for Geospatial Analytics; Charlotte Malmborg, Boston University; Naresh Neupane, Georgetown University; Debasmita Pal, Michigan State University; Andrew D. Richardson, PhD, Northern Arizona University; Leslie Ries, Georgetown University; Arun Ross, Michigan State University; Yiluan Song, University of California, Santa Cruz; McKalee Steen, University of California, Berkeley; R Quinn Thomas, Virginia Tech
Background/Question/Methods

Vegetation phenology plays an important role in regulating ecosystem processes and is a key bellwether for detecting climate change impacts. However, the skill of different phenological modeling approaches has yet to be tested in a true predictive context. To address this, the Ecological Forecasting Initiative (EFI) Research Coordination Network (RCN) organized an open community forecasting challenge that asked teams to predict PhenoCam greenness at eight temperate deciduous NEON (National Ecological Observatory Network) sites. The NEON Challenge asked teams to submit daily forecasts of green chromatic coordinate (GCC) 35 days into the future, with NOAA weather forecasts available for teams to use as model inputs. These were true phenology forecasts, with teams asked to predict future GCC values that could be compared to new observations every day. Altogether, over the first year of the challenge, teams submitted over 770k predictions from 17 different models. Here we assess the predictive skill of the community of forecasts over the first year of the Challenge using the Continuous Ranked Probability Score (CRPS), a metric that accounts for both accuracy and precision, and by comparing forecasts to both random walk (persistence) and historical means null models.
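For concreteness, the sketch below illustrates how a probabilistic GCC forecast could be scored with CRPS, assuming a Gaussian predictive distribution and using its well-known closed form; the function name and example values are illustrative assumptions, not the Challenge's actual scoring code.

```python
import numpy as np
from scipy.stats import norm

def crps_gaussian(y_obs, mu, sigma):
    """Closed-form CRPS of a Gaussian forecast N(mu, sigma^2) against an
    observation y_obs. Lower is better; the score rewards forecasts that
    are both accurate (mean near y_obs) and precise (small sigma)."""
    z = (y_obs - mu) / sigma
    return sigma * (z * (2 * norm.cdf(z) - 1) + 2 * norm.pdf(z) - 1 / np.sqrt(np.pi))

# Example: a hypothetical one-day-ahead GCC forecast scored against an observation
print(crps_gaussian(y_obs=0.38, mu=0.37, sigma=0.01))
```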

Results/Conclusions

Across models, CRPS increases approximately linearly as a function of forecast lead time before asymptoting for forecasts >25 days into the future, with a forecast error ~40% larger than that of the one-day forecast. All teams were able to beat the random walk “persistence” null model (tomorrow is the same as today plus uncertainty), but only two teams performed better than the historical means null forecast when compared across all sites and times, and this difference was not statistically significant. When normalizing phenological time across sites relative to the date of 50% greenup, the across-model mean CRPS was consistently lowest from the start of the year through 40 days pre-greenup, then rose to a peak around one week prior to greenup, before declining to a second, higher “summer” asymptote around 20 days post-greenup. These results suggest that predicting initial bud-burst remains challenging. Clear patterns in how model structure affects predictability have not yet emerged, but identifying them remains a key priority as we expand the forecasting challenge to new sites and ecosystem types in 2022 and beyond.
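As a hypothetical illustration of the persistence null model described above, the sketch below generates a 35-day random walk null forecast in which the mean stays at the last observed GCC and the predictive standard deviation grows with the square root of lead time; the process standard deviation and all names are assumptions for illustration, not the Challenge's actual null implementation, and the resulting means and standard deviations could be scored with the CRPS function sketched earlier.

```python
import numpy as np

def persistence_null(last_gcc, process_sd, max_horizon=35):
    """Random walk (persistence) null forecast: the mean is fixed at the
    last observed GCC, with uncertainty accumulating as sqrt(lead time)."""
    horizons = np.arange(1, max_horizon + 1)
    mu = np.full(max_horizon, last_gcc)        # tomorrow is the same as today...
    sigma = process_sd * np.sqrt(horizons)     # ...plus uncertainty that grows with lead time
    return horizons, mu, sigma

# Example: 35-day null forecast from an assumed last observation of GCC = 0.35
h, mu, sigma = persistence_null(last_gcc=0.35, process_sd=0.005)
```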