Thu, Aug 18, 2022: 10:45 AM-11:00 AM
514B
Background/Question/Methods

When faced with too little data to guide major environmental decisions, we often rely on expert scientific judgement. But how can expert judgement work in a complex decision-making process when there are too many options to ask the experts about each one? One solution is to distill expert judgement and bottle it into a more generalizable format: causal models.

King County, Washington (Seattle area) is evaluating options for major stormwater and wastewater projects to improve water quality. The utilities are estimating pollutant reductions for each option; we relied on expert judgement to evaluate how those reductions translate into benefits for people and wildlife: swimming, fishing, shellfishing, and orca and Chinook salmon populations. We needed to evaluate combinations of different management options in many individual watersheds, compare tradeoffs among the benefits, and iterate through new possibilities with utilities and decision-makers. To meet these needs, we worked with panels of experts to build a toolkit of causal models, both conceptual models and quantitative Bayesian network models. The causal models have been scrutinized and improved through structured peer review and broader community feedback (though not yet validated with empirical data).
Results/Conclusions

Distilling expert judgement into causal models allowed us to evaluate many more options than we could feasibly ask the expert panels about. In addition, using models helped us to combine different areas of expertise, to ensure consistency across all the evaluations, and to have a transparent process we can explain and defend. Most of our models turned out to be fairly simple, requiring no special software or knowledge of Bayesian statistics (e.g., no need for priors), just a willingness to think in terms of how probable various outcomes are (a minimal sketch follows the list below).

This talk shows some examples of model structures and outcomes, but focuses mainly on the lessons we learned in developing and using these models, as examples and advice for building sound expert-based environmental models. Our main take-away lessons include:

- Different management frameworks need different endpoints.
- Be explicit about timeframes and variability.
- Check for frequency distributions pretending to be probability distributions.
- Let continuous inputs stay continuous.
- Define everything in enough detail to use as a field or lab protocol.
- Design with validation in mind.
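To illustrate the simplicity claimed above, the following sketch evaluates a tiny expert-elicited Bayesian network in plain Python, with no special software. The node names, states, and probabilities (a pollutant-reduction option, a bacteria level, and a swimmability endpoint) are hypothetical stand-ins for illustration, not values from the King County models.

    # Minimal sketch, assuming a two-link chain elicited from experts:
    # pollutant reduction -> bacteria level -> water swimmable.
    # All states and numbers below are hypothetical.

    # Expert-elicited conditional probability table:
    # P(bacteria level | pollutant reduction option)
    p_bacteria = {
        "high_reduction": {"low": 0.70, "medium": 0.25, "high": 0.05},
        "low_reduction":  {"low": 0.20, "medium": 0.50, "high": 0.30},
    }

    # Expert-elicited P(swimmable | bacteria level)
    p_swimmable = {"low": 0.90, "medium": 0.60, "high": 0.10}

    def prob_swimmable(option: str) -> float:
        """Marginalize over bacteria level:
        P(swimmable | option) = sum_b P(swimmable | b) * P(b | option)."""
        return sum(
            p_swimmable[level] * prob
            for level, prob in p_bacteria[option].items()
        )

    # Compare management options on the same benefit endpoint.
    for option in ("high_reduction", "low_reduction"):
        print(option, round(prob_swimmable(option), 3))

Because each node's table is elicited directly as "how probable is each outcome given its causes," no priors or specialized inference machinery are needed; evaluating an option is just a weighted sum over the expert-supplied tables, which is what makes it cheap to rerun for many option combinations across watersheds.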