Challenges to Forest Landscape Modeling Under Climate Change: Model parameterization and validation

posted Jul 1, 2013, 11:08 AM by Robert Scheller

Parameterization and model evaluation (aka validation) are among our greatest challenges.  Parameterization is the process of finding data to support all of the model's various processes, both statistical and mechanistic.  For this discussion, I will not consider model initialization - capturing the demographics and spatial distribution of tree species in particular - as this challenge is common to any forest modeling exercise, regardless of the motivating driver.  Parameterization for climate change forecasting presents its own unique challenges.


For forests, each species or functional group may require its own set of parameters.  If the parameters have units, you might be lucky enough to find reasonable values in the literature.  Or you may need to collect the data yourself, which is always an expensive proposition.  If the parameters are unit-less (such as the slope or intercept in a statistical model or the shape parameters common in many non-linear functions), calibration is often used to arrive at a reasonable 'best guess', whereby the parameters are adjusted until the model meets an expected outcome.
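To make calibration concrete, here is a minimal sketch in Python of tuning a unit-less shape parameter until a toy model matches an expected outcome.  The growth function, driver values, and target value are invented for illustration, not drawn from any particular model:

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Hypothetical target: the mean annual growth we expect the model to reproduce.
observed_mean_growth = 2.4  # e.g., Mg biomass / ha / yr

# Hypothetical driver values (e.g., available light) the toy model responds to.
light = np.linspace(0.1, 1.0, 50)

def simulated_growth(shape):
    """Toy growth response; the unit-less 'shape' sets the curvature."""
    return 5.0 * light ** shape  # 5.0 = assumed maximum growth rate

def objective(shape):
    # Squared gap between the model's mean prediction and the expected outcome.
    return (simulated_growth(shape).mean() - observed_mean_growth) ** 2

result = minimize_scalar(objective, bounds=(0.1, 5.0), method="bounded")
print(f"Calibrated shape parameter: {result.x:.3f}")
```

In practice the 'expected outcome' would come from field data or the literature, and each model run would be far more expensive than this toy function - which is exactly why calibration is usually reserved for the parameters that cannot be measured directly.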


The parameterization of forest models could immediately be improved by forging stronger links with experimental ecologists and long-term research sites.  For example, results from the AmeriFlux sites in North America could inform how tree species growth and respiration are represented in climate change studies.  Such links will be particularly useful for shorter-term processes, including tree species recruitment (i.e., establishment) and growth.  This is the route we have taken in our research in the New Jersey Pine Barrens and in Wisconsin, using eddy covariance flux towers at both sites.  In the Lake Tahoe Basin, we are using tree ring data to determine annual growth and growth responses to drought and insect defoliation.  In doing so, we decided to alter a shape parameter that dictates the sensitivity of growth to drought.
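As a hedged illustration of that last step, the sketch below fits a drought-sensitivity shape parameter to paired tree-ring and drought-index observations.  The exponential response form and the numbers are assumptions for the example, not the actual Lake Tahoe analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical paired observations: a drought index (more negative = drier)
# and detrended ring-width indices for the same years.
drought_index = np.array([-4.0, -3.1, -2.2, -1.0, 0.0, 1.2, 2.5])
ring_width_index = np.array([0.55, 0.62, 0.74, 0.88, 1.00, 1.05, 1.10])

def drought_modifier(d, shape):
    """Growth multiplier; 'shape' sets how steeply growth falls as d gets drier."""
    return np.exp(shape * d)

popt, _ = curve_fit(drought_modifier, drought_index, ring_width_index, p0=[0.1])
print(f"Fitted drought-sensitivity shape parameter: {popt[0]:.3f}")
```

The fitted parameter can then replace a literature default in the growth equations, anchoring the model's drought response in local, observed tree-ring behavior.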


Longer-term processes, such as tree mortality, will require the analysis of long-term data sets, such as national inventories or long-term research plots (e.g., LTER or Smithsonian plots).  In the Lake Tahoe Basin, we have access to long-term records of insect outbreaks that have provided many critical parameters, such as insect outbreak frequency, the average number of hectares affected per year, and the average patch size of an outbreak.  More challenging is forging the link between climate change and insect outbreaks; PDSI (the Palmer Drought Severity Index) appears to be the best indicator available.
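For instance, turning such a record into model parameters can be as simple as the following sketch (all values are invented for illustration):

```python
import numpy as np

# Hypothetical 100-year outbreak record: event years, area affected (ha),
# and mean patch size (ha) per event.
outbreak_years = np.array([1912, 1931, 1947, 1968, 1989, 2004])
area_affected_ha = np.array([850, 1200, 400, 2100, 950, 1600])
mean_patch_ha = np.array([12, 25, 8, 40, 15, 30])

record_length_yr = 100

outbreak_frequency = len(outbreak_years) / record_length_yr    # events per year
avg_area_per_year = area_affected_ha.sum() / record_length_yr  # ha per year
avg_patch_size = mean_patch_ha.mean()                          # ha per patch

print(f"Outbreak frequency: {outbreak_frequency:.2f} events/yr")
print(f"Average area affected: {avg_area_per_year:.1f} ha/yr")
print(f"Average patch size: {avg_patch_size:.1f} ha")
```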


The flip side of parameterization is validation: How well does the model perform compared to independent data?  The challenge here is that, given the expense or difficulty of finding parameters, there is rarely 'left over' data available for independent validation.  If we focus instead on longer-term emergent behaviors (the most interesting anyway), we must patiently wait (or build a time machine) to discover whether our model is behaving in a reasonable fashion.  Model forecasts or projections of the effects of climate change simply cannot be fully validated.  Backcasting is often suggested as a solution but presents its own problems (lack of data, lack of models that run backwards).


This could be regarded as a total failure of the enterprise - why should anyone trust unvalidated results? - but I believe we need to reframe the problem.  Validation is only one dimension of building confidence in modeling results.  Perhaps it is the gold standard, much as double-blind trials are to clinical medicine.  However, as we know from medicine, double-blind trials are not always an option (in their case, due to ethical concerns) and other measures are employed.


Instead of further hand-wringing, we need to redefine the basis for accepting or rejecting model results.  Rather than strict validation, we need to emphasize the role of confidence building when evaluating models.  We all know that all models are wrong (some are useful).  But how do we provide a more useful narrative that will help non-modelers discern the quality of model results?  I suggest a multi-pronged approach to building model confidence:

1) Validation of model components: Are the model pieces adequately validated against current-day conditions?  Space-for-time substitution may be a necessary but not sufficient step towards validation.

2) Track record: Does the model have a history of application and publications that lend weight to the accumulated wisdom embodied in the model?

3) Sensitivity testing: Has the model undergone rigorous sensitivity testing?  Sensitivity testing is generally required for a model to be broadly accepted, as it provides critical information about which parameters have the largest influence on model results.  Why invest in improving unimportant parameters?  (A simple sketch follows this list.)

4) Model transparency: Is the model code open for public review?  Is the code well documented, both internally and via accessible documentation?

5) Robustness and reliability: Is the model robust and reliable?  Has it been well verified - in other words, was it well constructed?  Modern software engineering practices are critical for ensuring reliability.

Any new model can make significant strides towards achieving greater confidence by attending to items 3, 4, and 5 at a relatively early stage.
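To make item 3 concrete, here is a minimal one-at-a-time sensitivity sketch in Python.  The toy model and baseline parameter values are assumptions for illustration; a real analysis would perturb the actual model's parameters across many replicate runs:

```python
# Toy stand-in for a forest model run: returns one output (e.g., total biomass).
def toy_model(params):
    return params["max_growth"] * params["establishment"] / (1.0 + params["mortality"])

baseline = {"max_growth": 5.0, "establishment": 0.6, "mortality": 0.02}
base_output = toy_model(baseline)

# Perturb each parameter by +/-10% and report the relative change in output.
for name in baseline:
    for factor in (0.9, 1.1):
        perturbed = dict(baseline, **{name: baseline[name] * factor})
        change = (toy_model(perturbed) - base_output) / base_output * 100.0
        print(f"{name} x{factor:.1f}: output changes {change:+.1f}%")
```

Even this crude screening quickly reveals which parameters dominate the output and therefore deserve the parameterization effort described above.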


Notice what we have excluded: the often-applied 'reasonable behavior' (also known as 'passes the sniff test') form of validation.  A modeler may argue that the model generally behaves as expected under the simulated circumstances.  Under a changing climate, however, unexpected or surprising results may ultimately be more accurate, and the validity of personal experience can be expected to decline.  Together, these criteria can help users and policy makers place a model along the continuous spectrum from fully validated (not possible given climate change) to completely speculative.


Together, parameterization and 'confidence building' represent the bulk of the work ahead of us.  (Creating new models is relatively easy by comparison.)  Climate change is making our work ever more difficult - and exciting!


- Robert Scheller
