There have been several attempts to canonize principles by which to guide (and judge) the use of uncertainty analyses in environmental risk assessments. Important publications include Morgan and Henrion (1990), Burmaster and Anderson (1994), Firestone et al. (1997), APHIS (2001), Hart et al. (2004), and various “framework” and guidance[1] documents from US EPA. Some of these collections of guiding principles have been rendered partly obsolete by recent methodological advances, and some contain ideas that appear misguided or unnecessary.
We've tried to create a concise and up-to-date list of principles that should be helpful in specifying and developing environmental risk assessments. These principles address the formulation, analysis and communication of (forward) risk assessments, and they cover some of the most important issues and standards of practice. Following them should give risk analysts confidence that they've produced a sound assessment.
Ask a question that’s both relevant and specific.
Be prepared to change the question if it can’t be answered.
Analyze uncertainty.
You can always do an uncertainty analysis.
Having too little empirical information is surely not an argument to skip the uncertainty analysis.
Poor data may require using non-probabilistic methods.
Use bounding rather than intentionally biasing numerical values to account for incertitude.
Biasing cannot simultaneously reveal how bad and how good the outcome can be.
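For illustration, here is a minimal sketch (not drawn from any particular assessment) of how a bounding calculation carries both the best and the worst case through a simple, invented dose model, something a single intentionally biased value cannot do. The variable names and numbers are hypothetical.

```python
# A minimal sketch (hypothetical) of bounding versus biasing:
# dose = concentration * intake / bodyweight, with incertitude in each
# input expressed as an interval (lo, hi).

def interval_mul(a, b):
    """Multiply two intervals (lo, hi) with finite endpoints."""
    products = [a[0]*b[0], a[0]*b[1], a[1]*b[0], a[1]*b[1]]
    return (min(products), max(products))

def interval_div(a, b):
    """Divide interval a by interval b, assuming zero is not inside b."""
    return interval_mul(a, (1.0/b[1], 1.0/b[0]))

concentration = (0.2, 0.9)    # mg/L, known only to within this range
intake        = (1.0, 2.0)    # L/day
bodyweight    = (60.0, 80.0)  # kg

dose = interval_div(interval_mul(concentration, intake), bodyweight)
print(dose)  # both the best case and the worst case are preserved

# A single "conservative" value (e.g., taking concentration = 0.9) would
# show how bad the dose can be, but not how good it can be.
```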
Avoid entanglement in purely mathematical problems.
Real risk assessments should not involve distributions with infinite ranges.
Don’t worry about results that hinge on a set being “closed” or “open”.
Admit what you don’t know.
Characterize the uncertainty for every input parameter.
Respect and use all available data, without believing their hype.
Do not assume the observed range is the possible range.
Do not assume a precise distribution without sufficient empirical or theoretical justification.
Don’t set “unimportant” variables to constants, though you can characterize their uncertainty coarsely.
Characterize the interaction or dependence among all the parameters.
Do not assume independence among variables without theoretical justification (see the sketch below).
Do not assume variables are merely correlated without reasonable justification.
Take account of any impossibility of certain combinations.
If possible, insist on empirical confirmation.
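The following sketch, with invented lognormal inputs, illustrates why the independence assumption matters: the spread of even a simple sum changes appreciably depending on whether the two inputs are paired independently or comonotonically (perfectly rank-correlated).

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100_000

# Two hypothetical lognormal exposure factors.
a = rng.lognormal(mean=0.0, sigma=0.5, size=n)
b = rng.lognormal(mean=0.0, sigma=0.5, size=n)

# Independent combination: pair the samples as drawn.
total_indep = a + b

# Comonotonic (perfectly rank-correlated) combination: sort both.
total_comon = np.sort(a) + np.sort(b)

print("variance, independence assumed :", total_indep.var())
print("variance, perfect dependence   :", total_comon.var())
# The upper tail of the sum is heavier under perfect dependence, so an
# unjustified independence assumption can understate extreme risks.
```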
Characterize the uncertainty about the model itself, including its structural form.
Express model uncertainty as parametric uncertainty when possible.
Use prediction intervals rather than confidence intervals from regressions.
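As a reminder of the distinction, the sketch below fits an ordinary least-squares line to hypothetical data and computes both intervals at a new point using the standard textbook formulas; the prediction interval, which reflects the uncertainty about a single future observation, is necessarily the wider of the two.

```python
import numpy as np
from scipy import stats

# Hypothetical data for a simple linear regression.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.1, 2.9, 3.4, 4.8, 5.1, 6.3, 6.8, 8.2])

n = len(x)
b1, b0 = np.polyfit(x, y, 1)             # slope and intercept
resid = y - (b0 + b1 * x)
s = np.sqrt(np.sum(resid**2) / (n - 2))  # residual standard error
sxx = np.sum((x - x.mean())**2)
t = stats.t.ppf(0.975, df=n - 2)

x0 = 5.5                                 # point where a prediction is needed
yhat = b0 + b1 * x0
se_mean = s * np.sqrt(1/n + (x0 - x.mean())**2 / sxx)
se_pred = s * np.sqrt(1 + 1/n + (x0 - x.mean())**2 / sxx)

print("95% confidence interval for the mean response:",
      (yhat - t*se_mean, yhat + t*se_mean))
print("95% prediction interval for a new observation:",
      (yhat - t*se_pred, yhat + t*se_pred))
# The prediction interval reflects the uncertainty about a single future
# value, which is usually what the risk model needs.
```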
Account for the different kinds of uncertainty.
Represent incertitude with bounding methods.
Incertitude includes plus-and-minus ranges and other forms of measurement uncertainty.
Incertitude includes data censoring and uncertainty arising from laboratory non-detects.
Incertitude usually includes doubt about model structure and other kinds of scientific ignorance.
Model variability using probability methods (e.g., mixture models; sketched below).
Variability usually includes spatial variation and temporal fluctuation.
Variability usually includes genetic differences and heterogeneity among individuals.
Variability usually includes inconsistencies in fabrication and sometimes material nonuniformity.
Variability usually includes any of the consequences of natural stochasticity.
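A simple way to represent such heterogeneity is a finite mixture, as in the sketch below, which uses invented parameters for two hypothetical subpopulations with different intake distributions.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Hypothetical example: 30% high-consumption and 70% low-consumption
# subpopulations, each with its own lognormal intake distribution.
weights = np.array([0.3, 0.7])
means   = np.array([2.0, 0.5])   # underlying-normal means for each subgroup
sigmas  = np.array([0.4, 0.6])

# Sample the mixture: first pick a subgroup, then draw from its distribution.
group = rng.choice(len(weights), size=n, p=weights)
intake = rng.lognormal(mean=means[group], sigma=sigmas[group])

print("population mean intake:", intake.mean())
print("95th percentile intake:", np.quantile(intake, 0.95))
```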
Treat incertitude and variability separately, and differently.
The characterization of uncertainty in the inputs should not conflate the two kinds of uncertainty.
Treating incertitude as variability is much worse than treating variability as incertitude.
The convolutions of the risk model should preserve the two kinds of uncertainty in the outputs.
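One common way to preserve the distinction is two-dimensional (nested) Monte Carlo simulation, sketched below with invented numbers: the outer loop steps through plausible values of an imprecisely known parameter, the inner loop simulates variability among individuals, and the output is reported as a range of distributions (here, a range for the 95th percentile) rather than a single, falsely precise distribution.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical two-dimensional (nested) Monte Carlo sketch: incertitude
# about a location parameter is kept separate from variability among
# individuals, so the output is a family of distributions, not one.
n_outer = 50        # epistemic samples: candidate values of the median dose
n_inner = 10_000    # variability samples per candidate

candidate_medians = np.linspace(0.8, 1.4, n_outer)  # invented plausible range
p95 = []
for m in candidate_medians:
    doses = rng.lognormal(mean=np.log(m), sigma=0.5, size=n_inner)
    p95.append(np.quantile(doses, 0.95))

print("95th-percentile dose lies between",
      round(min(p95), 3), "and", round(max(p95), 3))
# Collapsing the outer loop into the inner one would smear incertitude into
# variability and report a single, falsely precise distribution.
```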
Avoid making assumptions.
The more assumptions you make, the less credible your conclusions are.
Just because a model is simple doesn’t mean it’s not making strong assumptions.
Linearity is a very strong assumption.
Independence is a very strong assumption.
Lognormality, normality, uniformity and triangularity are very strong assumptions.
Relax strong assumptions that are not supported by evidence or argument.
Non-parametric and distribution-free techniques avoid making distributional assumptions.
Bounding and enveloping avoid making assumptions.
Fréchet inequalities and Fréchet bounds avoid making assumptions about dependence (sketched below).
What-if scenarios avoid making assumptions about models.
Discharge untenable assumptions.
Sensitivity analyses can be used to soften an assessment’s reliance on untenable assumptions.
Don’t average together incompatible models or use Bayesian model averaging.
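The Fréchet bounds mentioned above are easy to compute. The sketch below, with hypothetical event probabilities, bounds the probability of a conjunction and of a disjunction using only the marginal probabilities, with no assumption at all about the dependence between the events.

```python
def frechet_and(p, q):
    """Bounds on P(A and B) given only P(A)=p and P(B)=q (no dependence assumed)."""
    return (max(0.0, p + q - 1.0), min(p, q))

def frechet_or(p, q):
    """Bounds on P(A or B) given only P(A)=p and P(B)=q (no dependence assumed)."""
    return (max(p, q), min(1.0, p + q))

# Hypothetical probabilities for two events whose dependence is unknown.
p_spill, p_ignition = 0.1, 0.3
print("P(spill and ignition) lies in", frechet_and(p_spill, p_ignition))
print("P(spill or ignition)  lies in", frechet_or(p_spill, p_ignition))
# Assuming independence would pick single points (0.03 and 0.37) inside
# these ranges; the Fréchet bounds make no such assumption.
```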
Audit the analysis.
Check the correctness of the structure of the model.
Does the model obey the rules of dimensional soundness and do the units of parameters conform?
Is there a population or an ensemble explicitly specified for every distribution?
Are the ensembles conformant among combined distributions?
For instance, is spatial variation never confused or combined with temporal variation?
Are there no repeated uncertain parameters that would fallaciously inflate uncertainty?
Are there no multiple instantiations of probability distributions that underestimate uncertainty?
Are you dividing by, or taking logarithms of, distributions that can take zero (or negative) values?
Do the moments specifying each distribution satisfy the positive semi-definiteness condition?
Do the distributions obey other moment-range constraints (e.g., min ≤ max; variance ≤ range²/4)?
Is the matrix of correlation coefficients positive semi-definite (see the sketch below)?
Do correlations conform with functional relationships (e.g., if C=A+B, C isn’t independent of A)?
Does the structure of the model make sense and conform with scientific knowledge?
Does a food web model have a topology lacking intervals sensu Cohen?
Does the support of each uncertain number jibe with the theoretical range of its parameter?
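Two of these structural checks are simple enough to automate, as the following sketch (with deliberately inadmissible, hypothetical numbers) shows: a proposed correlation matrix must be positive semi-definite, and a variance asserted for a quantity confined to a finite range cannot exceed the square of that range divided by four.

```python
import numpy as np

def is_positive_semidefinite(matrix, tol=1e-10):
    """Check admissibility of a (symmetric) correlation matrix."""
    eigenvalues = np.linalg.eigvalsh(matrix)
    return bool(np.all(eigenvalues >= -tol))

# Hypothetical correlation matrix proposed for three variables.
corr = np.array([[ 1.0,  0.9, -0.9],
                 [ 0.9,  1.0,  0.9],
                 [-0.9,  0.9,  1.0]])
print("correlation matrix admissible?", is_positive_semidefinite(corr))  # False

def variance_consistent_with_range(variance, lo, hi):
    """A bounded quantity cannot have variance exceeding range**2 / 4."""
    return 0.0 <= variance <= (hi - lo)**2 / 4.0

print("variance ok?", variance_consistent_with_range(30.0, lo=0.0, hi=10.0))  # False: 30 > 25
```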
Check the faithfulness of the model’s implementation.
Check the range of the outputs against a range analysis of the inputs.
Check the mean and variance of the outputs against a moments analysis of the inputs.
Was the random seed randomized? Does another value yield qualitatively similar results?
Are range checks satisfied?
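A basic range check can be scripted directly, as in the sketch below: every Monte Carlo realization of a simple hypothetical model must fall within the bounds obtained by an interval analysis of its inputs, or something is wrong with the implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Range check for a hypothetical model: dose = concentration * intake / bodyweight.
n = 50_000
conc   = rng.uniform(0.2, 0.9, n)
intake = rng.uniform(1.0, 2.0, n)
bw     = rng.uniform(60.0, 80.0, n)
dose = conc * intake / bw

lower = 0.2 * 1.0 / 80.0   # smallest numerator over largest denominator
upper = 0.9 * 2.0 / 60.0   # largest numerator over smallest denominator

assert dose.min() >= lower and dose.max() <= upper, "range check failed"
print("Monte Carlo range:", (dose.min(), dose.max()))
print("interval analysis:", (lower, upper))
```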
Answer the question and address the audience.
Make the analysis as transparent as possible to reviewers.
Specify the intention[2] of the analysis.
Spell out all assumptions and indicate their likely qualitative influence on the results.
Explicitly list all variables with their units and specify the uncertain numbers used to model them.
Recount how the correlations and dependencies among variables were modeled.
Explicitly state the model used to combine them, giving pseudocode or actual code where useful.
Say what propagation approach[3] was used.
Describe checks and sensitivity analyses employed.
Express answers (whether distributions, bounds or bounds on distributions) graphically.
Answer the “so what?” questions.
Speak to the specific concerns of the manager or the public.
Explain the import of the results obtained.
Quantitatively characterize the robustness of the conclusions.
Admit what you didn’t know.
Although such honesty can sometimes engender mistrust, it is the only sustainable approach.
Always ask the audience for help and avoid suggesting you know best.
Risk analysts employ many different kinds of mathematical methods, including uncertainty propagation, sensitivity analysis, backcalculation, calibration, model evaluation, decision analysis, compliance determination, visualization, etc. The principles above assume that a particular model structure has already been chosen for the problem, whether by regulatory precedent or fiat. If the risk assessment also includes the selection of the model, additional principles would apply.
[1] The earliest EPA guidance was abstract and written more like the guiding principles one finds in a constitution than the detailed statements common in regulatory guidance today. For instance, the original discussion within EPA of the two-stage framework for cancer risk assessment, which preceded any other agency's consideration, is recounted by Albert et al. (1977). The guidelines were published in their entirety in the Federal Register in 1976.
[2] Project uncertainty in a risk estimate, explore possible remediation strategies, choose among management strategies, calculate a cleanup goal (remediation target), evaluate compliance to some risk goal or criterion, determine how best to allocate future empirical efforts, calibrate or validate a model for use elsewhere, etc.
[3] Probability theory (Monte Carlo simulation or analytical derivation), interval analysis / worst-case analysis, possibility theory / fuzzy arithmetic, probability bounds analysis / Dempster-Shafer theory / evidence theory / random sets, imprecise probabilities, combination or hybrid methods, etc.
Albert, R.E., R.E. Train, and E. Anderson. 1977. Rationale developed by the Environmental Protection Agency for the assessment of carcinogenic risks. Journal of the National Cancer Institute 58: 1537-1541. doi:10.1093/jnci/58.5.1537
APHIS [anonymous], 2001. Risk Assessment Review Standards. Animal and Plant Health Inspection Service, US Department of Agriculture, Washington, DC.
Burmaster, D.E. and P.D. Anderson. 1994. Principles of good practice for the use of Monte Carlo techniques in human health and ecological risk assessments. Risk Analysis 14: 477-481.
Cooke, R.M. 1991. Experts in Uncertainty: Opinion and Subjective Probability in Science. Oxford University Press, Oxford, United Kingdom.
Cullen A.C., H.C. Frey. 1999. Probabilistic Techniques in Exposure Assessment: A Handbook for Dealing with Variability and Uncertainty in Models and Inputs. Plenum Press, New York.
Elith, J., M. A. Burgman, H. M. Regan. 2002. Mapping epistemic uncertainties and vague concepts in predictions of species distribution. Ecological Modelling 157: 313-329.
Finkel, A.A. 1994. Stepping out of your own shadow: a didactic example of how facing uncertainty can improve decision-making. Risk Analysis 14: 751-761.
Firestone, M., P. Fenner-Crisp, T. Barry, D. Bennett, S. Chang, M. Callahan, A.-M. Burk, J. Michaud, M. Olsen, P. Cirone, D. Barnes, W.P. Wood, S.M. Knott. 1997. Guiding Principles for Monte Carlo Analysis. US Environmental Protection Agency, EPA/630/R-97/001, Washington, DC.
Gigerenzer, G. 2002. Calculated Risks: How to Know When Numbers Deceive You. Simon & Schuster, New York.
Hart, A. et al. (eds.) 2004 [forthcoming]. Proceedings of the Workshop on the Application of Uncertainty Analysis to Ecological Risks of Pesticides [tentative title]. SETAC Press, Pensacola, Florida. See especially the chapter “How to detect and avoid misleading results”.
Kahneman, D., P. Slovic and A. Tversky (eds.) 1982. Judgment under Uncertainty: Heuristics and Biases. Cambridge University Press, Cambridge, England.
Kammen, D.M. and D.M. Hassenzahl. 1999. Should We Risk It? Princeton University Press, Princeton, New Jersey.
Kmietowicz, Z.W. and A.D. Pearman. 1981. Decision Theory and Incomplete Knowledge. Gower Publishing Company, Hampshire, England.
Meyer, M.A. and J.M. Booker. 1991. Eliciting and Analyzing Expert Judgment: A Practical Guide. Academic Press, London.
Morgan, M.G. and M. Henrion. 1990. Uncertainty: A Guide to Dealing with Uncertainty in Quantitative Risk and Policy Analysis. Cambridge University Press, Cambridge.
Paulos, J.A. 1988. Innumeracy: Mathematical Illiteracy and Its Consequences. Farrar, Straus, and Giroux, New York.
Saltelli, A., K. Chan and E.M. Scott. 2000. Sensitivity Analysis. Wiley, New York.
Vose, D. 1996. Quantitative Risk Analysis: A Guide to Monte Carlo Simulation Modelling. John Wiley, New York.
Warren-Hicks, W.J. and D.R.J. Moore. 1998. Uncertainty Analysis in Ecological Risk Assessment. SETAC, Pensacola, Florida.