"The issue of what constitutes credible evidence isn't about to get resolved. And it isn't going away.This book explains why. The diverse perspectives presented are balanced, insightful, and critical for making up one's own mind about what counts as credible evidence. And, in the end, everyone must take a position. You simply can't engage in or use research and evaluation without deciding what counts as credible evidence. So read this book carefully, take a position, and enter the fray."
- Michael Quinn Patton, Author of Utilization-Focused Evaluation, 4e
"Donaldson and colleagues have assembled an insightful and timely collection of papers on the complex issues regarding what constitutes credible evidence in evaluation. This important book offers readers the latest thinking on generating actionable evidence for policy and program decision-making from a wide variety of philosophical perspectives. The book is an indispensable resource for evaluation scholars and practitioners on this longstanding and central debate in the evaluation field."
- Robin Lin Miller, Michigan State University & Editor of American Journal of Evaluation
"I found this text to be very interesting and useful in capturing and presenting varying perspectives in the field. There are some very good points and considerations for students and practitioners in this book."
- Michael Schooley, Centers for Disease Control
American Journal of Evaluation, Book Review, June 2009

What counts as credible evidence? It depends. But then, as an evaluator and applied researcher, you probably know that already. The nature of credible evidence and the role of randomized experiments are the two major debates analyzed in this edited volume. What it really achieves, however, is a diversity of perspectives on how context shapes and modifies what counts as credible evidence; the mediation of context and method is a strong, pervasive theme. In examining experimental and nonexperimental approaches as pathways to credible and actionable evidence, the authors demonstrate the robust evidentiary pluralism of evaluation and applied research, examining strengths and limits, alternative methods and definitions, messy complexities, democratic underpinnings, and social benefits, all while appealing to varying justifications for making their case. Their thoughtful, contrasting views create a strong and vigorous interplay between what constitutes credible evidence and what constitutes credible evaluation.
The chapters are a compilation of symposium presentations that took place at Claremont Graduate University in 2006. As such, they are intended as commentaries that recapitulate the key historical, foundational, methodological, and political issues in the ongoing debates about the evidence used to draw conclusions about the impact of programs, policies, and practices. There is a lot packed into short chapters, all of which conclude with compact summaries answering the million-dollar question, "So what counts as credible evidence?" For students and practitioners familiar with some aspects of these thorny debates, the book provides a discerning and fair-minded summary of how to think about evidence and how we came to be where we are today. It offers a way of situating one's practice choices in the larger scheme of things. For practitioners who have closely followed and endured the method wars and ideological debates, the book may smack of yet another rendition in the fray. What differs in this recap, however, is that it diligently serves as a jumping-off point for redirecting the many debates in ways that could be more fruitful and inclusive.
How does the book move beyond the longstanding debates about who's on first? Schwandt offers a theory of evidence for evaluation based on the work of Upshur in academic medicine. A critic of evidence-based medicine's reliance on a limited, hierarchical definition of evidence ordered by methodology, Upshur examined the types of evidence used in clinical practice and conceptualized evidence as integrating the methods used in its production with the contexts of its use. In other words, evidence in practice is a negotiation between method and context. Oddly enough, this mirrors the theme of context that emerges from the authors and pervades the book. As Schwandt describes in chapter 11, the model uses a vertical axis to account for the range of methods used in practice to generate evidence, from causal effectiveness and predictive power, focused on the quantities and properties of an evaluand, to interpretation of meaning, focused on qualities, perspectives, beliefs, attitudes, and values. A horizontal axis accounts for the context of evidence, ranging from individual perspectives to populations and policies. This creates a taxonomy that integrates varying kinds of evidence, approaches, and epistemologies, recognizing that in conducting evaluation a variety of evidence is used to establish a variety of claims. In addition, Upshur's work draws on the structure and typologies of argumentation found in informal logic for using evidence to establish and defend claims. Schwandt's knitting together of such evidentiary and inferential components provides a useful lens for understanding the perspectives of the authors and generates thought-provoking questions.
A second way in which the book moves beyond the who's-on-first debates is Mark's chapter 12, which reframes the critical issues in the book's two main debates. In analyzing the authors' various perspectives on evidence and their underlying assumptions, Mark urges evaluators, for example, to sidestep the entrenched disagreements about randomized experiments and instead focus on the types of evidence needed in different contexts and their appropriate use in drawing actionable conclusions about value. He further suggests refocusing debates on the contextual factors that influence the nature of evidence, such as the nature of the phenomenon, existing knowledge about causal relationships and patterns of change, and the degree of confidence needed for causal inferences given the policy context. In examining the various perspectives on randomized experiments and the ensuing debate about their credibility or superiority, Mark suggests spending more time on questions such as the robustness of study conclusions across different settings. He contends that developing "theories of context" could be more valuable than continued debates about the credibility of various designs: "Imagine debates about the proper uses of evaluation, about whether and when use can be more contextual versus when it must be nonindividuating, and about the corresponding preference for some methods over others" (p. 232). There is much more packed into chapter 12 than can be covered here. Suffice it to say that Mark's chapter, along with Donaldson's epilogue, offers a wide range of questions to pause and ponder in taking stock of one's practice and the broader community of practitioners.
Admittedly, the book is a set of symposium presentations that serve as commentaries on key issues, which intentionally limits its scope in such a vast sea of complex issues. I do, however, want to highlight a few issues that might have been beneficial to include in some chapters, albeit in a concise and abbreviated way befitting the nature of the book. These issues bear on Donaldson's general aim, in the opening and closing chapters, to illuminate a blueprint for an evidence-based global society. He contends that Campbell's vision of an experimenting society has been morphing into an evidence-based global society. Notions of a blueprint and a global society raise expectations of a broader viewpoint than the localized ones presented in some parts of the book.
For example, in setting the stage for his blueprint for an evidence-based global society in chapter 1, Donaldson establishes the magnitude of evidence-based reforms. However, he stops short of summarizing even the basic arguments advanced by converts and detractors, which would have set the debates in the broader context he seeks. This might also have included a brief point about the surge of debates over the cultural and geographic bias of Western-derived evidence, which large task groups like Health Canada, the Centers for Disease Control, the World Health Organization (EURO), and the International Union for Health Promotion and Education claim may not be well suited to health promotion evaluation and policy. Further, he omits a summary of the basic tenets of evidence-based reforms and their failure to live up to their promise. Since 1992, reviews have shown that modifying one's existing practice with a new approach or adopting the methods of evidence-based practice is clearly not just a matter of evidence. His focus on the large number of evidence-based reforms is an ad populum argument: it establishes that the movement is widely embraced, but it cannot establish the veracity or validity of the evidence-based reform he argues for in the book. Again, an outline or a few brief points touching on these key issues might have strengthened his case for a blueprint for an evidence-based global society.
Similarly, in chapter 5 Gersten and Hitchcock provide a useful description, with examples, of the What Works Clearinghouse: its standards for evidence of effectiveness, technical review of studies, summarizing of evidence, and contribution to program evaluation and policy. Here again, the aim of illuminating a blueprint for an evidence-based global society could have been reinforced had the authors briefly pointed out the other "supreme courts" of evidence standards that share the same goal of uncovering the broadest scope of potential sources for systematic reviews. Including these other organizations might have shed light on the global issue of different hierarchies of evidence and synthesis efforts, as well as how the hierarchy of evidence has lost its rallying point over time and given way to a series of alternative hierarchies. A comparative summary, or added resources in the appendix at the end of their chapter, could have highlighted other US and international organizations such as the Best Evidence Encyclopedia, the Comprehensive School Reform Quality Center, the Evidence for Policy and Practice Information and Co-ordinating Centre, the Campbell Collaboration, and the Cochrane Collaboration.
In chapter 2, Christie presents a valuable overview of some concepts in the philosophy of science directly related to evidence, such as the ontological, epistemological, and methodological aspects of theories of knowledge. She points out that evaluators' perspectives on evidence are shaped by training traditions and by assumptions about knowledge that direct the evaluation questions posed and the methods used (i.e., what counts as credible evidence is consistent with a researcher's chosen theory of knowledge). The chapter provides a much-needed lens for understanding and comparing the perspectives of the authors and Mark's ideas for reframing the debates at the end of the book. But also highly relevant is justification (foundationalism), or how the authors' views answer the question, "How sure do we have to be that our evidence and claims correspond to the actual world?" The authors' marshaling of evidence to build their cases entails notions of justification that drive different types and amounts of evidence. This implies that concepts of foundationalism regarding the justification of claims, the warrantability of claims, fallibility, probability, and so on may have provided yet another important lens for understanding the authors. Moreover, supporters of evidence-based reforms are usually equated with foundationalism, so the aim of illuminating a blueprint for an evidence-based global society could have been reinforced by including some brief points about justification and foundationalism.
In stepping back and considering the entire book, clearly "credibility is in the eye of the beholder" (p. 236). What is also clear, however, is the message to forgo the debates about the merits and demerits of randomized experiments over other methods, because they constrain the capacity to advance practice. The historical and collective aspects of evidence presented by the authors show that the gathering, interpretation, and use of evidence are underpinned by unarticulated and unacknowledged extra-evidential considerations, such as preferences, assumptions, uses, and values, that fundamentally shape what constitutes evidence and the way we use evidence to justify claims and support decisions. Beneath the diversity of perspectives, the editors impose a degree of coherence on the whole and offer ways of thinking about contingencies to maximize credibility across contexts, in light of the trade-offs typically faced in practice. In grappling with the issue of credible evidence, the authors reveal at least some semblance of an underlying epistemological similarity. The examination of credible evidence in this book is a call to change the terms of deep-rooted disagreements that have long served as bottlenecks, and a call for a theory of evidence for evaluation that is perhaps long overdue. "Put somewhat differently," says Schwandt in his chapter on developing a practical theory of evidence for evaluation, "concerns about the character of evidence, the ethics of evidence, the contexts in which it is used, and the kinds of arguments we make in which evidence plays an important role signify problems of action. Such concerns invite us to answer questions of how we act together meaningfully" (p. 210). So in reading this edited volume, consider yourself invited, consider yourself called upon.
- Deborah M. Fournier, Boston University