July, 2015, Dr Paul R Kelly, Lecturer in Organisation Studies, Essex Business School, University of Essex, UK
p.kelly@essex.ac.uk | Originally published at "Politics and Ideas" site: http://www.politicsandideas.org/?p=2771
Welcome attention is now being paid to how evaluations can better engage with and contribute to policy processes, beyond dissemination alone. One example of this trend is Aquilino and Estevez’s work on influencing public policy assessments (2015). To add to this kind of work, I would also like to pose some questions about the relation between power and knowledge in the growing landscape of evaluation, and in our visions of bridging evaluation with policy and with other cultural sites.
Evaluation knowledge has penetrated the diverse worlds of international development in recent decades. We now design evaluations for before, during and after new policies or programs. We use baseline studies, monitoring, mid and final evaluations. We have monitoring, monitoring and evaluation (M&E), monitoring, evaluation and learning (MEL), monitoring, evaluation, learning and accountability (MELA, or MEAL), value for money (VFM) and transparency and accountability (T&A). And we must not forget our long-standing evaluations of financial performance, organisational performance, personal performance, efficiency, and effectiveness.
We use quantitative and experimental methodologies such as control groups, randomized control trials (RCTs), before and after trials, and various statistical approaches. Qualitative approaches include participatory action research, appreciative enquiry, most significant change, and outcome mapping. There are over 100 specific technical approaches today, some say nearer to 1,000.
So, in a nutshell, with our diversity of functions, timings, methods, tools and approaches, one might presume that we are in good shape to evaluate our policies and our projects. But what if more knowledge is not the answer to social problems? What if knowledge is also part of our problems?
How can knowledge be part of our evaluation problem, part of our policy conundrum? Surely, any new knowledge, if “robust” and relevant is useful? But there are at least three difficulties with this simple view.
Firstly, any knowledge is culturally constrained. There is a long history to studying knowledge and power, and important texts open critical doors, including Kuhn’s seminal 1962 book “The Structure of Scientific Revolutions”. This book explored how different scientific communities called upon different knowledge, evidence and “facts” to argue their cases.
Foucault, in books such as “Discipline and Punish” (1977), went much further, arguing that power is not just held by single people such as chiefs, evaluators, politicians or policy directors, but is actually diffused across society, part of our facts, truths, or our “regimes of truth”. Issues such as auditing, evaluation or gender identity, for example, are actually composed of these knowledge forms. This knowledge/power relationship involves whole institutions and permeates how we perform our everyday work. Any claims to knowledge, evidence, or data integrity involve criteria that legitimate some forms of knowledge and erase others.
Secondly, contemporary researchers have argued that good policy might in fact be unimplementable (Mosse, 2004). The argument runs that the policy world and policy frameworks are not best suited to understanding complex practices. Indeed, social development may be as much about disjunctures and mess as about the power to order, manage or rationally plan (Lewis and Mosse, 2006). Our plans might not reflect practice on the ground. Others have suggested that particular kinds of knowledge lend themselves to ordering and controlling. We then use our evidence in “ontological politics” (Law, 2004) – in other words, in fights over truths, values and expertise. This raises questions about democratic voice and challenges the view that technical knowledge alone will solve our social problems.
Thirdly, critical studies of policy culture break down the idea that all policy environments are politically neutral and rational. Sumner (2006: 648) calls for more alternative voices around the complexities of policy and practice, noting “policy is shaped by political infrastructure”. Beeson and Islam (2005: 197) argue that policies “evolve independently of their intellectual merit and empirical credibility.” In environmental work, various authors have argued that we need to know how social, economic, scientific and political knowledges mix, merge and dominate each other in policy-making processes.
These three problems with knowledge are practical. Can marginalised groups get, read or edit evaluations? Or policies? How do evaluations and policies seek to order practices on the ground? Answers require a close scrutiny of policy and evaluation cultures.
Much of my concern about knowledge comes from a critical angle, borrowing from Foucauldian work, post-structural analysis (not the best friend of modernisers), or from Social Studies of Science, Technology or Accounting. Such work can be insightful, problematic, and progressive, but it can also lead to a kind of critical paralysis, a numbness that leaves us in a new world of indecision and inaction. How can we move past such a “knowledge” impasse? There is no one answer to this, but there are many avenues for exploration, experimentation, re-design and perhaps re-knowledging.
In conclusion, we have glimpsed the paralysing effect that a critique of power and knowledge has on well-intentioned evaluation and policy-making. But these are issues we must understand; we cannot ignore these problems if we seek enlightened progress. For now, let’s pause, and reflect on three final questions.
· Is my knowledge partial?
· Is it primarily related to my field and institution, my silo?
· And importantly, does knowledge in my institution marginalise anyone else, particularly any vulnerable groups?
For this article to have any positive effect, these questions should take us to an uncomfortable place. And this new place is a great vantage point for seeking alternative evaluation and policy knowledge.
Aquilino, N. and Estevez, S. (2015). Lessons Learned and Challenges on influencing public policy impact assessments in Latin America. Buenos Aires: CIPPEC. http://www.vippal.cippec.org/wp-content/uploads/2015/04/Serie-Think-tanks-4_Lecciones-aprendidas-y-desaf%C3%83-os-sobre-la-incidencia-en-pol%C3%83-ticas-p%C3%83%C2%BAblicas-de-las-evaluaciones-de-impacto.pdf
Kuhn, T. S. (2012). The Structure of Scientific Revolutions. University of Chicago Press. (Originally published 1962.) https://en.wikipedia.org/wiki/The_Structure_of_Scientific_Revolutions
Foucault, M. (1977). Discipline and Punish: The Birth of the Prison. Vintage. https://en.wikipedia.org/wiki/Discipline_and_Punish
Mosse, D. (2004). Is good policy unimplementable? Reflections on the ethnography of aid policy and practice. Development and Change, 35(4), 639–671. http://onlinelibrary.wiley.com/doi/10.1111/j.0012-155X.2004.00374.x/abstract
Lewis, D., & Mosse, D. (2006). Encountering Order and Disjuncture: Contemporary Anthropological Perspectives on the Organization of Development. Oxford Development Studies, 34(1), 1–13. http://personal.lse.ac.uk/lewisd/images/ODS-Lewis&Mosse.pdf
Law, J. (2004). After method: Mess in social science research. Routledge. http://www.routledge.com/books/details/9780415341752/
Sumner, A. (2006). What is Development Studies? Development in Practice, 16(6), 644–650. http://www.jstor.org/stable/4029921?seq=1#page_scan_tab_contents
Beeson, M., & Islam, I. (2005). Neo-liberalism and East Asia: resisting the Washington consensus. The Journal of Development Studies, 41(2), 197–219. http://www.tandfonline.com/doi/abs/10.1080/0022038042000309214?src=recsys#.VaFTrEVWlpY