1. Monitoring & Evaluation Overview

What is Monitoring and Evaluation? Why is it important?

"If you don’t care about how well you are doing or about what impact you are having, why bother to do it at all? Monitoring and evaluation enable you to assess the quality and impact of your work, against your action plans and your strategic plan."

M&E in a nutshell

In a nutshell, monitoring and evaluation refer to measuring and learning from a project's efficiency, effectiveness, and impact. The two are related, but not identical:

    • Monitoring: "The systematic collection and analysis of information as a project progresses. It is aimed at improving the efficiency and effectiveness of a project or organisation." It involves:

        • Establishing indicators of efficiency, effectiveness and impact;

        • Setting up systems to collect information relating to these indicators;

        • Collecting and recording the information;

        • A more detailed description can be found here.

    • Evaluation: "The comparison of actual project impacts against the agreed strategic plans. It looks at what you set out to do, at what you have accomplished, and how you accomplished it." [2] It involves:

        • Looking at what the project or organisation intended to achieve – what difference did it want to make? What impact did it want to make?

        • Assessing its progress towards what it wanted to achieve, its impact targets.

        • Looking at how it worked. Was there an efficient use of resources? What were the opportunity costs (see Glossary of Terms) of the way it chose to work? How sustainable is the way in which the project or organisation works?

        • What are the implications of the way the organisation works for its various stakeholders?

        • A more detailed description can be found here.

    • For a simple, accessible overview of monitoring and evaluation, take a look at this presentation by Andy Mosher/CIMA.
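The monitoring steps listed above (establishing indicators, setting up systems to collect information, then recording it) can be sketched as a minimal data structure. This is an illustrative sketch only; the class, field, and indicator names are assumptions for the example, not part of any standard M&E toolkit:

```python
from dataclasses import dataclass, field

@dataclass
class Indicator:
    # A hypothetical indicator: what is measured, in what unit,
    # with a baseline (starting value) and a target (desired value).
    name: str
    unit: str
    baseline: float
    target: float
    records: list = field(default_factory=list)  # (period, value) observations

    def record(self, period, value):
        """Collect and record one observation for this indicator."""
        self.records.append((period, value))

    def progress(self):
        """Share of the baseline-to-target gap closed by the latest observation."""
        if not self.records:
            return 0.0
        latest = self.records[-1][1]
        return (latest - self.baseline) / (self.target - self.baseline)

# Usage: track a made-up school-attendance indicator over two quarters.
attendance = Indicator("school attendance", "% of enrolled children",
                       baseline=60, target=90)
attendance.record("2024-Q1", 66)
attendance.record("2024-Q2", 75)
print(round(attendance.progress(), 2))  # 0.5: half the gap to target closed
```

Even a toy system like this forces the basic M&E questions into the open: what are we measuring, against what baseline, and toward what target.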

Strategies range from extremely quantitative (surveys, data collection, fieldworker measurements) to extremely qualitative (interviews, focus groups, community meetings, participant observation). Many organizations have acknowledged the importance of combining the two approaches; for great guides on how to do this, check out this report by the Governance and Social Development Resource Centre and this guide by USAID.

Current challenges and problems:

    • Developing useful baselines and indicators, especially for qualitative data: To date, quantitative data has received the bulk of monitoring and evaluation attention, largely because numerical indicators are easy to measure and track. As Mary Myers put it in this DFID report, "social change is often too fluid, long-term, and intangible to measure." However, given the sheer amount of funding and effort that goes into less measurable projects (for example, projects focusing on human rights, democracy, press freedom, cultural values, etc.), it is crucial to develop ways to measure their effectiveness. Simple as it may seem, many projects fail to ask the basic questions that drive successful monitoring and evaluation: what are we measuring, how, and why? Although it's geared towards larger organizations with lots of funding and flexibility, this report by the OECD touches upon how to avoid common pitfalls and leverage best practices.

    • Effectively using multiple methods: Although the necessity of triangulating qualitative and quantitative approaches in development work has been widely noted, few initiatives have been able to successfully bridge this gap. While smaller organizations and start-ups have an especially hard time gathering the funding and wide expertise required by mixed-method approaches, larger organizations are often tied to conventional approaches and find it difficult to innovate and combine methods. This paper from the Brooks World Poverty Institute provides a great look at how (not) to combine mixed methods.

    • Developing a common set of indicators and conducting comparative studies: Instead of sharing data, comparing techniques and outcomes, and developing common baselines and indicators, organizations have been using fragmented, one-off systems for monitoring and evaluation, leading to duplication and failure to learn from others' mistakes. These challenges were identified and discussed in this infoDev report, which also has great resources and analysis of challenges faced by M&E practitioners.

    • Getting beyond self-reported data: At present, few organizations are able (or willing) to have outside organizations monitor and evaluate their projects, which often leads to biased reports motivated by the desire to appear successful and garner more funding. For a great analysis of self-report bias in organizational settings, take a look at this report by Stewart I. Donaldson and Elisa J. Grant-Vallone.

    • Defining target audiences: While it is easy to determine target audiences in certain campaigns (for example, a campaign focusing on distributing medication to new mothers in a specific region), it is much more difficult to define a target audience for initiatives that have wider effects. Take, for example, a broadcast campaign giving maternal health tips and information. It would be much more difficult to get a concrete idea of the audience base for this type of diffuse campaign.

    • Measuring long-term effects: Most development projects aim to create long-term change, and many of the desired changes are slow and gradual. Unfortunately, donor pressure and the need for immediate data often make it impossible to track these long-term changes.