Background and Purpose: This FAQ is a “live” Google document created in response to requests for a dedicated space for M&E colleagues across the Initiative to ask questions and discuss all things M&E-related. Spotlight Initiative's M&E Global Team will regularly update this page with frequently asked questions and answers. Please feel free to ask questions below or propose content for this FAQ!
How to use the M&E FAQ?
Step 1: Check if your question is answered below.
Step 2: If your question has not been answered, feel free to ask us via the "Ask us your question!" box below. All questions are welcome!
Step 3: The Spotlight M&E Global Team will answer your question within a few days via email. And then, with your permission, we'd be happy to add your question and answer to the FAQ below.
Note: Should your question be urgent, please send an email directly to Spotlight Initiative's Global M&E team by emailing us at natalie.raaber@un.org, michelle.unda@un.org and briana.yerbury@un.org.
Q: Where can I find key M&E documents, tools and resources?
A: Spotlight Initiative houses its guidance and resources to support Spotlight programme teams' day-to-day work in the Initiative's Virtual Library. All M&E guidance and resources – including the three key foundational documents listed below – can be found in the Virtual Library.
Spotlight Initiative M&E Strategy, which presents the Initiative-wide (global) approach to M&E at Spotlight Initiative.
Programme Guidance: Participatory Monitoring, Evaluation and Reporting, which includes a definition of participatory monitoring, evaluation and reporting (PMER) as well as key considerations, tools, and additional resources to support successful implementation.
The newly launched (in October 2025) Spotlight Initiative Results Framework. Developed through a year-long, evidence-based, participatory process, this revised Initiative-wide results framework offers Spotlight country and regional programmes a menu of indicators (and output and outcome statements) from which to build their individual programme results frameworks. Please note that the regional-level framework will be posted soon (by December 2025).
In addition to the above, we hope the following M&E guidance and related templates are helpful, as well:
How to develop a Spotlight Initiative Programme Results Framework. This note is intended to support incoming Spotlight programmes in developing their programme results frameworks. Programmes are encouraged to use this guidance – together with the newly revised Initiative-wide Results Framework – as they develop their programme results framework.
How to use and fill out the results framework template in your Programme Document. This guidance note provides step-by-step instructions on how to develop your programme results framework for your Country (or Regional) Programme Document.
Baseline Studies and Sample Terms of Reference - Tips and Guidance. This document offers an overview of how to conduct a baseline study, which establishes a programme's starting point and enables progress (against a goal or target) to be tracked well. The guidance covers recommended steps, approaches, and key considerations. It also includes a sample Terms of Reference for recruiting consultants to conduct your baseline study.
Q: I have recently joined Spotlight Initiative in an M&E role. How can I link up with and speak to other M&E colleagues across the Initiative?
A: Great question! You can connect via:
COSI: The Spotlight Secretariat created the COSI (the Community of Spotlight Initiative) Email Group to foster learning and exchange, share evidence-based resources, and discuss challenges and successes in our collective efforts to end violence against women and girls. To ask any questions related to the Initiative and its work (including any M&E related questions) please feel free to email the group at: community@spotlightinitiative.org
WhatsApp: You can join the Global M&E WhatsApp Group by emailing Natalie Raaber (natalie.raaber@un.org) and Michelle Unda (michelle.unda@un.org) and asking to be added.
Email the M&E Global Team: Feel free to email the M&E Global Team anytime with any M&E related questions: briana.yerbury@un.org, natalie.raaber@un.org and michelle.unda@un.org
Or, visit this M&E FAQ and ask your questions here!
Q: I've recently joined Spotlight Initiative as M&E Officer or Spotlight Programme Coordinator and I would like to know where I can find information on indicator reporting?
A: At Spotlight Initiative, all information on reporting – including on indicator reporting – can be found in this folder in the Virtual Library. Feel free to look through the reporting guidance pack (in the same folder). The Guidance Pack includes important reporting information, deadlines, and ready-to-use templates. Additionally, please review the methodological notes associated with each indicator in your results framework, and align your approach to measurement, data collection, and reporting accordingly.
Spotlight Initiative programmes report annually against the indicators in their respective programme results frameworks (indicator reporting). Programmes do so through Spotlight Initiative's online reporting platform, access to which is granted to each programme in January of the reporting year. Please email michelle.unda@un.org with any questions or to request a training on the platform.
Q: Where can I find methodological notes for the indicators in Spotlight Initiative's Results Framework?
A: Methodological notes are currently being developed for all indicators in the newly revised Spotlight Initiative-wide Results Framework. Methodological notes define the indicator, detail the approach to measurement, and lay out reporting and disaggregation requirements. These notes can be found in the Virtual Library, organized by Pillar. They are being uploaded to these folders on a rolling basis (with all notes expected to be completed by December 2025).
Please note that Spotlight Initiative 1.0 (the first phase of the Initiative, which ran from 2017-2023) featured a 1.0 global results framework (now archived) and an accompanying set of methodological notes. We’ve archived these as well, but please feel free to review them if you have indicators that align, while we finalize the new methodological notes.
Q: Are there resources to help Spotlight programmes measure their contribution to improved capacity to prevent and respond to violence against women and girls?
A: Yes! Capturing Spotlight programmes’ contributions to improved capacity is often quite difficult. It tends to require looking beyond quantitative indicators to understand more nuanced, qualitative shifts over time. Improved capacity may present as improved confidence or stronger leadership, strengthened coalitions and alliances, improved ability to hold duty bearers accountable to their commitments (including financing commitments), or improved ability to develop or influence laws, policies, and action plans.
Capturing these types of changes calls for rights-based, context-specific data collection tools and approaches – for example, pre- and post-surveys of programme participants (though these have their temporal limitations), or broader community- or population-based surveys (though these tend to be costly and difficult to implement, and can carry risks of harm if not conducted ethically and in line with international standards).
We've recently launched a revised Spotlight Initiative Results Framework, which features indicators to capture shifts in capacity. Concurrently, we've developed revised methodological notes, which detail the approach to measurement for the indicators capturing shifts in capacity.
Additionally, BetterEvaluation provides a helpful list of resources on how to think about capacity development more broadly and how to evaluate programmes’ contributions to it. To complement this, we’ve collected a list of additional resources, and hope these are helpful:
AWID has a feminist wiki on M&E and a series on capturing change in women’s realities and their rights (not focused specifically on the capacity of organizations themselves, though).
Gender at Work’s analytical framework may be helpful in thinking through the various levels at which change needs to happen (at the institutional, individual, formal, and informal levels, and at their intersections) in order to shift power and advance gender equality.
UNFPA + UNICEF with Drexel University developed the ACT Framework (under the joint programme on FGM), which offers a framework to measure changes in social norms related to FGM. Though not specifically focused on measuring shifts in capacity, the framework can help us think through change in gendered norms more broadly.
Q: How do we develop milestones and targets for our Spotlight programme results framework?
A: For Spotlight Initiative programmes, milestones are annual goals that help monitor progress toward achieving your target (the intended endline achievement of the programme). As a general principle, programmes should set ambitious yet realistic milestones and targets. Milestones and targets should reflect what your Spotlight programme can reasonably achieve each year (and overall), taking into account the context, strategy, programme duration, and available resources.
Setting milestones and targets (and indeed developing your programme results framework) should be a collaborative exercise engaging all RUNOs under a Spotlight programme.
Here’s how to approach setting your milestones and targets:
Start with the Baseline Data: Milestones should be informed by the baseline values established for each indicator. If baseline data is not available at the time of developing your results framework, milestones may be marked as “TBD” and finalized after the baseline study is completed. The baseline study should provide recommendations for both annual milestones and end-of-programme targets. You can review the Baseline Studies and Sample TOR - Tips and Guidance for more help.
Consult Methodological Notes: As mentioned, the Initiative is developing methodological notes for each indicator in Spotlight Initiative's Results Framework. These provide detailed guidance for each indicator, including how it should be measured, what data is needed, reporting requirements, as well as disaggregation requirements. Please feel free to draw on these notes to help you think through a given indicator's milestones and targets. If your programme results framework includes "unique" indicators – those not taken from the Initiative's Results Framework – please refer to those indicators' methodological notes (as is relevant / should they exist).
Align with your programme's budget and workplan: As noted above, milestones should reflect what is feasible, given your programme’s timeline, duration, and financial resources, and corresponding interventions.
Adjust as Needed: Targets – and by extension, milestones – are not static. They may need to be adjusted based on monitoring data collected during implementation. Any revisions to your programme results framework (and attendant milestones and targets) should follow proper review and approval procedures (including approval by your national or regional steering committee) to maintain transparency and integrity.
Use Supporting Tools: In addition to the methodological notes, Spotlight teams can consult:
The Learning Centre for tools and guidance on developing a monitoring and learning plan
IFRC’s Project/Programme Monitoring and Evaluation Guide for practical insights on planning and managing milestones and targets
Q: What are effective ways to measure changes in prevalence or prevention outcomes in Spotlight programmes?
A: National surveys like the DHS (Demographic and Health Surveys) or MICS (Multiple Indicator Cluster Surveys) can offer useful data. However, they are typically conducted periodically and at the national level, making it difficult to establish contributions / attribute changes in these surveys' data to Spotlight Initiative's interventions across several districts or communities.
To better assess changes and programme-specific contribution and impact, some programmes opt to conduct pre- and post-programme surveys in the areas where Spotlight Initiative is implementing prevention programming. These can help capture shifts in attitudes, behaviors, or experiences among participants and affected populations. Population-based surveys (of a community in which your programme is working) can be considered as well, but they are challenging (and quite costly) to roll out safely and ethically (ensuring no harm is done).
As a general rule: primary data collection on violence against women and girls and harmful practices must, as noted, be done safely and ethically, ensuring no harm is done.
Spotlight Programmes should:
Embed robust ethical and safety protocols in the design and implementation of any survey or primary data collection process.
Conduct a thorough risk assessment to determine whether the benefits of the data collection outweigh potential risks to participants or communities.
Prioritize “do no harm” principles and participant safety at all times, including through strict confidentiality, informed consent, appropriate referral pathways, and comprehensive enumerator training.
In short, while pre- and post-programme surveys (or population-based surveys) can generate incredibly valuable insights, they must be carefully planned, ethically justified, and feasible given the context. Where risks cannot be adequately mitigated, alternative approaches – such as qualitative assessments, outcome harvesting, or secondary data analysis – may be more appropriate.
Q: Can I have support on the development of our programme results framework?
A: Absolutely! We've developed a guidance note on how to develop a Spotlight Initiative Programme Results Framework. Please use it as you develop your programme results framework.
Please draw indicators for your programme results framework directly from Spotlight Initiative's Global (or Initiative-wide) Results Framework, ensuring that you include all core indicators (highlighted in pink, and as detailed in the framework itself).
We’ve also developed a Spotlight Initiative 2.0 Programme Results Framework Template (to be used in your Programme Document), which provides step-by-step guidance and a layout tailored for Country (and Regional) Programme Documents.
If you need further support, the Secretariat is happy to help. Feel free to reach out to your programme focal point or contact michelle.unda@un.org and natalie.raaber@un.org directly.
Q: Are programmes required to work across all Pillars (Pillars A-D) of the Initiative's comprehensive model, and reflect this in their programme results framework?
A: Yes! Spotlight Initiative champions an evidence-based, comprehensive approach to ending violence against women and girls – working simultaneously and sequentially across all four pillars or outcome areas (Pillars A–D).
Why? Evidence shows that comprehensive approaches – which work to promote progressive laws and policies, strengthen institutions, and improve the GBV data ecosystem, while also expanding access to quality services, promoting gender-equitable social attitudes, behaviors, and norms, and engaging deeply with civil society (particularly women’s rights organizations) – are significantly more effective in preventing and responding to violence against women and girls than siloed or single-pillar approaches.
The final evaluation of the first phase of Spotlight Initiative reinforced this finding, demonstrating proof of concept and confirming that the Initiative’s comprehensive model delivers meaningful results for women, girls, and communities.
Your Spotlight programme (and its accompanying programme results framework) should reflect this model – working across all four pillars of the Initiative, and tracking progress and results across the same.
Q: Are there any indicators that capture quality dimensions, not just quantity or existence?
A: Yes! The Initiative-wide Results Framework includes many indicators (primarily at the outcome level) that assess quality dimensions.
For example, indicators look at:
Whether multi-stakeholder coordination platforms include diverse representation
Whether national action plans (NAPs) are costed and include an M&E framework
Whether legislation is aligned with international human rights standards and developed with input from women’s rights organizations
Not just whether a law exists, but whether it’s implemented and enforced
Not just whether services exist, but whether they meet core standards (e.g. safety, accessibility, confidentiality, survivor-centeredness)
Whether prevention programmes are evidence-based, age- and culturally-appropriate, and designed with community input
These quality elements help us assess whether interventions are rights based and likely to contribute meaningfully and sustainably to reducing violence against women and girls.
Q: Our donor has requested that we include a set of their own indicators in our Spotlight programme results framework. Is this possible?
A: We strongly discourage this. Where possible, we would advise against including any custom indicators in your results framework. Instead, please try to draw indicators from the Initiative's Results Framework. We advise a discussion with the donor, in which you may share the Initiative's Results Framework and see whether an indicator among the 70+ available would suit. You could also propose that the change instead be captured through narrative reporting. Narrative reporting often provides an opportunity to document progress in a more nuanced and meaningful way, and can help promote deeper learning and adaptation.
Q: What if we want to track progress through an indicator not in Spotlight Initiative's Results Framework (the Initiative-wide framework)?
A: Please see the previous response regarding donor-requested indicators, as the guidance is the same. As noted above, we strongly encourage Spotlight programmes to select indicators from the Initiative's global results framework (which is comprehensive) or to report such changes through narrative reporting. Narrative reporting often allows for a more nuanced and meaningful understanding of progress, particularly when working in emerging or context-specific areas.
That said, Spotlight programmes may include additional indicators in their results framework if needed. In such cases, teams should develop methodological notes to accompany these indicators, using the guidance on developing methodological notes.
Q: What should we do if we can’t complete the Programme Results Framework template because we don’t have all the (baseline) data available during the design phase?
A: That’s expected and is OK! Your programme's situation or context analysis (presented in your programme document) helps outline the context in which the programme operates, ensuring the programme focus and interventions are responsive, and guiding the selection of indicators for your Results Framework. This step is essential for identifying data needs, but it doesn’t mean all baseline data will already be available.
Some indicators will rely on existing (secondary) data, while others may require new data collection (primary sources). Where existing data is unavailable, incomplete, or of poor quality, the programme will need to collect baseline data as part of a baseline study.
If data is not yet available:
You can enter “0” or “TBD” as the baseline.
Include a brief note explaining how and when the data will be collected (e.g., through the baseline study).
Make sure these data needs are reflected in your baseline study Terms of Reference.
The baseline study is your opportunity to gather the initial values for all selected indicators using a mix of quantitative and qualitative methods. Baselines establish the starting point for your programme, help inform milestone and target setting, and support the measurement of progress and impact over time.
And remember: even if some areas of work can’t yet be quantified or tracked through an indicator, you can still describe your starting point in your annual narrative report. The narrative provides valuable space to contextualize progress, challenges, and qualitative insights that may not be captured through indicators.
For further support, please consult the Baseline Studies Tips and Guidance document, which outlines steps such as setting up the Results Framework, designing the data collection plan, ensuring disaggregation, and adhering to ethical standards. It also includes a sample ToR to help guide your process.
Q: Can we revise our Programme Results Framework during implementation?
A: Yes! Your baseline study will likely reveal data gaps and collection challenges, as well as highlight issues of coherence or relevance within your current framework when considered alongside your situation or context analysis. This is completely normal – baseline study findings often lead programmes to refine or adjust their results frameworks.
For example, you may discover that some indicators cannot realistically be measured, and it’s better to drop or replace them. Or you might identify new areas of measurement that warrant adding an indicator. These adjustments are part of maintaining a framework that is both relevant and feasible for your context.
Ideally, though, any revisions should focus on the indicators, rather than the outcome or output statements of your results framework. We recommend aligning any changes with the Initiative-wide results framework, which provides a set of tested, evidence-based indicators (which come with methodological notes). Selecting indicators from the Initiative-wide framework also enables aggregation of results across Spotlight programmes and helps the Initiative demonstrate global impact.
Please remember that any changes to your programme results framework must be formally presented to your National (or Regional) Steering Committee for review and endorsement, and then shared with the Spotlight Initiative Global M&E Team (emailing michelle.unda@un.org and natalie.raaber@un.org).
And finally, remember that your annual narrative report is an excellent place to showcase progress, and results that may not be directly measured through an indicator, but are still significant contributions to ending violence against women and girls.
Q: Is it necessary to have a baseline for each indicator in my results framework?
A: Yes! A baseline value is required for all indicators in your programme results framework. Collecting baseline data is critical because it provides a clear starting point (before programme interventions are rolled out), and allows for the measurement of progress and impact over time.
Without baseline data, it would be difficult to determine whether a project or programme has achieved its goals or how effective the interventions have been. Additionally, baseline data helps identify gaps or areas in need of improvement, allowing for better decision-making and more targeted interventions. Ultimately, baseline values help ensure that Spotlight programmes are responsive to the context in which they are working, and their interventions' impacts are measurable, and aligned with the desired outcomes.
Consider the following example data for a fictional GBV counseling service:
Before the Project (Baseline): It took 11 months for survivors of GBV to access counseling services
After the Project: It took less than 1 month for survivors of GBV to access counseling services
Change: Counseling service wait time had reduced by 10 months
A single measurement, taken after your programme, would reveal that it takes less than one month for a GBV survivor to access counselling services. Without a baseline, you would not know how long it had taken before and therefore could not measure change (including improvement or regression).
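To make the arithmetic above concrete, here is a minimal illustrative sketch (the function name and values are hypothetical, drawn from the fictional counselling-service example, not from any Spotlight tool):

```python
def change_in_wait_time(baseline_months, endline_months):
    """Reduction in wait time (in months) between baseline and endline.

    A positive result indicates improvement (shorter wait);
    a negative result would indicate regression (longer wait).
    """
    return baseline_months - endline_months

# Fictional example: 11 months at baseline, 1 month after the programme
baseline = 11
endline = 1
print(change_in_wait_time(baseline, endline))  # → 10 (months reduced)
```

Without the recorded baseline of 11, the endline measurement of 1 month on its own could not tell you whether anything changed.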
In the case of Spotlight Initiative programmes, the baseline serves three key purposes:
1) it is the source of information for identifying milestones, targets, and work plans;
2) it is the basis for the monitoring and evaluation system; and, as such,
3) it determines the measurement of the Spotlight Initiative’s contributions and impact for all stakeholders.
In some cases, setting a baseline of “0” is valid, especially when a programme is starting from scratch or introducing a completely new intervention. If baseline data is missing or incomplete at the outset, please refer to the guidance above on how to set baselines, milestones, and targets in such scenarios.
Finally, please consult the methodological notes for the indicators in the Initiative's Results Framework. These provide helpful information on the approach to measurement, data collection and calculation, and reporting requirements.
Q: How do I calculate baselines for my indicators?
A: We encourage you to hire experts to conduct a baseline study at the start of your programme. The baseline study should detail the baseline values (and potential gaps).
In general, however, baselines can be calculated using a combination of secondary and primary data sources:
Secondary data refers to existing information, such as national statistics, institutional records, previously published studies, or information found in government databases. If relevant and reliable baseline data already exists, your (or the baseline study consultant's) task is to gather and validate it, ensuring that it's a sound, quality baseline source.
Primary data is required when secondary data is unavailable, incomplete, or of poor quality. This means you (or the consultant you hire to conduct the baseline study) may need to collect new information through methods such as surveys, interviews, focus group discussions, or community consultations. As discussed above in another FAQ, depending on the intervention you're implementing and what you are tracking through an indicator, it may be appropriate to set the baseline to “0.”
Start by identifying what data already exists and where the gaps are. This will help determine whether additional data collection is needed to establish an accurate and useful baseline. And feel free to visit the baseline study tips and guidance document here, as well as this PowerPoint, which provides an overview of baseline studies.
Q: How do I evaluate the quality of a baseline study inception report?
A: To evaluate the quality of a baseline study inception report, assess whether it clearly outlines (with adequate detail) the study’s objectives, methodology, sampling strategy, data collection tools, data disaggregation, and ethical considerations. The inception report should also include a realistic timeline and work plan. Check for clarity, coherence, feasibility, and adherence to Spotlight Initiative’s technical principles (Do No Harm, Leave No One Behind, Survivor-Centred, and Participatory) throughout. Please feel free to refer to this baseline study inception report quality assurance checklist – we hope it's helpful!
Q: How do I set milestones and targets?
A: Milestones and targets should be based on data collected through a baseline study, which establishes the current state of each selected indicator. Once this baseline is known, annual milestones can be set to reflect realistic year-on-year progress, and a final target can be set as the goal your programme aims to achieve by its end.
To set them effectively:
Conduct a baseline study: This should happen during the inception phase, using both secondary data (e.g., government databases, previous studies) and primary data (e.g., surveys, focus groups, interviews), where needed.
Analyze baseline data: Understand the starting point for each indicator and assess what level of progress is realistic and ambitious over the programme’s timeline.
Use a participatory approach: Work with stakeholders, including government counterparts, civil society, and communities to validate baseline data and collaboratively determine milestones and targets.
Follow methodological notes: Each indicator in the Spotlight Initiative results framework has a corresponding methodological note that explains how to measure it, what disaggregation is required, and how to calculate progress.
Well-set milestones and targets ensure that your programme can monitor progress meaningfully, demonstrate change, and make evidence-based adjustments when needed.
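As a purely illustrative sketch of the logic above: one simple starting point is to space milestones evenly between the baseline and the final target, then adjust them with stakeholders to reflect realistic year-on-year dynamics (progress is rarely linear in practice). The helper function and figures below are hypothetical, not part of any Spotlight template:

```python
def linear_milestones(baseline, target, years):
    """Evenly spaced annual milestones from a baseline toward an end-of-programme target.

    Returns one milestone per year; the final milestone equals the target.
    Intended only as a first draft to be refined through participatory review.
    """
    step = (target - baseline) / years
    return [round(baseline + step * (year + 1), 1) for year in range(years)]

# Hypothetical example: baseline of 20 service points meeting core standards,
# target of 60 over a 4-year programme
print(linear_milestones(20, 60, 4))  # → [30.0, 40.0, 50.0, 60.0]
```

A first-year milestone of 30 may then be revised downward (e.g., to account for inception-phase start-up) and later years upward, provided the revised trajectory still reaches the endorsed target.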
Q: Where can I find templates for reporting?
A: Spotlight Initiative programmes regularly produce narrative reports, including an annual narrative report (due to the Spotlight Secretariat on 1 March of the following year for review) and a final (cumulative) report, covering the cumulative achievements of your programme from its start to its end date.
You can find all reporting guidance and templates, as well as programmes' previous reports for learning, in the Initiative's Virtual Library here.
Q: How often do I need to share my results with the global Spotlight Initiative team?
A: Programmes are required to submit annual results-based narrative reports that demonstrate the programme’s contributions to change, using both narrative and indicator data. The Spotlight Secretariat supports programmes with official (annotated with guidance) reporting templates, a guidance pack (which details key reporting deadlines and features resources for results-based reporting), and annual reporting webinars (recordings from past years can be found here). The Secretariat also maintains a Reporting FAQ (like this FAQ, but focused on reporting). Feel free to visit it for more detailed information on reporting (and to ask us your reporting questions).
Your programme's Programme Coordination Team (and specifically the Spotlight Coordinator) is responsible for consolidating reporting information from all participating RUNOs, ensuring quality and coherence. RUNOs are responsible for collectively and collaboratively inputting into all reports.
Key reporting milestones:
Your narrative reports should be shared with the Spotlight Initiative Secretariat by 1 March for review and feedback.
Designed, copy-edited, final versions of reports must be endorsed by the Resident Coordinator (RC) and submitted to MPTFO by 31 May. Please share this version with us at the Secretariat too so we can provide another quick review; we recommend submitting it to MPTFO a week prior to the 31 May deadline.
Please share your draft reports – as you're writing them, or after submitting the draft to us on 1 March – with key programme stakeholders, including your Civil Society Reference Group and civil society and government partners, for review and validation prior to finalization. Reports should highlight programme results and reflect alignment with core principles, including Leaving No One Behind (LNOB) and UN Reform.
Beyond annual reporting, programmes are expected to conduct regular monitoring activities and ensure coordination and quality assurance across all reporting stakeholders. Workshops, data entry meetings, and technical working groups have proven to be effective mechanisms for supporting this collaborative process.
Q: Our Spotlight programme donor has set aside funds to evaluate the programme. Is the programme also required to set aside funds for evaluations, and are we expected to conduct a separate exercise?
A: Evaluations and assessments at Spotlight Initiative are a critical dimension of the programme cycle. Evaluations assess the relevance, effectiveness, efficiency, and sustainability of interventions and support provided and, in so doing, contribute to learning, accountability, and evidence-based decision-making across the Initiative.
Spotlight Programmes are strongly encouraged to budget at least 1% of their total programme budget for evaluative exercises, including mid-term reviews or assessments and the end-line or final evaluation of your Spotlight programme. These evaluations should be commissioned directly by the Spotlight programme. Independent and impartial evaluators – with experience conducting gender-responsive evaluations and expertise in evaluating efforts to end violence against women and girls and harmful practices – should be hired.
Now to your question! Donors may also commission evaluations or other assessments of Spotlight Initiative programmes to meet their specific requirements. In such cases, it is important that Spotlight programme teams are deeply involved – providing input into the evaluation’s Terms of Reference and ensuring representation on the donor-commissioned evaluation’s reference group. Active participation helps maintain alignment with programme realities, ensures methodological rigor, and promotes shared ownership of findings and recommendations.
It remains equally important that Spotlight programmes commission and manage their own evaluations whenever possible. When financial constraints make this unfeasible, meaningful and collaborative engagement with the donor-commissioned evaluation becomes even more critical to ensure that programme perspectives, learning priorities, and accountability needs are fully reflected. For more information on evaluation at Spotlight Initiative, please visit the Initiative's Global M&E Strategy.
Q: What is Participatory Monitoring, Evaluation and Reporting (PMER)? How can I implement a participatory approach to M&E?
A: Participatory Monitoring, Evaluation, and Reporting (PMER) is an inclusive, rights-based approach that actively involves a range of stakeholders, including community members, in deciding what is monitored and evaluated, how it’s done, and how results are interpreted and used. It reflects the core principles of the Spotlight Initiative: survivor-centredness, do no harm, leaving no one behind, and a human rights-based approach.
PMER is already integrated into Spotlight Concept Notes and Programme Documents as a core element of programme quality. Implementing it effectively requires a commitment to working differently: investing time, resources, and effort into building meaningful participation and local ownership throughout the M&E and Reporting cycles.
From the inception phase of programme implementation, it is recommended to form a PMER working group. More advice and practical guidance can be found in the PMER Guidance Note available in the Virtual Library.
Q: What’s the added-value of a PMER approach?
A: Participatory monitoring, evaluation, and reporting approaches (PMER) are crucial to foster the participation and inclusion of key stakeholders, including community members, in M&E activities and programming as a whole. PMER enables Spotlight programme stakeholders to engage in the monitoring, evaluation and reporting phases of the programme cycle, and fosters participation more generally (across all programme phases).
By regularly listening to and learning from the perspectives and experiences of rights holders (those meant to benefit from Spotlight programme interventions), participatory approaches to monitoring, evaluation and reporting surface a deeper, more nuanced understanding of the specific context in which Spotlight programmes work. PMER also strengthens local ownership of programme activities and, as a result, the sustainability of results. It can also feed critical feedback into programme strategies, supporting the identification of lessons learned and promising practices.
By generating greater insight into the underlying structural factors that perpetuate discrimination, bias, and inequality, participatory approaches to monitoring and evaluation also enable a more comprehensive analysis of the theories of change underpinning programming, and provide programmes critical information to adapt and evolve, as needed, over time.
Q: How can PMER be integrated into the existing M&E framework and programming?
A: PMER is an approach to M&E, not a separate or parallel track.
To integrate PMER meaningfully:
Engage stakeholders at the onset of programme design, including in the development of your Spotlight programme results framework, and broader M&E tools and approaches.
Engage stakeholders continuously throughout the programme cycle. Even if stakeholders contributed to the development of your programme document, their continued participation throughout your Spotlight programme is critical. Rights holders and those engaged in or otherwise impacted by Spotlight programming should be consulted in the development of your M&E Plan, the baseline study, and the development of your programme results framework. The establishment of the PMER working group can offer a more formalized way to do this.
Broaden participation beyond the usual actors. PMER means going beyond UN agencies and donors. It requires meaningful engagement of rights holders and community members – those directly involved in or otherwise impacted by Spotlight Initiative programming – as well as national governments, civil society partners, and organizations working to end violence against women and girls and advance gender equality in your context. Participation must be genuine, not symbolic. This means dedicating time and resources to inclusive engagement.
Strengthen supporting structures. As noted, PMER takes planning, time, and resources (but helps ensure Spotlight programming is rights-based, relevant and responsive). Establish a PMER working group to guide implementation and ensure coordination. PMER budgets should include funding for joint monitoring visits to communities and programme sites, enabling all stakeholders to observe progress, exchange insights, and reflect on results together.
Ensure civil society is equipped and included. Your Civil Society Reference Group brings critical knowledge, perspectives, and accountability. Their role in PMER should be clearly defined and adequately resourced. CSRG members should participate in coordination and decision-making processes and receive timely feedback on how their input informs decisions, reinforcing transparency and trust.
Create space within systems and timelines. PMER requires flexibility. Rigid UN procedures and short donor timelines can constrain participatory approaches. Effective PMER builds in time for feedback, reflection, and course correction, allowing engagement to happen at a sustainable pace that makes room for inclusivity and a diverse range of perspectives.
Ground PMER in rights-based principles. PMER is not just a technical approach or process. It is a political one. When grounded in rights-based commitments such as CEDAW, it becomes a tool to surface both progress and challenges, including resistance, backlash, and shifts in social norms that may otherwise go unnoticed.
When done well, PMER fosters stronger local ownership and more sustainable results. It transforms monitoring from a reporting exercise into a shared process of learning and accountability that centers the voices and experiences of those most affected.
Q: How can a Spotlight programme's Civil Society Reference Group members be included in the programme's M&E cycle?
A: We strongly encourage including your programme's Civil Society Reference Group (CSRG) members – especially those interested in monitoring work – as active participants throughout the programme cycle, including in monitoring, reporting and evaluation.
This could, for example, include their participation in monitoring field visits, using qualitative, rights-based methods of collecting data such as Outcome Harvesting or Most Significant Change technique. The CSRG can also be included in the design and review of the evaluations or assessments carried out (through evaluation reference groups). As an example: Under Spotlight Initiative 1.0, in the Spotlight Programme in Malawi, the CSRG was involved throughout the midterm review process.
To facilitate this process, we recommend that you jointly form a PMER working group (discussed above, as well) to advise on the implementation of your PMER Strategy. The PMER Guidance Note has more information on this (on page 3).
Q: In addition to Civil Society Reference Group members, who else can be involved in PMER, and how can we facilitate their participation?
A: Great question! And one that gets at the heart of what Participatory Monitoring, Evaluation, and Reporting (PMER) is all about: shifting power dynamics and being intentional about inclusion. This kind of participation takes time, resources, and thoughtful planning, but it’s essential for meaningful monitoring and learning.
First, we encourage Spotlight programmes to establish a PMER Working Group early on, ideally before programme implementation begins. This group should ideally include representatives from all stakeholder groups engaged in or impacted by Spotlight programming, such as: Members of the CSRG, RUNOs, government and civil society partners (as well and civil society grantees), women’s rights and feminist organizations working to end VAWG in your context (including particularly those representing structurally marginalized communities), donor representatives (as relevant), and others.
The role of this group is to co-design and guide the participatory monitoring, evaluation and reporting processes, ensuring that diverse voices inform monitoring and learning.
How to facilitate participation? Once the working group is formed, there are several key steps to ensure meaningful participation:
Identify participants with support from the National or Regional PMER Working Group, ensuring diversity and inclusion.
Clarify expectations: Ask participants what information they need and how they’d like to contribute. Create space for them to bring their experiences and perspectives.
Discuss resources: Acknowledge the time and cost involved in participatory processes and work together to find solutions for sharing and meeting those needs.
Address tensions: Be open about any contradictions or limitations and co-create strategies to navigate them.
Recognize the politics of participation: PMER is not just technical, it challenges traditional power structures. Embrace openness and transparency as core principles.
Ultimately, PMER isn’t just about collecting data, it’s about centering the people most affected by the work in shaping how we understand progress and impact. For more, consult the PMER Guidance, which provides detailed steps, tools, and examples to help you get started.
Q: What’s the difference between the Spotlight CSRG independent monitoring scorecard and PMER?
A: The Count Me In! Consortium, in collaboration with a number of global, regional and national civil society reference group members, developed an independent monitoring tool to hold Spotlight Initiative accountable. The tool can be used by Spotlight Initiative CSRG members to assess how Spotlight programmes are engaging with civil society (including in governance, partnerships and financing). These scorecards are produced by civil society, for civil society and other stakeholders, providing an independent assessment of how well Spotlight Initiative is upholding its partnership principles and commitments. Civil Society Reference Groups should feel free to develop their own monitoring scorecard using the guidance and resources available here in the Initiative's Virtual Library. CSRGs can also find examples of completed monitoring scorecards here.
By contrast, participatory monitoring, evaluation, and reporting (PMER) is an approach (as discussed above), a way of designing and implementing a Spotlight programme's monitoring, evaluation, and reporting processes so that they are inclusive, consultative, rights-based, and co-owned. PMER involves civil society partners, rights holders, and community members not just as data sources, but as active participants who share control over what is monitored, how information is collected and interpreted, and how findings are used.
In short:
Monitoring Scorecards = independent civil society accountability mechanism – rolled out by CSRGs – to hold Spotlight Initiative accountable to its commitments to meaningful engagement with civil society.
PMER = collaborative approach to monitoring, evaluation and reporting used by Spotlight Initiative programmes.
Both are complementary: scorecards help hold the Initiative accountable, while PMER ensures the Initiative’s own monitoring and learning processes are inclusive, participatory, and grounded in the experiences of those most affected. PMER approaches can be used to collect information for the Scorecard!
Q: What methods and tools can be used to implement PMER?
A: Spotlight programmes can select the methods and tools most appropriate to their context and specific needs. In the PMER Guidance Note, we've shared several suggestions, along with links to additional examples of participatory approaches.
Spotlight programmes have used key informant interviews, focus group discussions, stakeholder analyses, and have formed Communities of Practice. In the Spotlight Programme in Niger (under Spotlight Initiative 1.0), the following two tools emerged as particularly effective at engaging rights holders and fostering their active participation:
Most Significant Change: The Most Significant Change (MSC) technique is a form of participatory monitoring and evaluation, with multiple stakeholders involved both in deciding on the type of change to be captured/recorded and in analysing the data. MSC occurs throughout the programme cycle and provides useful monitoring information to help people manage the programme, and subsequently evaluate it. Broadly speaking, the process entails “the collection of significant change (SC) stories from participants/community members, and the systematic selection of the most important of these by panels of designated stakeholders or staff. The designated staff and stakeholders are initially involved by 'searching' for project impact. Once changes have been captured, various people sit down together, read the stories aloud and have regular and often in-depth discussions about the value of the reported changes.”
Outcome Harvesting: Using this approach, the evaluator or harvester identifies demonstrated, verifiable changes in behavior influenced by an intervention and how a project, programme or initiative plausibly contributed to them. Unlike other evaluation approaches, Outcome Harvesting does not necessarily measure progress towards predetermined outcomes or objectives. Rather, the evaluator collects evidence of what has been achieved, and works backward to determine whether and how the project or intervention contributed to the change.
Finally, the Spotlight Secretariat recommends combining several techniques and tools to suit the objectives of the work, the resources available, and the local context. You can find additional information and resources in Spotlight Initiative's Learning Centre (under the "Get Started with M&E" section and the "Use Participatory M&E Approaches" sub-section).