Performance & Results Hub
Reflect
This page provides information and resources for Reflect.
Please contact us with any questions or suggestions at: planningandadaptivemanagement@cgiar.org
Reflect is a structured learning and analysis phase within the CGIAR Adaptive Management Cycle, formally emphasized in Quarter 1 (February–March). It occurs after the previous year’s results have been reported and quality-assured, and before plans for the current year are adjusted. While Quarter 1 provides a coordinated moment for consolidation, reflection is continuous throughout implementation; the Quarter 1 phase synthesizes these insights to guide targeted adaptations.
The purpose of the Reflect process therefore includes:
Evidence-Based Performance Assessment: Drawing on quantitative and qualitative evidence (MELIA findings, stakeholder feedback, financial information, and contextual analysis), Reflect enables Programs and Accelerators to assess what is working, what is not, and why. It supports examination of performance, risks, assumptions, and emerging opportunities, helping teams surface lessons and test the continued relevance of their strategies.
Learning-Driven Strategic Adaptation: It provides a formal mechanism for reviewing performance and strengthening contribution to impact through learning-driven adaptation. It enables Program and Accelerator Directors to recommend reprioritization of ToCs and updates to PORBs, clarify the contribution of W3 and bilateral investments, and reassess risk management plans for the year ahead.
Change Management: The process also creates space to document and justify significant changes—through appropriate approval pathways such as Change Management Logs—ensuring adjustments are evidence-based and strategically aligned.
Transparency and Accountability: Reflect provides Programs and Accelerators with the opportunity to demonstrate to funders, partners, and stakeholders that learning is actively informing implementation. This is formally documented in Section Four (4) of the Programs and Accelerators Annual Technical Reports, which captures how learning was generated, the insights that emerged, and how plans were subsequently adapted.
Adaptive Management Reflect Guidance for CGIAR’s 2025–2030 Science and Innovation Portfolio | PDF (Version 17 February 2026)
Adaptive Management Guidance for the Technical Report
Purpose: Show how the Program learned and adapted during the year.
Format: 1,000 words total for the Adaptive Management enablers you choose to mention:
Evidence and Learning. You must include this enabler and explicitly reference the Evaluability Assessment recommendations you plan to address and the expected outcome.
W1/2 Funding and Financial Planning
W3/Bilaterals Data
Risk Management
Demand and Scaling Readiness
Guidance:
For each enabler, provide concise information on the approach, planned actions, and expected results.
Not all enablers need to be mentioned; only include the adaptive management enablers that are applicable.
If adaptive decisions relate to multiple enablers, select the most relevant and avoid repeating content across sub-sections.
Where relevant, this section may also draw on Program-specific data and metrics that informed learning and adaptation. Ensure that appropriate sources are cited.
For Risk Management, please use the Export PDF function from the Risk Management Module and paste it into your Technical Report.
If you need support in drafting this section, contact: planningandadaptivemanagement@cgiar.org
Evidence and Learning Enabler
Reflect on how evidence shaped strategic or operational adaptive management recommendations in 2025.
Guiding Reflect Questions
To what extent were planned outputs, outcomes, and impact achieved as set out in the ToC and PORBs?
Do the ToCs need to be updated with revised indicators or result types?
Where do MELIA findings and quality-assured results show strong performance, underperformance, or outliers—and why?
Is the MELIA plan appropriate, or does it need to be revised with more realistic costings and with studies underpinning key outcomes and impacts?
What IAES evaluability findings should inform future design or measurement?
What key lessons emerge from audits, ISDC reviews, and independent evaluations?
Based on evidence, what should be continued, adjusted, or stopped?
Are the KPIs sufficiently clear, concise, and accurate? Were the KPI review recommendations implemented?
Reflect on the logic of how the 2030 Outcome targets were set. Do the outcome targets need to be revised?
Are the plans to provide evidence for achievement of outcome and impact goals sufficient?
Resources
MELIA findings
Quality-assured and reported results; stakeholder feedback and demand signaling; audit reports
CGIAR’s Independent Advisory and Evaluation Service (IAES) and publications from ISDC, SPIA and the Evaluation function.
W1/2 Funding and Financial Planning Enabler
Reflect on what adaptations to financial planning and resourcing were recommended in response to evidence or funding conditions.
Guiding Reflect Questions
How well did budget allocations and actual resource use support delivery of program objectives?
Where did flexible funding mechanisms add value or face constraints?
Do expenditure and burn-rate data indicate under- or over-utilization of resources, and why? (A simple illustrative burn-rate calculation is sketched after these questions.)
What lessons from prior planning and replans should inform current-year adjustments?
What budget changes were made, and were approval processes clear and transparent?
What resourcing adjustments are needed for the replan phase?
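As a simple illustration of the burn-rate check referred to in the questions above (a generic sketch, not an official CGIAR formula), resource utilization over a period can be reviewed as: burn rate = actual expenditure to date ÷ approved budget for the same period. A ratio well below 1 may point to under-utilization (for example, delayed activities), while a ratio well above 1 may point to over-utilization; either pattern is worth explaining in the reflection and carrying into the replan.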
Resources
Center financial allocations
Analysis of flexible funding mechanisms; performance review evidence
Documentation of budget changes and approvals
Lessons from previous planning and replan cycles
W3/Bilateral Data Enabler
Reflect on recommendations related to non-pooled funding and how it is integrated into planning and delivery.
Guiding Reflect Questions
To what extent are W3 and bilateral projects aligned with the current Program/Accelerator ToC and overall portfolio priorities?
How have the mapped W3/bilateral projects contributed to planned outputs, outcomes, and impact pathways? Where are contributions strong, weak, or unclear?
Where do pooled and non-pooled investments reinforce each other, and where are there gaps, overlaps, or fragmentation?
Does the distribution of pooled and non-pooled funding reflect strategic priorities and expected results? Are there mismatches between investment and contribution?
Is the current mapping of W3/bilateral projects to specific ToC results still valid, or does it require adjustment?
What coordination lessons have emerged, and what concrete adjustments are needed in the next planning or replan cycle?
Resources
Planned results and budgets for W3 and bilateral projects
Budget information
Documentation of coordination and collaboration practices
Lessons learned from previous cycles
Risk Management Enabler
Reflect on how risk identification influenced strategic or operational changes.
Guiding Reflect Questions
Which risks materialized during implementation, and how effective were mitigation actions?
Which risks were well managed, and which require different approaches?
What new or emerging risks were identified during the year?
How did risk events affect delivery of results, timelines, or budgets?
Based on these reflections, what updates should be made to the risk registers in the PRMS Risk Management Module to strengthen future risk management?
Resources
Submitted risk registers
Documented risk mitigation actions
Information on risk materialization and outcomes
Lessons learned related to risk management
Note: we are working on updating the export function to provide you with a summary of your risks and related mitigating actions. It should be available by mid-March 2026.
Demand and Scaling Readiness Enabler
Reflect on how evidence and learning translate into decisions about responsible upscaling, adaptation, replication, or downscaling of innovations.
Guiding Reflect Questions
Which innovations progressed along scaling, adaptation, or replication pathways during the previous cycle, and what evidence demonstrates their effectiveness and relevance across contexts?
In which settings did scaling perform as expected, and where did contextual factors such as institutional capacity, gender and equity dynamics, environmental conditions, or political economy enable or constrain results?
What assumptions underpinning scaling strategies were validated or challenged during implementation, and what unintended consequences, positive or negative, were observed for people, systems, or environments?
What trade-offs emerged between scale, speed, depth of impact, inclusiveness, and sustainability, and how were these managed in practice?
Which innovations required adaptation, phased scaling, or responsible downscaling, and what lessons can be drawn from these decisions based on evidence and learning?
How did stakeholder and end-user feedback inform understanding of adoption, ownership, and system-level effects, and how should these lessons inform revised scaling pathways, resource allocation, and partnership strategies in the Replan phase?
Resources
S4I Scaling Task Team
Demand Intelligence Dashboard