Performance & Results Hub
2025 Reporting
This page provides information and resources for 2025 Technical Reporting.
Please contact us with any questions or suggestions at: performanceandresults@cgiar.org.
Reporting is the stage of the Adaptive Management Cycle where results — including W3/bilateral contributions — are compiled, quality-assured, and used to develop Annual Technical Reports and other key reporting products, such as the CGIAR Results Dashboard and the Portfolio Narrative.
This phase ensures results are accurate, consistent, and accessible for learning, accountability, and decision-making.
The templates, timelines, and guidance on this page will help you complete your reporting accurately and on time.
16 February 2026
Program and Accelerator 2025 Annual Technical Report templates are now available.
Program 2025 Annual Technical Report template (with comprehensive guidance notes)
Program 2025 Annual Technical Report template - for use (minimal guidance notes for easier content entry)
Accelerator 2025 Annual Technical Report template (with comprehensive guidance notes)
Accelerator 2025 Annual Technical Report template - for use (minimal guidance notes for easier content entry)
2025 Annual Technical Report design mock-up – indicative only, to illustrate overall style and layout. The templates have been updated since this mock-up was developed.
Timeline for final review and approval of P/A 2025 Annual Technical Reports
Notes
*PPT = Portfolio Performance Team
**Designed reports will be shared back on a rolling basis. P/As that receive their report on 13 April will have until 19 April to provide feedback or confirm approval. P/As that receive their report on 17 April will have until 24 April to provide feedback or confirm approval.
Reporting results
Programs and Accelerators: Detailed guidance on PRMS reporting data fields
W3/Bilateral Projects: Detailed guidance on PRMS reporting data fields
2025 TOC
PRMS Reporting Tool
Access the PRMS Reporting Tool here
MELIA glossary
Quality Assurance
Technical Reporting framework
Technical Reporting engagement plan
Engagement Plan: 2025 Technical Reporting (Full Word document)
Engagement Plan: 2025 Technical Reporting (PowerPoint summary)
2024 Technical Reports
Frequently asked questions
Where can I find the key dates for Technical Reporting for 2025?
An overall Technical Reporting timeline can be found on this page of the P&R Hub. Upcoming key reporting dates and deadlines are also presented on this page. Any updates or changes to the reporting timeline will be communicated via performanceandresults@cgiar.org.
Does the results submission period end in January or in February 2026?
The deadline depends on the type of results being reported:
Pooled results (reported by P/A under W1/W2 funding): must be submitted by end of January 2026. These results undergo a quality assurance (QA) process in February 2026.
W3/bilateral results: since these do not go through QA, they can be reported in PRMS until late February 2026.
Is there a CGIAR Style Guide to follow for preparing the narrative components of the Technical Reporting?
Yes, the CGIAR Style Guide can be used as a reference.
Who is responsible for approving changes to reporting categories and for balancing the benefits of collecting detailed information against the reporting burden on P/As?
Ahead of each reporting cycle, CGIAR organizes focus group discussions, workshops, and individual meetings with donors and stakeholders (e.g., Program leads/co-leads, MELIA experts, SO MEL, IPSR team, and Center representatives). Through these exchanges, feedback is collected on which reporting fields are valuable to keep, remove, or add, ensuring both alignment with CGIAR’s strategy and responsiveness to donor requirements.
This input is then prioritized based on feasibility and strategic relevance, with decisions documented in the annual Learning & Optimization (L&O) report. In practice, adjustments to reporting categories are made through this iterative process of consultation, review, and refinement.
For innovations specifically, the responsibility for defining reporting requirements lies with the IPSR team and the Scaling for Impact (S4I) program. They serve as a coordination and facilitation mechanism, not the source of the demand itself. They lead the process of collating and synthesizing demand from leadership, programs, scaling practitioners within and beyond CGIAR, as well as funders, generating innovation insights from reported results, and conducting innovation feedback sessions. In addition to providing the information needed for CGIAR’s core technical reporting products (e.g., P/A Annual Technical Report, Results Dashboard, Portfolio Narrative), PRMS also functions as a system-wide platform for capturing innovation-related metadata that responds to broader institutional needs. This data supports resource mobilization for innovation-focused work, informs S4I activities, and contributes to other cross-cutting initiatives.
During 2025, several dedicated engagements on innovation reporting were convened, including two IPSR feedback sessions on PRMSv2 (April 14 & 15) with scaling facilitators and a session with S4I leaders (April 23), culminating in a PPU-led PRMS design workshop in Rome on 28-30 April 2025. Following that workshop, the IPSR/S4I team submitted revised proposals to PPU, which then finalized the current reporting arrangements through the established consultation and review (Learning & Optimization) process.
What role will Science Programs and Accelerators have in the decision making process?
A Performance & Results Management (P&RM) Steering Group is currently under development to provide strategic oversight of key, interrelated performance and results management topics. The group will oversee the implementation and optimization of the Performance and Results Management Framework, the Technical Reporting Arrangement, the PRMS system, and the Gates Foundation MELIAF Grant.
It will be chaired by the Chief Scientist and is expected to include representatives (TBC) from Programs/Accelerators, PPU, PCU, Center MELIA focal points, Communications & Advocacy, Business Development, IAES, and the Gates Foundation (for MELIAF-specific topics). The group will meet twice a year, with the first meeting planned for Q4/2025.
Who can access the PRMS Reporting Tool and where can I find the rules and user roles?
You can find the full details on user access, roles, and rules for the PRMS Reporting Tool here: User Roles and Rules for the PRMS Reporting Tool.
Can I retrieve a result deleted by mistake in the PRMS Reporting Tool?
In this instance, please write to prmstechsupport@cgiar.org with a request to retrieve it, providing information on the result for ease of identification (number if known, title, type and which Program/Accelerator entered it).
In the PRMS Reporting Tool, can I download a PDF for a result that is still being edited?
Yes. The PDF function is always available. The system will generate the PDF with the information that you have entered when you press the button, even if the result entry is not fully complete at the time.
What should I do if I encounter a bug in the PRMS Reporting Tool?
Please ensure you create technical tickets for the bugs you encounter by sending an email to prmstechsupport@cgiar.org. This will help the PRMS team to track requests and prioritize them.
When will our roles in PRMS change? When I log in, it still shows my role under the Initiative.
You will be able to see your new role assigned to the specific Program/Accelerator once the 2025 reporting phase opens in PRMS.
Is there guidance on what should be reported as a result?
Yes, this document provides some principles for ensuring a focus on quality, rather than quantity, in terms of results reporting: Guidance on ensuring quality over quantity.
Is there guidance available on how to distinguish between outputs and outcomes?
Yes, information on what to report as an output or outcome can be found here: Guidance on reporting outputs or outcomes.
Is there guidance available on reporting "other outputs" and "other outcomes"?
Yes, information is available here: Guidance on the “other output” and “other outcome” result categories.
How do I score results in relation to CGIAR's five Impact Areas?
Guidance on this is available here: Guidance on result-level Impact Area scoring.
What is the minimum number of pieces of evidence I can report?
The minimum is one; the system accepts between one and six pieces of evidence per result. Submit evidence in order of relevance, starting with the piece most directly supportive of the result, and use the text box to indicate where the relevant information can be found within each source (e.g., page number, slide number, or table number).
How and where do I submit evidence for results?
Evidence for results can be submitted in the PRMS Reporting Tool. A maximum of six pieces of evidence can be submitted per result. However, this does not apply to:
Knowledge products: these are stored on CGSpace and do not require additional evidence uploads in the PRMS.
Capacity sharing for development results; these are exempt from multiple evidence uploads due to the associated time/resource burden and potential unresolved General Data Protection Regulation (GDPR) issues. By submitting a capacity sharing for development result, it is understood that supporting evidence is available and can be provided upon request if a sub-sample is needed.
To avoid issues with evidence access during quality assessment, all evidence that is not publicly accessible must be uploaded to the PRMS.
Additional information:
Evidence links and file uploads are both possible in the PRMS.
All links provided should be publicly accessible. All CGIAR publications should be shared using a CGSpace link.
Links to SharePoint, OneDrive, Google Drive, Dropbox, and other file storage platforms are not allowed. If you do not have a CGSpace or other public link available, use the “Upload file” option to upload your evidence to the PRMS repository.
For confidential evidence, select “Upload file” and then “No” to indicate that it should not be public.
If you add an evidence link, or indicate that the file being uploaded to the PRMS is public:
You confirm that the file is publicly accessible.
You confirm that all intellectual property rights related to the file have been observed. This includes any rights relevant to the document owner’s Center affiliation and any specific rights tied to content within the document, such as images.
You agree to the file link being displayed on the CGIAR Results Dashboard.
If you indicate that the file being uploaded to the PRMS is NOT public:
You confirm that the file should not be publicly accessible.
The file will not be accessible through the CGIAR Results Dashboard.
The file will be stored in the PRMS and will only be accessible by CGIAR staff (e.g. quality assurance assessors) with the repository link.
Documents uploaded to the PRMS will be view-only and cannot be edited.
Can I select multiple locations for a single result? For example, country and sub-national?
It is possible to select sub-national as an option, and then make multiple inputs. Sub-national inputs are available any time a result is mapped to a country (e.g. a regional result where a country is also specified).
In addition:
When country is selected, multiple countries can be selected, unless the selection makes up a specific region, set of regions, or global location, in which case “region” or “global” should be selected.
When regional is selected, multiple regions can be selected, but if the selections include every region, “global” should be selected.
Virtual is presented as an option only for capacity sharing for development results. It should be selected only if the output relates completely to virtual training. For blended virtual and in-person training, select the geographic location where most of the in-person training took place.
For knowledge products, use the geographic location, pulled from CGSpace, to indicate where the research was conducted or where the subject of the paper is focused.
For innovation development, choose the location where the innovation has been developed, not where there is potential for development.
What is CGSpace?
CGSpace is a repository of agricultural research outputs and results produced by several CGIAR Centers, Initiatives, Platforms, Science Programs and Accelerators and the CGIAR System Office. It indexes reports, articles, press releases, presentations, videos, policy briefs, etc.
Who should add knowledge products to CGSpace? The lead Center or the Program/Accelerator?
Currently, you should rely on Center staff to upload knowledge products to CGSpace. Researchers should use their Center’s current knowledge management system to inform their Center library, communications, knowledge management, data, or curation managers about knowledge products to be added to CGSpace.
Where can I find a list of Center curators for CGSpace?
The full list can be found here: CGSpace curators by Center.
How can metadata information in CGSpace be updated?
You should ask your Center library team to ensure that the CGSpace record is accurate. The PRMS Reporting Tool will then refresh the data at the end of the reporting cycle, re-syncing all CGSpace links. There is therefore no need for you to re-sync any links in the PRMS Reporting Tool to update information.
Can CGSpace records be updated during QA periods?
CGSpace is always open for updates, which can be made at any time. During the QA process, assessor comments will reflect the data available in CGSpace at the time of assessment. If the relevant CGSpace record has been updated since the assessor's review, Programs/Accelerators can simply accept the assessor's comments; no further response is needed.
What is MQAP for Knowledge Products?
MQAP (Monitoring-Quality Assurance Processor) is a tool designed to support the CGIAR Quality Assurance process for peer-reviewed publications and knowledge products. It extracts and validates publication metadata using APIs from Web of Science, Scopus, Unpaywall, and Altmetric. MQAP ensures that publications with a Digital Object Identifier (DOI) are validated against these databases, supporting results reporting (including the CGIAR Results Dashboard) and quality assessment of peer-reviewed publications. The tool provides explanations and troubleshooting for different responses, supporting the quality assessment process. MQAP tool and documentation: MQAP General Information and Guide.
Is it allowed to tag legacy publications (originating from Initiative outputs) only to the Initiatives acknowledged in the publication, or can they also be tagged to the successor Science Program(s) and/or Accelerator(s), even if these are not explicitly mentioned in the acknowledgements?
Yes, during this transition period between research portfolios, dual tagging is allowed, both to the original Initiative (as noted in the acknowledgements) and to the corresponding new Science Program(s) and/or Accelerator(s), even if they are not explicitly mentioned in the publication. This applies when requested by the researcher and aims to ensure continuity and alignment with the evolving research structure.
Who reviews and approves knowledge products that are not peer-reviewed journal articles, such as blog articles and working papers?
Knowledge products other than peer-reviewed journal articles should be reviewed using processes developed by Centers or Programs/Accelerators. The quality assurance team assumes that all knowledge products have gone through a standard review process before being reported.
Each Center knowledge management team can confirm Center guidelines for knowledge products. Please refer to the CGIAR Open and FAIR Data Assets Policy signed by all Centers.
Can I report preprints of articles in the PRMS?
Detailed guidance can be found here: Guidance on preprints.
How can I make a CGSpace entry limited access? And what types of knowledge products would need restricted access in CGSpace?
A curator can limit access to a knowledge product when submitting the material to CGSpace. If the access status for an entry needs to be changed after submission, the curator will need to contact a CGSpace administrator (Abenet Yabowork or Alan Orth) to make that change. It is possible to limit a document to CGIAR users (via Active Directory login) as well as to limit access generally until a certain date.
Published restricted articles can be entered into CGSpace. This is different from confidential evidence, where restrictions are imposed by a scientist on SharePoint or any other IT-recommended storage system at Center level. For confidential evidence, quality assurance assessors will require access to the confidential link.
Who will be contacted for the QA of knowledge products?
If you tag a knowledge product with several Programs/Accelerators, that product will be represented equally among those tagged (e.g., in exports or the Results Dashboard). However, the first Program/Accelerator reporting a knowledge product will be responsible for addressing any comments from a quality assessor (e.g., a publication that was erroneously tagged as ISI by the library team in CGSpace can be questioned and the record should be updated in CGSpace during the assessment process).
Should I report knowledge products in multiple languages separately?
Knowledge products in multiple languages should not be reported separately, unless necessary to evidence the ToC (for example if pathways are differentiated for actors, or geography/geographic scope, requiring the output in different languages).
How is the FAIR score for knowledge products calculated?
FAIR (findability, accessibility, interoperability, and reusability) scores were introduced to align reporting with the CGIAR Open and FAIR Data Assets Policy. These scores are derived from existing CGSpace metadata to minimize data entry efforts, with equal weight assigned to each criterion.
If you wish to enhance the FAIR score for a knowledge product, liaise with your Center’s knowledge management team to implement improvements.
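For illustration only, the equal-weighting principle can be sketched as follows, assuming each of the four criteria has already been scored between 0 and 1 from CGSpace metadata; the input values and the function below are hypothetical and do not reproduce the actual PRMS calculation.

```python
def fair_score(findability: float, accessibility: float,
               interoperability: float, reusability: float) -> float:
    """Equal-weighted average of the four FAIR criteria (each assumed 0-1)."""
    criteria = [findability, accessibility, interoperability, reusability]
    return sum(criteria) / len(criteria)

# Hypothetical example: a record scoring 1.0, 1.0, 0.5 and 0.75 on the four
# criteria would receive an overall FAIR score of 0.8125.
print(fair_score(1.0, 1.0, 0.5, 0.75))
```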
Can PowerPoint presentations (PPTs) be reported as knowledge products (KPs)?
PPT presentations can be reported as KPs if they qualify as intellectual assets generated through research and development activities, and if they contribute to behavioral change among specific actors. It is important that these presentations are an integral part of the Science Program/Accelerator Theory of Change (ToC). The quality of such presentations must be reviewed by the Center curators, following the QA processes established by each Center. Additionally, presentations can serve as evidence for other results.
What type of knowledge products should I consider for reporting?
For reporting, users should only consider knowledge products that are integral to the Theory of Change (ToC). Knowledge products within a ToC are meant for use by Program/Accelerator/Project actors (e.g., a policy brief produced as an Initiative’s output to support a policymaker’s action). To be eligible for reporting, a knowledge product should be a finalized product and it should be stored in CGSpace, following a typology set by the CGSpace community, as outlined in the CGCore and international standards.
Are knowledge products published in early 2026 (January–February) eligible for reporting under the 2025 reporting cycle?
Only knowledge products with a 2025 publication date are eligible.
For journal articles, the PRMS Reporting Tool will verify the online publication date recorded in CGSpace (“Date Online”). If no online publication date is available, the issued date (“Date Issued”) will be used. Articles published online in 2025 but formally issued in 2026 will still be accepted for the 2025 reporting cycle.
For all other knowledge products, the issued date is mandatory in CGSpace. PRMS will validate this date, and only those with an issued date in 2025 will be accepted.
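As a sketch of the date rule described above (illustrative only; the field and function names below are hypothetical and not the actual PRMS or CGSpace interfaces):

```python
from typing import Optional

REPORTING_YEAR = 2025

def is_eligible(kp_type: str, date_online: Optional[str], date_issued: Optional[str]) -> bool:
    """Sketch of the 2025 eligibility rule for knowledge products.

    - Journal articles: use the CGSpace online publication date if present,
      otherwise fall back to the issued date.
    - All other knowledge products: the issued date is mandatory and must be in 2025.
    Dates are assumed to be ISO strings such as "2025-11-03".
    """
    def year(date: Optional[str]) -> Optional[int]:
        return int(date[:4]) if date else None

    if kp_type == "Journal Article":
        effective_year = year(date_online) or year(date_issued)
        return effective_year == REPORTING_YEAR
    return year(date_issued) == REPORTING_YEAR

# Example: an article published online in 2025 but formally issued in 2026
# is still accepted for the 2025 reporting cycle.
print(is_eligible("Journal Article", "2025-12-10", "2026-02-01"))  # True
```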
Do I report trainees only when they have finished their course?
Yes, both long-term and short-term training programs must be completed before reporting (to avoid reporting the same trainee multiple times across years).
How do I report the gender ratios if I am unable to determine this for the capacity sharing result I am reporting?
There is an option to enter numbers for “unknown” when gender disaggregation numbers are unavailable. There is also an option to enter numbers for non-binary trainees.
For PhD reporting, can we report a PhD completion from an Initiative if that Initiative was mapped to the P/A?
Yes, PhD training and completion can be reported, provided the PhD is reaching its closure year.
In CapDev, gender is disaggregated as Female/Male, while in other results levels it is shown as Women/Men. Is this distinction intentional?
No, the distinction was not intentional; it was an oversight. We will adjust and ensure consistency by using “men/women.”
How do I determine if I should report an innovation as an output (innovation development) or an outcome (innovation use)?
Innovations can be reported at both the output and outcome level. The development of an innovation (at various stages of design, testing and validation) is an output. The scaling or use of an innovation is an outcome. You may first report an innovation at the output level. If the innovation advances and starts to be used, it should then be reported as an outcome.
Do I need to update innovations that were submitted during previous reporting periods?
Yes, all innovations that were previously reported need to be updated/validated. Upon confirming that the innovation development is “active”, the submitter is asked to validate or update data already in the PRMS Reporting Tool and provide additional information on missing or new data fields. When innovation development is “inactive”, the submitter is asked to indicate the reason for the inactive status.
When and how do I report the readiness of an innovation?
Detailed guidelines for selecting the readiness level are available (updated October 2024), and CGIAR has developed an innovation readiness calculator to determine the level. The innovation team proposes the readiness score and provides evidence to support it. This is reviewed by the CGIAR quality assurance team. Note that at the output level, there is only a generic innovation readiness score for the core innovation: if it is level 7 in Kenya, level 3 in Peru, and level 5 in India, only the highest score (7) is retained.
What is an innovation profile and who develops these?
Innovation profiles are summaries of innovations reported in the PRMS and can be downloaded as an automatically generated PDF. Teams may also choose to create more polished versions of these profiles with support from their Center’s graphic designers. These enhanced versions can feature high-quality images, engaging visuals, and customized formatting. To support this process, the IPSR team can provide a sample of a polished profile as a reference.
Do we have templates (editable) for the innovation profile briefs?
The update of templates for innovation development results is ongoing, and the revised versions will be available on the CGIAR P&R Hub ahead of the October 2025 innovation-focused session.
Should I report an innovation that is at an early stage, when there is a chance that it may not develop further?
To set benchmarks and to demonstrate and track progress over time, it is important to track work across the portfolio at early as well as later stages of innovation development and use. If your Program/Accelerator/Project has invested time and financial resources into an idea for an innovation, it should be reported in PRMS.
Can innovations reported in 2022 or 2023 (but not updated in 2024) be packaged and reported at innovation use level using the IPSR approach?
Yes, you may include inactive innovations in an innovation package report in PRMS.
When should I mark an innovation as inactive or discontinued (versus no evidence of progress for the reporting year)?
“Inactive/discontinued” should be selected when no investment was made in advancing the innovation in the reporting year.
How do I report a previously submitted innovation that has been continued with additional investment, but with no significant changes in data?
This should be categorized as an "active/continued" innovation. Data fields can be reviewed and updated where necessary, but it is also acceptable to resubmit the 2024 data as is. Submitters may be prompted to provide data on new innovation data fields.
Are “inactive/discontinued” innovations considered submitted results?
No, they are not considered submitted results. Once an Innovation Development result is tagged as inactive, the submit button is disabled.
What will happen with results that are labelled “inactive/discontinued” in the PRMS Reporting Tool?
Inactive/discontinued innovations will still be visible in the PRMS Reporting Tool. All innovations (active or inactive) will be automatically replicated every year and be available to be updated once a new reporting phase is opened.
Do “inactive/ discontinued” innovations go through the quality assessment (QA) process?
Inactive innovations will not go through the QA process.
Can innovations marked "inactive/discontinued" be reopened, edited and submitted as continued/active?
Yes, inactive/discontinued innovations can be reopened, edited and submitted as continued/active.
How should I distinguish between the testing and validation readiness levels for innovations, and what kinds of evidence are required to justify each?
Levels 4, 6, and 8 represent testing stages, where the innovation is actively evaluated for its ability to achieve specific impacts under varying conditions:
Level 4 involves testing in a fully controlled environment,
Level 6 in semi-controlled conditions, and
Level 8 in uncontrolled, real-world settings.
For these testing levels, the appropriate evidence should include data and documentation from the ongoing trials or studies that will demonstrate the innovation’s performance under the specified conditions:
Level 4 requires controlled environment testing evidence, such as lab reports or experimental data.
Level 6 needs evidence from semi-controlled conditions, like pilot study results where not all variables are regulated.
Level 8 calls for field trial data or initial user feedback to demonstrate performance in real-world settings.
In contrast, levels 5, 7, and 9 are validation stages, where evidence from prior testing is used to confirm that the innovation can achieve the desired impact under specific conditions:
Level 5 confirms readiness based on results from fully controlled tests,
Level 7 validates readiness in semi-controlled environments, and
Level 9 establishes readiness in uncontrolled, real-world contexts, with limited or no involvement of CGIAR.
For these validation levels, evidence should provide conclusive reports that the innovation has successfully met impact criteria based on previous testing:
Level 5 requires a validation report that confirms performance in a controlled environment.
Level 7 needs documentation, like a summary of pilot results, showing validation in semi-controlled settings.
Level 9 requires evidence of validation in real-world conditions with limited or no involvement of CGIAR, such as field data.
In summary, testing levels require evidence of active evaluation under specific conditions, while validation levels require confirmation that testing results prove the innovation’s readiness for impact.
How do I determine if I should report an innovation as an output (innovation development) or an outcome (innovation use)?
Innovations can be reported at both the output and outcome levels. The development of an innovation (at various stages) is an output. The scaling, uptake, and use of an innovation can be reported as an outcome. In PRMS, the results can be linked.
What are the two ways to report innovation use results (outcomes)?
You can report innovation use results (outcomes) via a non-IPSR pathway or via the IPSR pathway.
Non-IPSR pathway/reporting single innovation use: Any innovation use result can be reported using the non-IPSR pathway. In 2025, this pathway is about 90% the same as in 2024 (with a small change in specifying users and disaggregating youth, if applicable). The non-IPSR pathway records only (i) innovation use type, (ii) innovation use quantity, and (iii) evidence to support innovation use reporting.
IPSR pathway/reporting innovation bundle/package use: The IPSR pathway is the advanced way to report innovation use as part of an innovation bundle/package. Only those innovations that have been reported and quality assured (at the output level) in the PRMS can be reported as part of an innovation package (outcome level). A data template is available on the P&R Hub here. The IPSR pathway supports innovation teams and partners to co-design innovation bundles and packages, assesses these packages to identify key bottlenecks/opportunities, and creates a starting point for developing scaling strategies.
How do I package an innovation using IPSR that has not yet been reported and quality assessed in the PRMS?
Report this as innovation development at output level before the deadline for submission for the QA process. Once the QA process is finalized, this will enable innovation package reporting. To collect the required data, an IPSR workshop can be organized.
Which results can be reported for innovation use/IPSR pathway?
As part of the IPSR pathway, only innovations that have been reported and QA-ed at output level as innovation development can be selected for reporting of innovation use at outcome level. As part of the non-IPSR pathway, any innovation use can be reported.
Why should I choose the IPSR pathway?
Using the IPSR pathway allows you to demonstrate innovation/scaling synergies and collaboration with other CGIAR Initiatives and partners around innovation packages. It is recommended to report innovation use through the IPSR pathway as this supports the development of:
Scaling ambition (aligned with End of Initiative outcomes): an agreed-upon statement that includes information on where, with whom, for whom, and by when innovation scaling will contribute to outcomes and impacts.
Innovation Package: enabling conditions that will support achieving the scaling ambition in a specific context.
Scaling Readiness assessment: insight into how ready the innovation package is for scaling, what are the key bottlenecks and opportunities.
An initial, light Scaling Strategy (a more mature, structured Scaling Strategy is forthcoming), developed in multi-stakeholder discussions and documenting the best ways forward for scaling an innovation.
The IPSR pathway collects all the data that is collected through the non-IPSR pathway, and also generates valuable learning for CGIAR and partners to develop innovation and scaling strategies. Furthermore, the IPSR pathway can be used to show progress towards impact at scale against a scaling ambition, rather than just providing current innovation use numbers. Using the IPSR pathway also gives you access to Scaling Challenges/Funds.
Is there guidance on adding complementary innovations/enablers/solutions as part of innovations use – IPSR pathway reporting?
Yes, please see this guidance note, which provides details on adding complementary innovations/enablers/solutions.
How do I determine my innovation’s use level?
Use the CGIAR Scaling Readiness framework (levels 0–9). You can either review the innovation use framework (link) and select the best fit, or use the CGIAR Scaling Readiness Calculator (link). The calculator will guide you through three sets of questions to generate the use level. Make sure you have evidence to support the level you report.
How do I calculate/estimate and project innovation use numbers for 2030?
This guidance note outlines a practical process for estimating or projecting innovation use figures by 2030. It helps teams develop evidence-based, realistic, and traceable projections for both individual and organizational users.
Will there be guidance on innovation bundles: how they are created, managed and displayed in the dashboard?
Yes, an innovation-focused session will be held in late October 2025 to provide further guidance on reporting innovations across all levels. The session will explain how innovation bundles will work in practice, including the evidence required, how to establish linkages, and how this will be reflected in the Results Dashboard.
The session will also showcase cross-CGIAR projects that already use PRMS innovation data, demonstrating how reporting supports not only technical outputs but also resource mobilization, portfolio analysis, and strategic donor engagement. In addition, the IPSR team is assessing the feasibility of an analysis of innovations submitted by mapped Initiatives to Programs, which could serve as a baseline for each Program and strengthen continuity between Initiative-level and Program-level reporting.
Doesn’t reporting only the highest score remove the value of tracking innovation maturity at a localized level, since lower scores would be hidden?
The key point to note is that at the innovation development (output) level, the readiness score reflects the highest maturity reached anywhere the innovation is being developed. For example, if an innovation is at level 4 in Zambia, level 5 in Colombia, and level 8 in Kenya, the innovation development record will retain the score of 8, because it has already been proven to reach that level of maturity in at least one location.
This does mean that country-specific variation is not visible at the development stage. However, that detail is not the purpose of innovation development reporting. Localized readiness and use are tracked at the innovation use (outcome) level, which is always context-specific and records the readiness and uptake of the innovation under enabling conditions in a given geography.
In practice, this approach simplifies reporting at the development stage, saving time and effort, while still ensuring that the relevant, context-specific information is captured where it matters most: at the innovation use (outcome) level.
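As a purely illustrative sketch of the rule described above (not an actual PRMS function), the development-level readiness score is simply the highest of the location-specific readiness levels:

```python
def development_readiness_score(scores_by_location: dict[str, int]) -> int:
    """At the innovation development (output) level, only the highest
    readiness level reached in any location is retained."""
    return max(scores_by_location.values())

# Example from the answer above: level 4 in Zambia, 5 in Colombia, 8 in Kenya.
print(development_readiness_score({"Zambia": 4, "Colombia": 5, "Kenya": 8}))  # prints 8
```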
Will we be asked to update all previously reported innovations? How will this work - will those innovations be mapped to the new Programs/Accelerators?
All innovations reported in 2024 will need to be updated. Updates will follow the mapping of Initiatives to Programs/Accelerators (P/As). For example, if Initiative X submitted an innovation in 2024 and that Initiative has since been mapped to Program Y, then Program Y will be responsible for updating that innovation.
If an Initiative has been mapped to multiple Programs, all mapped Programs will have visibility, but the designated lead Program will be responsible for ensuring the update. Notifications will be sent through PRMS to the Leads, Co-Leads, and Coordinators of each relevant Program.
Any innovations not updated by the deadline (to be communicated during the reporting cycle) will be marked as inactive after the reporting period.
Can we report on results from bilateral projects that are not mapped to a Program/Accelerator?
In 2025, reporting of W3/bilateral results should focus on projects mapped to and agreed by the Science Program/Accelerator. Please refer to this dashboard for the complete list of W3/bilateral projects mapped to and agreed by Programs and Accelerators in 2025.
If a bilateral project is mapped to and was agreed by more than one P/A, how should results be reported to avoid duplication?
Results should be reported under the P/A with which the project aligns most closely, that is, where it contributes to the objectives, impact pathway, or thematic scope of the P/A’s TOC and research agenda. Any additional P/As to which the W3/bilateral project is mapped can be recorded in PRMS as internal collaborators on the result.
What does “Quality Assurance” (QA) mean in the context of CGIAR’s reporting?
QA is the process used to verify the accuracy, credibility, and consistency of results reported through the PRMS. QA contributes to learning and continuous improvement and strengthens trust in the data used for decision-making and external reporting (e.g., Results Dashboard, 2025 Technical Reporting products, CGIAR Annual Report).
What data is covered/not covered by Quality Assurance (QA) in CGIAR’s Technical Reporting?
All pooled results submitted through the PRMS by Programs and Accelerators undergo QA. This includes both output-level and outcome-level data. Output categories include Knowledge Products, Innovation Development, Capacity Sharing, and “Other Outputs.” For knowledge products, those that are not peer-reviewed scientific papers or MELIA studies are quality-assured directly by CGSpace curators. Outcome-level results such as Policy Change and Innovation Use (including IPSR pathways), along with “Other Outcomes,” also go through QA.
To strengthen accuracy where it matters most, a subset of high-priority data fields undergoes two rounds of review: first by lead assessors, then by a third-party reviewer for any unresolved disagreements.
Some fields are not QA’ed because the benefit would not justify the effort or is not feasible given the nature of the information. For example, collaborator details are considered low-risk and difficult to externally validate. Limiting QA for such fields helps maintain the right balance between rigor, feasibility, and value for decision-making.
High-priority fields undergoing the second round of review:
Result level
Result type
Evidence
KP type (if MELIA)
Innovation Readiness level
Policy Change stage
Core and complementary Innovation Use levels (evidence-based)
The full list of data fields included in QA will be shared through the 2025 QA Assessors’ Guidance.
What is the timeline for the QA process for 2025 Technical Reporting, and how long do I have to respond to comments?
The QA process runs in two rounds.
QA timeline (2025 reporting cycle)
30 January 2026: Deadline for data submission (except for Knowledge Products)
2-9 February 2026: QA Round 1 conducted
10-19 February 2026: Reporting teams address QA comments in PRMS (8 working days)
20 February 2026: Final deadline for submitting Knowledge Products
20-22 February 2026: QA Round 2 for priority/core fields
23-25 February 2026: Third-party broker review (only if disagreements remain)
26 February 2026: QA process completed
How is QA different for pooled funding compared to W3 and bilateral results?
All pooled results reported in the PRMS go through CGIAR’s central QA process. For W3 and bilateral results, QA is mainly carried out at the Center level and focuses on minimum data standards and supporting evidence (see Annex 3 of the TRA). Programs and Accelerators then review these results for alignment with their Theory of Change, geographic focus, and to avoid duplication before final submission.
In 2025 Technical Reports, W3 and bilateral results will sit alongside pooled results, but with clear visual differentiation. This helps readers see which results have undergone central QA and which have been verified through Center-led processes. As 2025 is a transition year, work is underway to gradually harmonize QA across all funding streams.
Q&A coming soon.