News at the IEA

May 9, 2018

Turning the lens on ourselves: Evaluating Evaluations

Working in a research-for-development organization, we in the IEA Team have become increasingly aware of the similarities between evaluators and researchers. Evaluation teams are set up to explore programs and projects, collecting evidence on what is working and what is not, teasing out the "why", and drafting conclusions and recommendations on what can be improved.

How about assessing the effectiveness of the evaluation itself? What would be a good performance indicator of how effective evaluations are in producing useful information for management and decision making? Over the past four years, IEA has commissioned 10 evaluations of CGIAR Research Programs (CRPs), the large, multi-year, global research-for-development programs that make up the bulk of the CGIAR portfolio. The research programs submit proposals, which undergo independent scientific review before being considered for approval for the next cycle of funding. This period of designing research programs and developing proposals is a critical time to measure the usefulness and effectiveness of evaluations.

IEA reviewed how evaluations were used during the program design, assessment, and approval process, distinguishing two types of use: validation, whereby a program cites the evaluation in support of its current strengths and directions; and change, whereby a program makes changes as a result of the evaluation's findings or recommendations.

We found significant use of evaluations by program management, with 129 distinct references across the 10 program proposals. Notably, the majority of these references (76) supported changes and adjustments to the programs. Changes made with reference to an evaluation were concentrated in critical areas such as program strategy and priorities (53%) and governance and management (20%), further illustrating the effectiveness and use of the evaluations.

Furthermore, the independent expert assessment of the program proposals also drew on the evaluations in its review process (61 references), providing yet another indicator of their effectiveness and utility.

In reviewing the effectiveness of evaluations, such performance indicators can provide another source of information, alongside other integrated performance assessments. Such tools can further illustrate the usefulness of evaluations and reflect the learning and change that result from an evaluation process.


March 22, 2018

Evaluation of Results Based Management in CGIAR

IEA conducted a System-wide Evaluation of Results-Based Management (RBM) to learn lessons from the experience of introducing and implementing different aspects of RBM in CGIAR. On the basis of international experience, the evaluation team formulated ten good-practice principles for RBM applicable to CGIAR's context and proposed a Theory of Change for RBM in CGIAR.

March 21, 2018

Evaluation of Partnerships in CGIAR

This Evaluation was the first comprehensive assessment of partnerships in CGIAR. It focused particularly on the extent to which the 2009 CGIAR reform has led to stronger strategic partnerships.

The Evaluation highlighted the role partnerships have historically played in CGIAR and found that the reform has had positive effects. Main findings include evidence of more strategic relationships with an increased number of partners, illustrated by more explicit roles and clearly defined responsibilities. For public-private partnerships, the Evaluation found ambiguity in the understanding of the strategy and methods of engagement with the private sector, which has a role both in enhancing science and in delivery.

Recommendations focused on linking partnership strategies with research strategies, optimizing partnership models, addressing resource issues that influence partnerships, and partners' roles in managing research. The Evaluation recommended that the strategic role of multi-stakeholder partnerships be explored and that guidance be prepared for engaging in public-private partnerships. It also recommended that CGIAR, at the System level, clarify how partnerships are expected to be funded and what the implications of current funding trends are for partnerships. The Evaluation further recommended better sharing of experiences with partnerships across CGIAR and closer involvement of national agricultural research systems (NARS) with the requisite capacity and commitment in research management.

The CGIAR System Management Board (SMB) welcomed the Evaluation's findings and recommendations. In its response to the Evaluation, the SMB fully agreed with and supported all six recommendations, and indicated areas where actions to improve partnerships and strengthen engagement with NARS can be taken, such as CGIAR country collaboration activities and better monitoring of progress on partnerships through the new performance-based management system.

Summary Report – Evaluation of Partnerships in CGIAR
Final Report – Evaluation of Partnerships in CGIAR
Final Report: Annexes – Evaluation of Partnerships in CGIAR
System Management Board Commentary – Evaluation of Partnerships in CGIAR

March 21, 2018

Evaluation of Intellectual Assets Principles of CGIAR

The review covered the CGIAR Intellectual Assets (IA) Principles in a comprehensive manner, examining the coverage, adequacy, and appropriateness of the policy. It assessed both the appropriateness and effectiveness of the IA Principles and the efficiency and transparency of their implementation. It also assessed reputational issues that may arise from the manner in which CGIAR manages and governs its intellectual assets.

Final Report – Review of the CGIAR Intellectual Assets Principles

System Management Board Commentary – Review of the CGIAR Intellectual Assets Principles


November 9, 2017

Evaluation of the Independent Science and Partnership Council (ISPC)

The Evaluation of the Independent Science and Partnership Council (ISPC), CGIAR's independent scientific advisory body, was recently completed. The Evaluation assessed the relevance and scope of the ISPC's leadership and advisory functions, as well as its work.

The Evaluation report has been finalized with the receipt of the Management Response from the ISPC.

October 27, 2017

Developing, Using, and Assessing TOC in CGIAR

What have been the lessons learned from developing and using Theories of Change (TOC) in CGIAR’s research for development programs? What are the main features and elements of a good TOC? How have they been assessed, and what input can they provide for assessing a program?

Read more from the IEA technical workshop on the “Development, Use, and Assessment of TOC in research for development programs”.

October 20, 2016

Cross-cutting Evaluations: Partnerships, Gender, Capacity Development

IEA has initiated evaluations of three cross-cutting themes, scheduled to be completed by early 2017. The evaluations will draw on the findings of the completed CRP evaluations and review strategies and progress in these critical areas of work across CGIAR.

April 3, 2016

Gender in CGIAR – gender equity in research and in the workplace

The Evaluation of Gender in CGIAR was completed in 2017 and is the first independent, System-wide evaluation on this topic. The Evaluation was conceived to cover two dimensions, Gender in CGIAR Research and Gender at the Workplace, since both contribute to the common objective of gender equity.

Both Evaluation reports have been finalized with receipt of the Management Response from the CGIAR System Organization.

April 1, 2016

CRP evaluations completed

IEA has completed the evaluations of 10 CGIAR Research Programs. These evaluations provide the first set of evaluative information on the research and organizational performance of CRPs since their formation as a new modality for conducting research for development and collaboration in CGIAR.