Turning the lens on ourselves: Evaluating Evaluations

Working in a research-for-development organization, we in the IEA Team have become increasingly aware of the similarities between evaluators and researchers. Evaluation teams are set up to explore programs and projects, collecting evidence on what is working and what is not, teasing out the “why”, and drafting conclusions and recommendations on what can be improved.

How about assessing the effectiveness of the evaluation itself? What would be a good performance indicator for the effectiveness of evaluations in producing useful information for management and decision-making? Over the past four years, IEA has commissioned 10 evaluations of CGIAR Research Programs: large, multi-year, global research-for-development programs that make up the bulk of the CGIAR portfolio. The research programs submit proposals, which undergo independent scientific review before being considered for approval for the next cycle of funding. This period of designing research programs and developing proposals is a critical time to measure the usefulness and effectiveness of evaluations.

IEA reviewed the use of evaluations during the program design, assessment, and approval process, distinguishing two types of use: validation, whereby reference to an evaluation supports current program strengths and directions; and change, whereby a program refers to changes made as a result of the evaluation's findings or recommendations.

We found significant use of evaluations by program management, with 129 distinct references across the 10 global programs' proposals. Significantly, the majority of these references (76) were made to support changes and adjustments to the programs. Changes made with reference to an evaluation were mainly in critical areas such as program strategy and priorities (53%) and governance and management (20%), further illustrating the effectiveness and use of the evaluations.

Furthermore, the independent expert assessment of the program proposals also referenced and used the evaluations in its review process (61 references), indicating another dimension of their effectiveness and utility.

In reviewing the effectiveness of evaluations, such performance indicators can provide an additional source of information, alongside other integrated performance assessments. Such tools can further illustrate the usefulness of evaluations and reflect the learning and change that result from an evaluation process.