Results: 3 resources
Laying the Foundations for Impact: Lessons from the GCRF Evaluation. Vogel, I., & Barnett, C. - 2023 - The European Journal of Development Research, 35(2), 281–297
Research for development (R4D) aims to make a tangible difference to development challenges, but these effects typically take years to emerge. Evaluation (especially impact evaluation) often takes place before there is evidence of development impact. In this paper, we focus on opportunities for assessing the potential for impact at earlier stages in the research and innovation process. We argue that such a focus can help research programme managers and evaluators learn about the...
Learning about Theories of Change for the Monitoring and Evaluation of Research Uptake (No. 14; IDS Practice Paper In Brief). Barnett, C., & Gregorowski, R. - 2013 - IDS
This paper captures lessons from recent experiences of using ‘theories of change’ among organisations involved at the research–policy interface. The literature in this area highlights much of the complexity inherent in the policymaking process, as well as the challenges of finding meaningful ways to measure research uptake. As a tool, ‘theories of change’ offers much, but the paper argues that the very complexity and dynamism of the research-to-policy process mean that any theory of...
Three Approaches to Monitoring: Feedback Systems, Participatory Monitoring and Evaluation and Logical Frameworks. Jacobs, A., Barnett, C., & Ponsford, R. - 2010 - IDS Bulletin, 41(6), 9
This article compares key attributes, strengths and weaknesses of three different approaches to monitoring development interventions: the logical framework approach, participatory monitoring and evaluation (PM&E) and feedback systems. Academic and practitioner literature describes how logframes meet the needs of senior decision-makers to summarise, organise and compare projects. PM&E meets the needs of field staff to work sensitively with intended beneficiaries and support their learning and...