Search results: 10 resources
- This five-step framework, developed and tested by a foundation, embeds learning in emergent systems change strategies. It prioritizes the testing of hypotheses and assumptions, uses learning questions, and calls for examining both confirming and disconfirming evidence. The resource describes a framework for embedding learning in systems change strategies and for testing strategic uncertainties: learning and evaluation approaches that accompany systems change efforts need to fit with and support the emergent...
- The Reduced Access Analytical Methods (RAAM) toolkit is a practical resource designed to help humanitarian practitioners overcome monitoring challenges in reduced access environments. Reduced access can be caused by natural disaster, conflict, political instability, or other factors, and typically makes it difficult to conduct normal monitoring of program implementation. The RAAM toolkit offers technical and managerial tools for a menu of analytical methods that can fill information gaps...
- How can donors and grantees work together to create effective monitoring, evaluation, and learning (MEL) practices that drive field-wide transformation? The Open Society Foundations' Fiscal Governance Program found success by focusing on six key approaches, including empowering grantees and relinquishing power. In 2021, an external close-out evaluation of the program by Intention to Impact (the program ran for seven years and gave over $150 million in grants) revealed something remarkable: the...
- This CDI Practice Paper by Tom Aston and Marina Apgar makes the case for 'bricolage' in complexity-aware and qualitative evaluation methods. It provides a framework, based on a review of 33 methods, to support evaluators in being more intentional about bricolage and in combining the component parts of relevant methods more effectively. It discusses two cases from practice to illustrate the added value of taking a more intentional approach. It further argues that navigating different forms of...
- Calls for more 'adaptive programming' have been prominent in international development practice for over a decade. Learning-by-doing is a crucial element of this, but programmes have often found it challenging to become more learning-oriented. Establishing some form of reflective practice, against countervailing incentives, is difficult. Incorporating data collection processes that generate useful, timely and practical information to inform these reflections is even more so. This paper...
- In the CEDIL Methods brief 'Evaluating complex interventions: What are appropriate methods?', we identify four types of complex development interventions: long causal chain interventions, multicomponent interventions, portfolio interventions, and system-level interventions. These interventions are characterised by multiple activities, multiple outcomes, multiple components, a high level of interconnectedness, and non-linear outcomes.
- This landscape review on measuring and monitoring adaptive learning draws lessons from five adaptive programming guidelines and toolkits and one implementation science framework to inform the monitoring and evaluation of adaptive learning. The introduction of adaptive learning processes and skillsets in global health programming is part of an emerging strategy to advance a learning culture within projects and teams and to improve health program performance. The monitoring and...
- This paper reviews promising methods for the evaluation of complex interventions that are new or have been used in a limited way. It offers a taxonomy of complex interventions in international development and draws on literature to discuss several methods that can be used to evaluate these interventions. Complex interventions are those that are characterised by multiple components, multiple stakeholders, or multiple target populations. They may also be interventions that incorporate...
- Internal and external stakeholders have different information needs over a project's life, for purposes that include adaptive management, accountability, compliance, reporting and learning. A project's monitoring, evaluation, accountability and learning (MEAL) system should provide the information needed by these stakeholders at the level of statistical reliability, detail and timing appropriate to inform data use. In emergency contexts where the situation is still fluid, 'informal...
- Most people agree that monitoring and evaluation (M&E) should be used for both learning and accountability. However, there is no consensus about which one is more important. The debate matters because there is sometimes tension between the two purposes. In the past there has often been a disconnect between M&E and learning. Many M&E systems are primarily designed to enable accountability to donors.