Development programs and policies need systematic and credible evidence to inform interventions. Impact evaluations have emerged as a means of providing such an evidence base. Here we look at their adoption, and at what we have learned about bridging the gap between research, policy, and practice.
- A challenge for adopting impact evaluations is developing cultures of learning. The aid sector tends to downplay mistakes and poor results, favour policy-driven research, and commission evaluations focused on accountability that serve a managerial function. Evidence use also needs to be institutionalised at the national and local levels, so that it is sustained despite changes in political leadership.
- The uses of impact evaluations are mixed and only partially accounted for. Best-practice cases have been found to improve links between researchers and policymakers. Overall, impact evaluations have mostly been used to inform program decisions, with less influence on planning and budgeting. However, it has been difficult to fully assess their use, because incorporating recommendations into practice can be slow or indirect, and there is little financing to track effects once projects have closed. More studies are needed that trace effects from research to policy, and vice versa, and that track the complex pathways from evidence to socio-economic impact.
- It is clear that the link between impact evaluations and learning is not straightforward. To begin with, impact evaluations differ in quality, cost, and timeliness, all of which affect their uptake. There have been improvements in how efficiently they are conducted and how user-friendly they are. However, impact evaluations could better support use by reporting costs, tracking outcomes over time, and including subgroup analysis.
- Additionally, the usefulness of an impact evaluation depends on the appropriateness of the methodology used. Randomised controlled trials (RCTs) were initially the main approach used in impact evaluations, but they are best suited to measuring average effects and are less able to capture how and why interventions work. Depending on the needs and context, participatory and inductive methodologies may be more appropriate. Systematic reviews and existing research are also being drawn upon, and there is growing recognition that impact evaluations are not the only source of rigorous evidence.
- The volume of impact evaluations has increased but coverage is uneven. There are geographical gaps in West Africa, the Middle East, and North Africa. Impact evaluations have diversified beyond the health sector, with a growing number of evaluations in education and social issues, but fewer impact evaluations have been conducted in areas where most aid and government budgets are spent such as governance, transportation, energy, ICT, gender, and inequity.
- Funding for impact evaluations continues to be concentrated in a small pool of international funders. Few bilateral donors fund impact evaluations of their own programs, and many governments in middle-income countries are not funding impact evaluations. This raises concerns that impact evaluations may skew toward donor interests rather than addressing local needs, and may lack the local buy-in needed to put evidence into action.
Read the full articles:
- How is impact evaluation contributing to evidence use in low and middle income countries (Video, 75 minutes)
- From Evidence to Impact: Development contributions of Australian aid funded research (Report, PDF 1.35MB, 2 hours)
- Learning how to learn: eight lessons for impact evaluations that make a difference (Background note, PDF 89KB, 10 minutes)
Photo by Paul Hanaoka on Unsplash