The stony path to greater knowledge
Every year, hundreds of billions of dollars flow into development cooperation around the world. But what impact does this money actually have? Critics are calling for more experimental field research, while experts see an intelligent mix of methods as the way forward.
“No responsible physician would consider prescribing medications without properly evaluating their impact or potential side effects. Yet in social development programs, where large sums of money are spent... no such standard has been adopted”. This sobering conclusion was reached in 2006 by a working group of the Center for Global Development in Washington, in a provocative report entitled “When Will We Ever Learn? Improving Lives Through Impact Evaluation”. The experts complained about gaps in evaluating the impact of development cooperation, and called for the systematic establishment of evidence-based decision-making.
The economist and poverty researcher Esther Duflo was one of those who worked on the Washington report. Back in 2003, she had already co-founded the Poverty Action Lab (J-PAL), a research institute based at the Massachusetts Institute of Technology. J-PAL focuses specifically on randomised field experiments in order to measure the impact of development interventions rigorously. In one widely noticed study, for example, Duflo showed that while the much-praised microloans in India did help to reduce poverty, they did not improve the lives of those affected to the degree anticipated.
This criticism of a lack of standards, along with the call for a stronger evidence base, did not go unheeded in the professional community. One answer came in 2008 with the founding of the independent International Initiative for Impact Evaluation (3ie). This NGO brings scientists together with policymakers and practitioners, organises conferences on topics such as ‘what works’, and promotes evidence-based evaluations. Since its founding, it has supported more than 200 impact studies in 50 countries, worth a total of USD 85 million.
OECD criteria serve as international guidelines
In parallel with these scientific efforts, donor and partner countries have also refined and professionalised their evaluation instruments over the last decade. The 2005 Paris Declaration, for example, created a basis for common quality standards for determining the effectiveness of development cooperation. The OECD Development Assistance Committee defined five evaluation criteria: relevance, effectiveness, efficiency, impact and sustainability. These are not binding, but they are internationally recognised as a guide for action.
The OECD itself also monitors these criteria in its own country reports, which repeatedly criticise a lack of policy coherence among donor countries, such as when a country’s foreign policy runs counter to the goals of poverty alleviation. The effectiveness of individual development measures, however, is evaluated by the donor countries themselves. Sceptics doubt the independence of these evaluation units because, in most countries, they sit within the very organisations that disburse the money.
Germany struck out on a different path, however, by mandating an autonomous institute: the German Institute for Development Evaluation (DEval), founded in 2012. “We place a great emphasis on a scholarly approach and on independence”, insists DEval’s director, the political scientist Jörg Faust. “We are also strongly focussed on a hands-on approach and want to initiate learning processes”. The topics it evaluates are usually multi-layered and complex, and thus require a high degree of expertise in both content and methodology.
Qualitative methods are also in demand
The methodological challenge, says Faust, lies in the basic question of “how a situation might have developed if the development intervention had not taken place”. To investigate this, his institute combines quantitative and qualitative methods. “When we carry out an evaluation, it’s not just about identifying the impact, but about finding out why there is an impact”. This requires both rigorous impact research and elaborate qualitative methods. “An informed debate won’t play the one off against the other”, emphasises Faust.
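Faust’s counterfactual question has a standard formalisation in the evaluation literature, the potential-outcomes framework. The following is a minimal sketch in textbook notation, offered as an illustration rather than as DEval’s own formulation: each participant $i$ has two potential outcomes, $Y_i(1)$ with the intervention and $Y_i(0)$ without it, of which only one can ever be observed. The average treatment effect is
\[
\tau = \mathbb{E}\bigl[Y_i(1) - Y_i(0)\bigr],
\]
and randomly assigning the intervention $T_i$ makes treatment status independent of the potential outcomes, so this unobservable quantity equals an observable difference in group means:
\[
\tau = \mathbb{E}[Y_i \mid T_i = 1] - \mathbb{E}[Y_i \mid T_i = 0].
\]
This is why randomised field experiments are prized for establishing whether an intervention works, while the ‘why’ that Faust mentions typically calls for qualitative evidence.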
A few years ago, there was trench warfare between the ‘randomistas’ – the adherents of randomised field experiments as the scientific gold standard – and their critics. Today, however, the methodological debate is conducted in a more moderate tone, explains Faust: “There is now greater acceptance of a position that asks more openly how quantitative and qualitative elements can be combined into a mix of methods that maximises knowledge production”.
Investing more in global knowledge
Isabel Günther, a development economist and the Head of the Center for Development and Cooperation at ETH Zurich, also wants to find out what makes development cooperation effective, and she isn’t confining herself to randomised field experiments. Experimental methods are best suited to the micro level, she says; to analyse factors at the macro level, such as the impact of tax policies, other quantitative procedures are often needed. What is essential is always to identify “what form of development cooperation has an impact in what context, and where it doesn’t”. This fact-based identification of effective interventions by means of scientifically recognised methods is in everyone’s interest. But this does not mean that “every single project or programme has to be evaluated”. Studies on the effectiveness of development aid should not merely serve the accountability of one organisation, but should lead to a continuous improvement of the programmes, insists Günther. This learning process must take place beyond the boundaries of individual institutions. “The future lies instead in investing more in global knowledge on poverty alleviation, and in using this knowledge”.
There are no comparative figures to tell us just how much is spent across the world on evaluating development cooperation. According to Jörg Faust of DEval, not more than one to two percent of the OECD’s development aid money is spent on evaluation. “Given the learning and knowledge needs in fields such as global sustainability and how to deal with fragile states, this surely isn’t too much money”.
Challenging sustainability goals
Both Günther and Faust point to the UN’s 2030 Agenda for Sustainable Development, which has replaced its Millennium Development Goals. Adopted by the UN in 2015, the Agenda comprises seventeen ‘Sustainable Development Goals’ and 169 ‘targets’. In future, development cooperation should no longer merely contribute to poverty alleviation, but should also cushion the consequences of climate change.
This brings new challenges with it – and not just for the evaluators. Isabel Günther believes we must ask the fundamental question of whether all these challenges can be met with the instruments of development cooperation at a time when financial resources are actually being reduced. “Development aid is not the solution to all global problems”.
Theodora Peter is a freelance journalist specialising in development cooperation.
Context is everything
In an initial phase, countries are identified as being suitable for case studies. Bergman and Jafflin combine different methods in their work, utilising both quantitative and qualitative components.
Bergman and Jafflin are supportive of the move towards more evidence-based development programmes, but they emphasise that the strengths and weaknesses of the different methods must be taken into account. “Impact evaluations and experimental methods are not a panacea in themselves”, they admit.
They can also promote a ‘best-practice’ approach, which both see as problematic because “the recipients of aid programmes are then treated as a blank sheet of paper, all equally receptive to the most varied of interventions”. But the recipients are “complex social groups with their own cultures, national contexts and living conditions”. What works in one place won’t necessarily work everywhere. “We can’t design experiments for everything, or carry out impact evaluations everywhere”. The methods under discussion aren’t suitable for all research questions; how donor and recipient countries actually work together, for example, cannot be evaluated in this way.