Finance: Research, Policy and Anecdotes

The world of evaluation – from a development bank’s view

For the past six years I have served on the advisory evaluation panel of the Dutch development bank FMO, a rather unique experience, as it (i) gave me insights into the other side of the evaluation process that we academics are usually on, and (ii) allowed me to serve as a bridge between academia and the development finance world. As background: some seven years ago, FMO was asked by the Dutch government to evaluate the impact of projects in developing countries funded by the Dutch government, thus ascertaining whether taxpayers’ money was actually put to good use. Being relatively new to the evaluation world, FMO got up to speed admirably quickly on how best to go about this, which included forming an advisory panel that I joined at the beginning of 2013.


I learned quite a lot about the challenges faced by evaluation departments in development banks. One, in several instances projects were already underway when the evaluation was requested, which made a pre-intervention baseline all but impossible and an evaluation thus more challenging. And even where projects had not yet started, a rigorous evaluation that addresses selection bias and endogeneity is often not possible: expanding an electricity network or building a bridge or road is hard if not impossible to randomise, so alternative evaluation methods are called for. Where FMO had no contact with the ultimate beneficiaries (e.g., a credit line cum technical assistance to financial institutions), a proper impact evaluation was often not possible, and an effectiveness study (assessing how the resources were used at the level of the financial institutions) was more appropriate.


Two, unlike in academic work, project evaluation (actually, project preparation) starts with a theory of change: the input is provided by FMO or other donors (in the form of resources, technical assistance, etc.), the output is envisioned at the level of local counterparts, the outcome at the level of beneficiaries (households and enterprises), and the ultimate impact is higher growth or poverty reduction. A first important question is which parts of the theory of change can and should be evaluated. Another important question is the value added of individual evaluations – some questions have been extensively assessed before, while others are novel but might also be harder to assess (e.g., the impact of expanding external finance for SMEs rather than for micro-entrepreneurs). Ultimately, this also reflects different interests and roles for academics and development banks’ evaluation departments: academics are interested primarily in what works best, while a development bank like FMO also has to care about the value added it can provide through its funding and the effectiveness of delivering this funding to counterparts (not necessarily the same as the final beneficiaries) in the receiving country.


Being a bridge between academics and the evaluation department also provided interesting insights. Undertaking an evaluation study for FMO can be attractive for academics (apart from some additional income) for two main reasons: one, being able to publish the results of the evaluation in an academic journal, and two, gaining access to interesting data and possible future cooperation. However, it is also clear that the interests of academics and development banks do not always match – as much as the latter might be interested in rigorous evaluation, they face budget and time constraints, while the former can get easily frustrated by institutional constraints and the difficulty of implementing adequate research methods (constraints often arising from the implementing institution in the developing country). Not surprisingly, there was thus a rich mix of consultancy companies focusing more on effectiveness or “light” evaluation studies and academic teams going all the way to econometrically rigorous evaluation.


In a nutshell, there is much more to the world of evaluation out there than randomised control trials. Alternative methods, such as quasi-experimental designs, regression discontinuity and propensity score matching, might help. However, even these might not be feasible in some instances, so that descriptive and qualitative methods might be necessary. While the latter might not be interesting for us academic economists, they can play an important role in completing the picture on “what works best” and – as importantly – “what can each player do”.
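To make the matching idea above concrete, here is a minimal, purely illustrative sketch of why matching-style methods can matter when randomisation is infeasible. The data are invented, and real propensity score matching would estimate a score over many covariates rather than match on a single observed characteristic (here, a hypothetical "firm size"); the point is only that a naive treated-vs-control comparison can be distorted by selection, while comparing each treated unit to its closest control reduces that distortion.

```python
# Hedged sketch of nearest-neighbour matching on one observed covariate.
# All numbers are hypothetical; a real evaluation would match on an
# estimated propensity score built from many covariates.

# (firm_size, outcome) pairs for firms that received funding (treated)
# and firms that did not (control)
treated = [(10, 5.0), (20, 7.0), (30, 9.0)]
control = [(8, 4.0), (12, 4.5), (19, 6.0), (28, 7.5), (50, 12.0)]


def naive_effect(treated, control):
    """Difference in raw group means, ignoring selection."""
    t_mean = sum(y for _, y in treated) / len(treated)
    c_mean = sum(y for _, y in control) / len(control)
    return t_mean - c_mean


def matched_effect(treated, control):
    """Average treated-minus-control difference, where each treated
    unit is compared to the control unit closest in firm size."""
    diffs = []
    for size, outcome in treated:
        # nearest neighbour in the covariate (firm size)
        _, c_outcome = min(control, key=lambda c: abs(c[0] - size))
        diffs.append(outcome - c_outcome)
    return sum(diffs) / len(diffs)


print(round(naive_effect(treated, control), 2))    # 0.2
print(round(matched_effect(treated, control), 2))  # 1.17
```

In this toy example the naive comparison understates the effect because the control group happens to contain much larger firms with high outcomes; matching on size removes that imbalance. The same logic, applied to an estimated propensity score, underlies the "light" quasi-experimental evaluations discussed above.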