In recent years, many development agencies have made intensive efforts to improve their efficiency and increase their impact on rural poverty. At the heart of this new strategic management process is the measurement of performance. With Household Food Security (HFS) and nutritional security now clearly identified as desired outcomes of many development projects, there is a need to assess the performance of investment projects in terms of their impact on the HFS and nutrition status of their target groups.
When the target populations of development agencies are highly risk-prone, evaluations require rigorous formulation and monitoring. Poorly thought-out evaluations may inadvertently act as an incentive to target better-off groups in a project's zone of influence, who offer higher returns and promise faster disbursement of project resources. There is also a clear danger of prioritizing outcomes or indicators simply because they are easier to measure, which fails to provide the information needed to address broader objectives or to enhance the effectiveness of rural development projects for "the poorest of the poor." Proper evaluations likewise call for greater awareness that projects may pursue less tangible objectives, such as the formation of social capital. Less tangible and broader development objectives do not, however, justify less rigorous evaluation methods. On the contrary, they call for subtler and more sensitive methodologies and indicators.
This guide emphasizes the design of quantitative impact evaluation exercises for HFS and nutrition, and provides development practitioners with the basic principles of why, when and how to choose and implement a particular evaluation system. We argue that two key features of a good impact evaluation study are the availability of accurate baseline information and a properly thought-out control group, allowing for before-after and with-without comparisons respectively. A joint temporal and cross-sectional comparison of the beneficiary group against a counterfactual is crucial to control simultaneously for the external factors likely to contaminate the impact evaluation results. We also argue that involving the evaluation team from the earliest stages of project design is the most suitable way to ensure a proper and accurate evaluation without having to rely on more complicated statistical techniques, and to permit a sound learning process to ensue from the evaluation exercise. Where conditions dictate, however, statistical techniques can still provide the evaluation team with effective tools for a well-founded impact evaluation.
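The combined before-after and with-without comparison described above corresponds to the familiar difference-in-differences logic. As a minimal sketch (the function name and all numeric values below are invented for illustration, not drawn from the guide), the computation reduces to four group means:

```python
# Hypothetical illustration of the double comparison described above:
# a difference-in-differences (DiD) estimate combines the before-after
# change in the beneficiary group with the same change in a control
# (counterfactual) group, netting out common external shocks such as
# weather or price movements that affect both groups.

def did_estimate(treat_before, treat_after, control_before, control_after):
    """Return the DiD impact estimate from four group means
    (e.g. average household food-consumption scores)."""
    before_after = treat_after - treat_before        # change with the project
    counterfactual = control_after - control_before  # change without it
    return before_after - counterfactual

# Invented example: both groups improve (say, a good harvest year),
# but beneficiaries improve more; DiD attributes only the excess
# change to the project.
impact = did_estimate(treat_before=50.0, treat_after=62.0,
                      control_before=48.0, control_after=53.0)
print(impact)  # 7.0 (12-point gain minus the 5-point secular trend)
```

In practice such estimates are obtained from regressions on household survey data rather than raw group means, but the sketch shows why both a baseline and a control group are needed: dropping either one collapses the estimate into a simple before-after or with-without comparison, each of which is contaminated by factors the other controls for.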