

Innovative Evaluation Approaches for Youth Employment
August 23, 2011 - 3:45 p.m.

Presenters:  Kevin Hempel, World Bank
David Newhouse, World Bank
Mark Lynd, School-to-School International
Jeff Davis, School-to-School International

Moderator:  Daniel Oliver, International Youth Foundation

This session discussed innovative evaluation approaches for youth employment programs. Participants were divided into "speed dating" discussion groups, each focused on a specific impact evaluation design, and were given the opportunity to ask questions. Moderator Daniel Oliver provided a comprehensive introduction to impact evaluations and set the tone for the discussion at each table. Oliver noted that impact evaluation is a distinct methodology of evaluation. Widely regarded as the gold standard, impact evaluations have been implemented in the United States by agencies such as the U.S. Department of Education and are quickly growing in popularity in the development world. The World Bank is one of the leading donors in their application, and interest is growing within USAID to expand use of this evaluation method. Impact evaluations use rigorous statistical methods to measure impact and to verify how interventions are affecting beneficiaries. In sum, they provide evidence on what is and is not working, allowing us to better judge which programs are successful.
Mark Lynd and Jeff Davis from School-to-School International discussed an impact evaluation being conducted on a life skills, ICT training, and job placement project in Kenya: ICT Training for Young Women from Informal Settlements around Nairobi. The evaluation uses a randomized controlled design with a pre-test/post-test model. Its objective is to measure the effectiveness of the intervention and to determine whether the program increases the employability of the women it serves. Results are not yet available, as the evaluation is still in progress.

David Newhouse of The World Bank briefly presented on methods of constructing control groups. While impact evaluations with control groups are becoming more common, they are still rare. The three main criteria for choosing a method are credibility, comprehensiveness, and political acceptability, with pure randomization regarded as the gold standard. When pure randomization is not politically attainable, "silver" and "bronze" standard methods can be used; these include lottery, regression discontinuity, difference-in-differences, matching, random promotion, and random phase-in. Newhouse shared a sample design of an impact evaluation for an upcoming program in Papua New Guinea. The program will start in 2012, using a lottery method to assign the intervention.
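As an illustration of one of these quasi-experimental methods, a difference-in-differences estimate compares the change in outcomes for the treatment group with the change for the control group over the same period. The sketch below is a minimal, hypothetical example; the numbers and variable names are invented for illustration and do not come from the programs discussed in this session.

```python
# Illustrative difference-in-differences (DiD) sketch.
# All figures are hypothetical and are not drawn from the session.

def diff_in_diff(treat_pre, treat_post, control_pre, control_post):
    """Estimate impact as the treatment group's change in mean outcome
    minus the control group's change in mean outcome."""
    mean = lambda xs: sum(xs) / len(xs)
    treatment_change = mean(treat_post) - mean(treat_pre)
    control_change = mean(control_post) - mean(control_pre)
    return treatment_change - control_change

# Hypothetical employment rates (as fractions) before and after a program.
treated_before = [0.30, 0.35, 0.25]
treated_after  = [0.55, 0.60, 0.50]
control_before = [0.32, 0.28, 0.30]
control_after  = [0.40, 0.38, 0.42]

impact = diff_in_diff(treated_before, treated_after,
                      control_before, control_after)
print(round(impact, 3))  # treatment change (0.25) minus control change (0.10)
```

Subtracting the control group's change nets out trends that would have occurred without the program, which is the core logic behind the difference-in-differences approach Newhouse listed.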

Kevin Hempel from The World Bank spoke further about impact evaluations, clarifying that not all evaluations need to be impact evaluations. Hempel pointed out that the most important question to ask is: what do we want to learn from this evaluation? Depending on the answer, there are different types of evaluation to use: descriptive, normative, cause-and-effect, or impact. Impact evaluations require more time, more money, and a favorable political context.

Key takeaway points from this session include the growing interest in impact evaluations in the developing world and the requirements for properly implementing such an evaluation. Impact evaluations determine what is and is not working, and also provide better detail on what success means. Impact evaluations with proper control groups are still rare. Credibility, comprehensiveness, and political acceptability are key criteria for a successful impact evaluation.

To view the presentations, please click on a link below:

Hempel PPT (410 KB)
Newhouse PPT (428 KB)
Procedures and Results from an Impact Evaluation in Kenya PPT (158 KB)

For questions related to the 2011 Education Workshop,
please contact Rachel Kozolup at rkozolup@jbsinternational.com