Applied Microeconometrics: Program Evaluation

Academic Year
Katja Kaufmann
Office Hours: TBA

This course will introduce students to the most important approaches of program evaluation. These approaches have been widely used in the economics literature in diverse fields (development, labor, public economics, economics of education and health) and can be applied to a wide range of questions, such as evaluating the effects of antipoverty programs (e.g. conditional cash transfer programs), educational and job training programs, preventative health care and family planning programs, and changes in laws such as minimum wage laws or the minimum drinking age.

The central challenge in the program evaluation literature is the problem of the missing counterfactual: to evaluate the impact of a program, it is necessary to know not only the outcome of an individual in the presence of the program, but also what the outcome would have been in its absence (the counterfactual). The problem is that individuals are never observed in both states (presence and absence of the program) at the same time. The goal of all the approaches discussed in this course (and in fact one of the main challenges in economics in general) is therefore to derive appropriate counterfactuals under different assumptions; a notational sketch of this problem is given at the end of this overview. The available approaches rely on very different assumptions and have different data requirements.

The goal of this course is to guide students in acquiring a solid econometric understanding of the different approaches, so that they are able to decide which approach is appropriate for evaluating a specific program and to interpret the results carefully. From a broader perspective, students will learn about fundamental issues of identification, i.e. what kind of variation identifies the effect of interest. The purpose of the course is thus twofold. First, it aims at improving students' understanding of the econometric issues that arise in the program evaluation literature and at developing analytical skills for approaching econometric issues in general. Second, the course will teach students the skills necessary to apply the most important approaches of program evaluation in their own research, addressing a variety of questions in different areas of economics, while raising their awareness of the limitations of each approach. This second goal includes learning how to implement the different approaches with commonly used statistical software such as STATA.

Structure of the class: In class I will present the theory of the different econometric approaches (see the list of theory papers, textbooks and survey articles below), motivated by potential applications. For each approach, this syllabus lists a selection of application papers from a variety of fields. Students are asked to read one application paper for each approach, depending on their areas of interest, and to present in class at least once (depending on the number of students enrolled).
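
The missing-counterfactual problem described above can be summarized in the potential-outcomes notation of the Rubin (1974) reading. The symbols below (Y_i(1), Y_i(0), D_i) are standard notation used here only as a sketch; they are not taken verbatim from the course text:

\[
  Y_i = D_i \, Y_i(1) + (1 - D_i) \, Y_i(0), \qquad D_i \in \{0, 1\},
\]
\[
  \text{ATE} = E\big[Y_i(1) - Y_i(0)\big], \qquad
  \text{ATT} = E\big[Y_i(1) - Y_i(0) \mid D_i = 1\big].
\]

Only one of Y_i(1) and Y_i(0) is observed for each individual, so the average treatment effect (ATE) and the average treatment effect on the treated (ATT) cannot be computed directly from the data; each approach covered below constructs the missing counterfactual under its own identifying assumptions.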
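
As an illustration of the implementation side mentioned above, a two-period difference-in-differences regression (one of the approaches on the reading list) could be run in STATA along the lines of the following sketch. The variable names y, treat, post and id are hypothetical placeholders, not part of any course material:

* Hypothetical difference-in-differences sketch (variable names are placeholders)
* y     : outcome of interest
* treat : 1 for the treatment group, 0 for the comparison group
* post  : 1 for the post-program period, 0 for the pre-program period
* id    : unit identifier used to cluster standard errors
regress y i.treat##i.post, vce(cluster id)
* The coefficient on the interaction term 1.treat#1.post is the
* difference-in-differences estimate of the program effect.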

Reading List

Textbooks:
o Cameron, A. and Trivedi, P. (2005) Microeconometrics: Methods and Applications, Cambridge University Press.
o Heckman, J. and Robb, R. (1985) Alternative Methods for Evaluating the Impact of Interventions, in: Longitudinal Analysis of Labor Market Data, New York: John Wiley.
o Wooldridge (2002) Econometric Analysis of Cross Section and Panel Data, MIT Press, Chapter 18 (Estimating Average Treatment Effects).

Survey articles:
o Angrist and Krueger (1999) Empirical Strategies in Labor Economics, in Handbook of Labor Economics, Vol. 3A, Orley Ashenfelter and David Card (Eds.), Amsterdam: North Holland.
o Blundell, R. and Costa-Dias, M. (2002) Alternative Approaches to Evaluation in Empirical Microeconomics, cemmap working paper CMP10/02.
o Card, D. (1999) The Causal Effect of Education on Earnings, in Handbook of Labor Economics, Vol. 3A, Orley Ashenfelter and David Card (Eds.), Amsterdam: North Holland.
o Imbens, G. and Wooldridge, J. (2007) Recent Developments in the Econometrics of Program Evaluation, Lecture Notes for NBER.

1 Introduction to Program Evaluation: The Counterfactual Framework (and the Problem of Causality)
o See textbooks.
o * Rubin (1974) Estimating Causal Effects of Treatments in Randomized and Nonrandomized Studies, Journal of Educational Psychology, 66.

2 Randomized and Natural Experiments
o * Duflo, E., Glennerster, R. and Kremer, M. (2008) Using Randomization in Development Economics Research: A Toolkit, Chapter 61 in Handbook of Development Economics, Vol. 4, Elsevier.
o Manski (1996) Learning about Treatment Effects from Experiments with Random Assignment of Treatment, Journal of Human Resources, 31.
o * Meyer, B.D. (1995) Natural and Quasi-Experiments in Economics, Journal of Business and Economic Statistics, 13 (2).
o Rosenzweig and Wolpin (2000) Natural "Natural Experiments" in Economics, Journal of Economic Literature, 38 (4).
o + Angrist, J., Lang, D. and Oreopoulos, P. (2008) Incentives and Services for College Achievement: Evidence from a Randomized Trial, American Economic Journal (AEJ): Applied Economics, forthcoming.
o + Duflo, E. and Saez, E. (2003) The Role of Information and Social Interactions in Retirement Plan Decisions: Evidence from a Randomized Experiment, QJE, 118.
o + Duflo, E. and Kremer, M. (2003) Use of Randomization in the Evaluation of Development Effectiveness, Paper prepared for the Conference on Evaluation and Development Effectiveness.
o + Duflo, E., Dupas, P., Kremer, M. and Sinei, S. (2006) Education and HIV/AIDS Prevention: Evidence from a Randomized Evaluation in Western Kenya, Policy Research Working Paper Series 4024, The World Bank.
o * Gertler
o * Schultz, P. (2004) School Subsidies for the Poor: Evaluating the Mexican Progresa Poverty Program, Journal of Development Economics, 74(1).

3 Difference-in-Difference Estimator
o Abadie, A. (2005) Semiparametric Difference-in-Differences Estimators, Review of Economic Studies, 72.
o * Bertrand, M., Duflo, E. and Mullainathan, S. (2004) How Much Should We Trust Differences-in-Differences Estimates?, Quarterly Journal of Economics, 119(1).
o * Meyer, B.D. (1995) Natural and Quasi-Experiments in Economics, Journal of Business and Economic Statistics, 13 (2).
o Abadie, A. and Gardeazabal, J. (2003) The Economic Costs of Conflict: A Case Study of the Basque Country, AER, 93.
o + Besley, T. and Case, A., Unnatural Experiments? Estimating the Incidence of Endogenous Policies, Economic Journal, Vol. 110, November.
o * Card, D. and Krueger, A. (1994) Minimum Wages and Employment: A Case Study of the Fast Food Industry in New Jersey and Pennsylvania, AER, 84 (4).
o Duflo, E. (2001) Schooling and Labor Market Consequences of School Construction in Indonesia: Evidence from an Unusual Policy Experiment, AER, 91(4).
o + Gruber, J. (1994) The Incidence of Mandated Maternity Benefits, AER, 84 (3).
o + Gruber, J. (1996) Cash Welfare as a Consumption Smoothing Mechanism for Single Mothers, NBER Working Paper.
o + Morduch, J. (1998) Does Microfinance Really Help the Poor? New Evidence from Flagship Programs in Bangladesh, mimeo.

4 Matching Estimator
o Rosenbaum, P.R. and Rubin, D.B. (1983) The Central Role of the Propensity Score in Observational Studies for Causal Effects, Biometrika, 70 (1).
o * Heckman, J. J., Ichimura, H. and Todd, P. (1998) Matching as an Econometric Evaluation Estimator, Review of Economic Studies, 65.
o * Smith, J. and Todd, P. (2001) Reconciling Conflicting Evidence on the Performance of Propensity Score Matching Methods, American Economic Review, Vol. 91, No. 2 (May).
o * Todd, P. (2006) Matching Estimators, Palgrave Dictionary of Economics.
o Possible extensions: propensity score with continuous treatment (Imbens (2000), Hirano and Imbens (2004)); propensity score analysis for the case of multiple treatments (Lechner (2001)).
o + Angrist, J. D., Estimating the Labor Market Impact of Voluntary Military Service Using Social Security Data on Military Applicants, Econometrica, Vol. 66, No. 2, March.
o + Behrman, J., Cheng, Y. and Todd, P. (2004) Evaluating Pre-School Programs when Length of Exposure to the Program Varies: A Nonparametric Approach, Review of Economics and Statistics, 86 (1).
o + Galiani, S., Gertler, P. and Schargrodsky, E. (2005) Water for Life: The Impact of the Privatization of Water Services on Child Mortality in Argentina, JPE, 113.
o + Gertler, P., Levine, D. and Ames, M. (2004) Schooling and Parental Death, Review of Economics and Statistics, 86 (1).
o + Lavy, V. (2004) Performance Pay and Teacher Effort, Productivity and Grading Ethics, NBER Working Paper.
o * Todd, P. (2001) A Practical Guide for the Implementation of Matching Estimators, manuscript, download from

5 Regression Discontinuity Approach
o * Hahn, J., Todd, P. and Van der Klaauw, W. (2001) Identification and Estimation of Treatment Effects with a Regression-Discontinuity Design, Econometrica, Vol. 69, No. 1 (January).
o * Imbens, G. and Lemieux, T. (2007) Regression Discontinuity Designs: A Guide to Practice, Journal of Econometrics.
o * Blundell, R., Costa-Dias, M., Meghir, C. and Van Reenen, J. (2004) Evaluating the Employment Impact of a Mandatory Job Search Assistance Program, Journal of the European Economic Association, Vol. 2, No. 4 (June).
o + Carpenter, C. and Dobkin, C. (2008) The Effect of Alcohol Consumption on Mortality: Regression Discontinuity Evidence from the Minimum Drinking Age, American Economic Journal (AEJ): Applied Economics, forthcoming.
o + Chen and Shapiro, Do Harsher Prison Conditions Reduce Recidivism? A Discontinuity-Based Approach, American Law and Economics Review, Spring.
o + Garibaldi, P., Giavazzi, F., Ichino, A. and Rettore, E. (2007) College Cost and Time to Complete a Degree: Evidence from Tuition Discontinuities, CEPR Discussion Paper.
o + Ludwig, J. and Miller, D. (2006) Does Head Start Improve Children's Life Chances? Evidence from a Regression Discontinuity Design, NBER Working Paper.
o + Van der Klaauw (1996) A Regression-Discontinuity Evaluation of the Effect of Financial Aid on College Enrollment, mimeo, New York University.

6 Instrumental Variables Estimation: IV (ATE, ATT, LATE), Local IV (MTE) and IV-Quantile Regression

IV
o * Angrist, J., Imbens, G. and Rubin (1996) Identification of Causal Effects Using Instrumental Variables, Journal of the American Statistical Association, 91 (with comments by J. Heckman and R. Moffitt).
o * Angrist, J. and Krueger, A. (2001) Instrumental Variables and the Search for Identification: From Supply and Demand to Natural Experiments, Journal of Economic Perspectives, 15(4).
o Heckman, J. J. (1997) Instrumental Variables: A Study of Implicit Behavioral Assumptions Used in Making Program Evaluations, Journal of Human Resources, 32.
o * Bound, J., Jaeger, D. and Baker, R. (1995) Problems with Instrumental Variable Estimation When the Correlation Between the Instruments and the Endogenous Variable Is Weak, Journal of the American Statistical Association, 90.

Application
o * Angrist, J. and Krueger, A. (1991) Does Compulsory School Attendance Affect Schooling and Earnings?, Quarterly Journal of Economics, Vol. 106 (November).
o + Angrist, J., Bettinger, E., Bloom, E., King, E. and Kremer, M. (2002) Vouchers for Private Schooling in Colombia: Evidence from a Randomized Natural Experiment, AER, 92 (5).
o + Gentzkow and Shapiro (2008) Preschool Television Viewing and Adolescent Test Scores: Historical Evidence from the Coleman Study, QJE.
o + Ichino, A. and Winter-Ebmer, R. (2004) The Long-Run Educational Cost of World War Two, Journal of Labor Economics, 22 (1).

LIV Theory
o Heckman, J. J. and Vytlacil, E. (2005) Structural Equations, Treatment Effects, and Econometric Policy Evaluation, Econometrica, 73 (3).
o Heckman, J. J., Vytlacil, E. and Urzua, S. (2006) Understanding Instrumental Variables in Models with Essential Heterogeneity.
o Heckman, J.J. and Vytlacil, E. (2001) Policy-Relevant Treatment Effects, American Economic Review, Papers and Proceedings, 91 (2).

Application and Theory
o * Carneiro, P., Heckman, J. J. and Vytlacil, E. (2005) Understanding What IV Estimate: Estimating Marginal and Average Returns to Education.

IV-Quantile Regression Theory
o Abadie, A., Angrist, J. and Imbens, G. (2002) Instrumental Variables Estimation of Quantile Treatment Effects, Econometrica, 70(1).
o Chernozhukov, V. and Hansen, C. (2005) An IV Model of Quantile Treatment Effects, Econometrica, 73 (1).

Application
o Abadie, A., Angrist, J. and Imbens, G. (2001) Instrumental Variables Estimation of the Effect of Subsidized Training on the Quantiles of Trainee Earnings, Econometrica.
o Djebbari, H. and Smith, J. (2008) Heterogeneous Impacts in Progresa, IZA Discussion Paper.

7 Comparison and Critical Evaluation of Approaches

Literature for Comparison and Critical Evaluation:
o + LaLonde, R. (1986) Evaluating the Econometric Evaluations of Training Programs with Experimental Data, American Economic Review, 76.
o * Heckman, J.J., Ichimura, H., Smith, J.A. and Todd, P. (1998) Characterizing Selection Bias Using Experimental Data, NBER Working Paper.

Literature: From Randomized Experiments to Structural Models
o * Attanasio, O., Meghir, C. and Szekely, M. (2003) Using Randomised Experiments and Structural Models for 'Scaling Up': Evidence from the PROGRESA Evaluation, IFS Working Papers, EWP03/05.
o Meghir, C. (2006) Dynamic Models for Policy Evaluation, IFS Working Papers, W06/08.
o + Todd, P. and Wolpin, K. (2006) Assessing the Impact of a School Subsidy Program in Mexico: Using Experimental Data to Validate a Dynamic Behavioral Model of Child Schooling and Fertility, AER, 96(5).