Case study 1: What, why and how of impact evaluations

1 Case study 1: What, why and how of impact evaluations. Shubhra Mittal, Senior Policy and Training Manager, J-PAL South Asia. Kathmandu, March 28, 2017

2 I. Introduction to J-PAL; II. Why evaluate: a case study on pricing of preventive health products; III. Case study 2: understanding the impact of a programme

3 About J-PAL: who we are, what we do

4 Governments face multiple problems across various sectors. A typical social policy goal: improving learning outcomes. To address poor learning levels, there are many potential solutions: libraries, school equipment, free uniforms, technology, cash grants, information campaigns, pedagogy changes. The fundamental dilemma: how do we know which solution creates the most impact? How do we know which solution is the most cost-effective? Impact evaluations identify the causal impact of a programme (e.g. a pedagogy intervention) on an outcome of interest (e.g. learning outcomes) by comparing what happened with the programme to what would have happened without the programme, i.e. a counterfactual.

5 J-PAL's mission is to reduce poverty by ensuring that policy is informed by scientific evidence. EVALUATIONS: J-PAL affiliates conduct randomised evaluations to test and improve the effectiveness of poverty reduction programmes across eight sectors: agriculture; crime; education; environment and energy; health; finance and microfinance; labour markets; political economy and governance. POLICY: J-PAL affiliates and staff disseminate research results and build partnerships with policymakers to ensure policy is driven by evidence and effective programmes are scaled up (dissemination, partnerships, scale-ups). CAPACITY BUILDING: J-PAL trains implementers and policymakers to become better producers and users of evidence from impact evaluations (courses, workshops).

6 We are a network of 146 affiliated professors from over 49 universities, working through 7 global offices with local partners on local issues. J-PAL Global: headquarters, supporting the regional offices. J-PAL North America: 136 completed evaluations; 18 ongoing. J-PAL Latin America and the Caribbean: 86 completed; 41 ongoing. J-PAL Europe: 37 completed; 6 ongoing. J-PAL Africa: 131 completed; 102 ongoing. J-PAL South Asia: 98 completed; 68 ongoing. J-PAL Southeast Asia: 28 completed; 16 ongoing. Nearly 819 ongoing and completed evaluations across 8 sectors in over 78 countries.

7 J-PAL South Asia, established at the Institute for Financial Management and Research (IFMR), Chennai. Over 166 completed and ongoing evaluations across South Asia; 124 across 13 states of India. 19 different partnerships with various state governments. Partners in India, Bangladesh, Nepal, Pakistan, and Sri Lanka. Policymakers and practitioners trained to conduct high-quality monitoring and evaluation. South Asia senior management team: Esther Duflo (MIT), Director, J-PAL and Scientific Director, SA; Iqbal Dhaliwal (ex-IAS), Deputy Director, J-PAL and Scientific Director, SA; Sanjoy Narayan, Executive Director, J-PAL SA; Shobhini Mukerji, Executive Director, J-PAL SA.

8 Fostering evidence-based policymaking: state partnerships to institutionalise an evidence-based approach with the government; supporting the scale-up of programmes that work; collaborating with government departments and ministries to rigorously evaluate innovative solutions; sharing evidence on what works (and what doesn't); assisting in M&E capacity building.

9 Why evaluate? Using evidence to improve the effectiveness of policy.

10 Rigorous evaluations have produced important and surprising results. Major programmes proved not as effective as previously thought: fixing the supply of health services, inputs to education. Small interventions proved very cost-effective: deworming. Conventional wisdoms have been undermined: incentives for monitoring, community participation.

11 Case study 1: choosing between alternative programme designs. Pricing of preventive health products.

12 Pricing of preventive health products. The debate in the policy/donor community: free versus subsidised distribution. Arguments for cost sharing: people who need something are more willing to pay for it; paying for goods makes people more likely to use them (sunk cost bias is significant); giving away goods and services for free creates dependency (an entitlement effect); charging fees helps programmes maintain financial sustainability. Arguments for free distribution: prices prevent access for the people who need it the most; sunk cost bias may be negligible; free samples help people learn about a good's benefits, and they will be willing to pay for them later; there are positive health externalities that warrant complete subsidisation for maximum public health benefits.

13 Understanding usage patterns for ITNs based on price paid. Background: in Kenya, malaria is responsible for one out of every four child deaths; ITNs have been shown to reduce child mortality in regions of Africa; yet less than 5 percent of children and pregnant women sleep under an ITN. Programme: ITN distribution to pregnant women who visited clinics for prenatal care. (Cohen and Dupas)

14 Randomised evaluation design: understanding usage patterns for ITNs based on price paid. Pregnant women visiting clinics were offered ITNs at one of four randomly assigned prices: free, 97.5% subsidy, 95.0% subsidy, or 90.0% subsidy (the status quo). A further randomly assigned discount ($0 to the posted price) was offered to women interested in purchasing an ITN.

15 Results: cost sharing reduced take-up of ITNs. Charging 60 cents (10% of the actual price) for insecticide-treated nets (ITNs) reduced take-up by 60 percentage points relative to free distribution in Kenya (2007). No evidence that cost sharing put ITNs in the hands of the women who needed them most. No evidence that the act of paying for a product makes a recipient more likely to use it (ITNs were no more likely to be hanging in the house during a spot check). (Cohen and Dupas)
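
The headline number above is a difference in take-up rates expressed in percentage points. As a small worked example: the slides report the ~60-percentage-point drop but not the arm-level rates, so the two rates below are hypothetical, chosen only to be consistent with that reported gap.

```python
# Hypothetical arm-level take-up rates (not from the slides), chosen to match
# the reported ~60pp drop between free distribution and a 60-cent price.
take_up = {
    "free": 0.99,          # assumed take-up under free distribution
    "cost_sharing": 0.39,  # assumed take-up at 60 cents (10% of retail price)
}

# A percentage-point difference compares the rates directly,
# rather than taking a ratio of the two.
drop_pp = (take_up["free"] - take_up["cost_sharing"]) * 100
print(f"take-up falls by {drop_pp:.0f} percentage points")
```

Note the distinction this makes explicit: a 60-percentage-point drop is far larger than a 60-percent relative decline would be from a low baseline.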

16 Take-up of preventive products drops as price increases.

17 Rigorous evidence has informed the pricing of ITNs. (J-PAL bulletin: Evidence from Randomized Evaluations of Economic Interventions in Health)

18 Case study 2: understanding a programme's impact. Targeting the Ultra-poor (TUP).

19 The problem: ultra-poor, women-headed households make up 69.4 lakh households in India and have an average monthly income of Rs 1,250. There is no evidence yet that they benefit from traditional credit-based interventions. Defining the ultra-poor.

20 A potential solution: the Graduation Model. Carefully sequenced support for the poorest of the poor women in rural communities to graduate out of extreme poverty. The programme costs Rs 20,000 per beneficiary.

21 How do we measure TUP's impact? Impact = what happened with the programme minus what would have happened WITHOUT the programme. Choosing the counterfactual. Option 1: programme participants before the programme. Why or why not? Option 2: programme non-participants. Why or why not?

22 Designing a randomised evaluation to measure TUP's impact. The poorest of the poorest women in each village were randomly assigned to the status quo (466 households) or offered the programme (512 households). Survey rounds, with time since asset transfer: Baseline, before programme implementation (February–March 2008). Endline 1, at completion of the programme (January–November 2009): 1.5 years. Endline 2, one year after programme completion (June 2010–February 2011): 2.5 years. Endline 3, five years after programme completion (September–March 2015): 7 years.

23 Understanding Intent-to-Treat and Treatment-on-the-Treated. Programme take-up: 56 percent. 1. Intent-to-treat: comparing everyone offered the programme with the comparison group. Consumption increases by 15 percent at the end of the programme, and by nearly 25 percent five years after the end of the programme. 2. Why might we be interested in understanding the treatment-on-the-treated?
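
The slide's two quantities are linked by a standard piece of arithmetic. A sketch of that adjustment follows: the 56% take-up and the 15%/25% ITT effects are from the slide, but the assumption that no one in the comparison group accessed the programme (one-sided non-compliance) is ours; under it, ToT = ITT / take-up (the Wald / instrumental-variables estimate).

```python
# ITT-to-ToT adjustment under assumed one-sided non-compliance:
# only 56% of those offered the programme took it up, and we assume
# no comparison-group household participated.
take_up = 0.56  # from the slide

def treatment_on_treated(itt):
    # Scale the intent-to-treat effect by the take-up rate to estimate
    # the effect on households that actually participated.
    return itt / take_up

tot_end = treatment_on_treated(0.15)  # ITT of 15% at programme completion
tot_5yr = treatment_on_treated(0.25)  # ITT of ~25% five years later
print(round(tot_end * 100, 1), round(tot_5yr * 100, 1))  # → 26.8 44.6
```

This is why ToT matters: the same programme looks roughly twice as effective per actual participant as the ITT figure suggests, because the ITT averages in the 44% of offered households that never took it up.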

24 Households experience broad and lasting economic impacts. Programme participants had 46% higher consumption than the comparison group five years after programme completion. Consumption patterns changed: participants spent more on dairy, protein-rich foods and durable goods. Income increased and sources of income diversified. Food security improved. Household assets and savings increased. [Chart: consumption of programme participants versus comparison group at programme completion, one year later, and seven years later.]

25 Replicated in ~20 countries, with multisite randomised evaluations in 7 countries: Pakistan, Bangladesh, Honduras, India, Ghana, Ethiopia, Peru. The multisite RCTs were funded by the Graduation Program Consortium, i.e. the Ford Foundation and the Consultative Group to Assist the Poor. Basic programme components were adjusted to fit each country context and implemented by local organisations.

26 How have governments responded to this evidence? Government of Rajasthan: the Rajasthan State Livelihoods Mission (RAJEEVIKA) is funding THP programme implementation by Bandhan Konnagar in the Manohar Thana block of Rajasthan for 1,000 households. Government of Jharkhand: the Welfare Department, Government of Jharkhand, is funding THP implementation for 2,000 households in two districts (Dumka and Paschim Singhbhum). Bihar and Odisha: USAID's Development Innovation Ventures (DIV) is funding 4,350 households across the two states. Other states: funded by foundation and donor funding. [Map: West Bengal, Rajasthan, Madhya Pradesh, Assam, Bihar, Jharkhand*, Odisha]

27 Concluding thoughts: social programmes + rigorous evidence = higher impact. Understanding the impact caused by the programme: are people better off than they would have been otherwise? What are the reasons for success or failure? The causal effect can be determined through a rigorous evaluation. Comparing programmes and choosing the best: what is the most effective way to achieve an outcome? Are there common strategies that will succeed across fields? Using rigorous evidence of the impact of your intended programme to inform decisions: expand coverage of the programme? Withdraw the programme? Is more evidence needed?

28 Thank you. Shubhra Mittal