
1 Creating an Effective Evaluation. Loren Bell, Evaluation Consultant. August 22, 2018.

2 Evaluation is an Important Component of SNAP-Ed It appears in the FNS Evaluation Framework and in the evaluation section and instructions of the grant application, and there are many reasons for including evaluation in SNAP-Ed programming. At bottom, we just want to show that something we do has some sort of positive effect on someone!

3 Purpose of this Presentation How you can include a strong evaluation component in SNAP-Ed project planning. What you should be evaluating. What you should be measuring and how you should collect data. How you can do a quality evaluation with limited resources. How to avoid the pitfalls in planning and conducting an evaluation.

4 What we are not talking about today: unadjusted and adjusted odds of vendors overcharging at least once, logistic regression model findings, base study. Predictors included registers and scanning equipment, volume of WIC sales in FY 2011 (monthly average), high-risk designation by the WIC State agency, and benefit delivery and receipt requirement (paper FIs vs. EBT). [Table of odds ratios, 95% confidence intervals, and p-values omitted.]

5 First, You Must Recognize the Environment in which the SNAP-Ed Program Operates.

6 Lots of folks are working in the areas of obesity prevention and healthy eating, hence the community connections and partnerships sections in your grant. Resources for evaluation are limited, and many types of evaluation can be very expensive. You may have been working in the same geographic area for years, so a single-year evaluation may not be appropriate. Multiple factors are involved in SNAP-Ed, such as direct education, PSE (policy, systems, and environmental change), and social marketing, and all of these can have an impact.

7 Practical Approaches to Evaluation Assess whether or not your program was implemented as intended. Determine if the audience you are reaching is the intended target group. Determine whether you are having an impact on individual behaviors or the community as a whole. Use results to reinforce your current approach or help you change future direction.
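As a minimal sketch of the second point above, checking whether the audience reached is the intended target group can be done directly from enrollment records. Everything below, the ZIP codes, field names, and the assumption that eligibility is captured at intake, is hypothetical.

```python
# A minimal reach check: what share of participants fall in the
# intended target group? Field names and ZIP codes are hypothetical.

TARGET_ZIPS = {"55411", "55412", "55430"}   # assumed service area

participants = [
    {"id": 1, "zip": "55411", "snap_eligible": True},
    {"id": 2, "zip": "55401", "snap_eligible": False},
    {"id": 3, "zip": "55430", "snap_eligible": True},
]

in_target = [p for p in participants
             if p["zip"] in TARGET_ZIPS and p["snap_eligible"]]

reach_rate = len(in_target) / len(participants)
print(f"Participants in intended target group: {reach_rate:.0%}")
```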

8 What is Necessary for a Project to be Evaluated Clear goals and objectives that are measurable and attainable; these may include both short-term and long-term objectives. A plan of action for interventions that has clear activities, timelines, and steps to success. A structured evaluation plan that fits into the programming rather than standing apart as a separate component. The ability to collect and analyze data. The ability to interpret data.

9 Step 1: Start with Formative Research Do you know the needs of your community and the target audience(s)? Do you know who else is involved in conducting or promoting nutrition behavior change, and how you fit in? Do you know how the target audience(s) like to receive messages? Do you know the environmental barriers to healthy eating and physical activity?

10 Examples of Problems Linking Evaluation with Formative Research The dichotomy between long-term interventions and the immediate needs of the community. Generalized needs that don't support the interventions being proposed. Segmenting the population so that approaches are appropriate for culture and ethnicity. Creating a clear intervention strategy linked to specific needs and outcomes.

11 Step 2: Have Measurable Objectives and Outcomes Use the FNS Evaluation Framework as your guide. Have a clear definition of what constitutes success. Select measures for: eating behaviors or physical activity (specific to target groups), community access to healthy food, and the food environment or policies in schools.

12 Common Pitfalls in Developing Measurable Objectives

13 The Percentage Challenge, in which you simply state what percent of the target group you hope will make a change. The Clarity Challenge, in which there is a lack of clarity as to what will be accomplished. The Bean Counting Challenge, in which an objective has a specific target number with no justification or rationale. The Data Challenge, in which no data are available, and none are being collected, that would measure the results achieved.
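As a hedged illustration of an objective that avoids all four challenges, the Python sketch below writes one down as structured data: a precisely stated behavior, a baseline, a justified target, and a named data source. The field names and numbers are hypothetical, not taken from the presentation.

```python
# A minimal sketch of a measurable objective, recorded so that each
# pitfall has an answer: clarity (behavior), bean counting
# (justification), and the data challenge (data_source).
# All values below are hypothetical examples.

from dataclasses import dataclass

@dataclass
class MeasurableObjective:
    behavior: str        # exactly what will change, and for whom
    baseline: float      # where the target group starts (proportion)
    target: float        # a justified target, not a bare number
    justification: str   # why this target is attainable
    data_source: str     # how the result will actually be measured

objective = MeasurableObjective(
    behavior="adult participants reporting 2+ cups of vegetables per day",
    baseline=0.22,
    target=0.30,
    justification="prior-year cohorts improved 6-9 percentage points",
    data_source="matched pre/post participant survey, item 4",
)

print(objective)
```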

14 Step 3: Decide What Type of Evaluation is Appropriate Outcome and impact evaluation, to determine whether an intervention results in behavior change within the limits of the targeted objective. Process evaluation, to examine whether interventions are implemented as intended. Exposure and content assessments, to determine whether messages are being received and whether people are receptive. Environmental scans, to assess the success of PSE implementation.

15 In Making your Selection, also Consider: Is the evaluation design based upon strong formative research? Does the design consider the environment in which clients function and services are delivered? Is it reasonable to conduct within limited resources, and will the data be available? Is the design able to affirm the program or expose its limitations? (Both are good.)

16 Consideration is important because the evaluation ultimately provides guidance to the program, helping it improve or justify its approach to providing services now and in the future.

17 Step 4: Collect Data Appropriate to the Evaluation Plan How will data be collected, and is the approach practical? When will data be collected? What is the best source of data (e.g., parents or children)? How much data do you need (a sample vs. the universe)? How will the data be analyzed?
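On the sample-vs.-universe question, the sketch below shows one standard way to size a survey sample: the formula for estimating a proportion at roughly 95% confidence, with a finite population correction for a small client universe. The population size and margin of error are assumptions for illustration.

```python
# A minimal sketch: how many clients to survey instead of the whole
# universe. Uses the standard n0 = z^2 * p(1-p) / e^2 formula with a
# finite population correction. The inputs are illustrative assumptions.

import math

def sample_size(population: int, margin: float = 0.05,
                z: float = 1.96, p: float = 0.5) -> int:
    """Sample size for estimating a proportion at ~95% confidence."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2      # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)           # finite population correction
    return math.ceil(n)

# A hypothetical program serving 600 clients needs far fewer than
# 600 completed surveys for a +/-5 point estimate.
print(sample_size(600))
```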

18 Types of Data Collection that may be Practical Surveys of clients to measure changes in behavior. Focus groups with clients to understand why or why not behavior changes were made. Community-based public health data that may be collected by other agencies (used in formative research). Key informant interviews with those entities targeted for PSE efforts. Community environmental scans. Linking community and behavioral data.
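The last item, linking community and behavioral data, can be as simple as a join on geography. Below is a sketch using pandas; the column names, ZIP codes, and the idea of a ZIP-level food-access flag are hypothetical stand-ins for whatever your survey and community datasets actually contain.

```python
# A sketch of linking behavioral survey data to a community-level
# indicator by ZIP code. All data below are synthetic placeholders.

import pandas as pd

surveys = pd.DataFrame({
    "zip": ["55411", "55411", "55430"],
    "veg_cups_per_day": [1.5, 2.0, 2.5],
})

community = pd.DataFrame({
    "zip": ["55411", "55430"],
    "low_food_access": [True, False],   # e.g., from a local health dept.
})

linked = surveys.merge(community, on="zip", how="left")
print(linked.groupby("low_food_access")["veg_cups_per_day"].mean())
```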

19 Pitfalls of Data Collection Data collected too soon after the end of the intervention. Poorly designed instruments. Using only positive stories or anecdotal information. Trying to collect too much data by asking too many questions. Collecting data from the wrong people. Not collecting data from key contributors to the implementation process.

20 Step 5: Data Analysis and Use of Results Analyze data in a way that is appropriate to the measurable objective. Display the results in a way that is meaningful to both your organization and the funding agencies. Share the results with staff and discuss what the results mean to your organization and future approaches. Use the results to identify the strengths of your approach and the things that may need to be changed. Use results as the beginning of the formative research for next year.
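As one concrete (and hedged) example of analysis matched to a measurable objective: if the objective concerns individual behavior change and the same clients complete pre and post surveys, a paired test is a reasonable choice. The sketch below uses a paired t-test from SciPy on synthetic scores; this is an illustration, not the presenter's prescribed method.

```python
# A sketch of pre/post analysis for a behavior-change objective.
# Scores are synthetic; with the same respondents before and after,
# a paired t-test (or Wilcoxon signed-rank) matches the design.

from scipy import stats

pre  = [1, 2, 2, 1, 3, 2, 1, 2]   # e.g., cups of vegetables per day
post = [2, 2, 3, 2, 3, 3, 2, 2]

t_stat, p_value = stats.ttest_rel(post, pre)
mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)

print(f"mean change: {mean_change:+.2f} cups/day, p = {p_value:.3f}")
```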

21 Summary: Why do an Evaluation? You can assess whether your program was implemented as intended. You can identify whether the audience you are reaching is the intended target group. You can determine whether your program is having an impact. You can use the information gathered to help you plan improvements in programming. The findings will reinforce your current approach or help you design future directions.

22 Thank You!