Logic Model & Evaluation Training Materials

Table of Contents

Introduction
    Definition of Evaluation
    Proving vs. Improving: A Brief History of Evaluation
    Evaluation Principles
    Why Evaluate?
Developing an Evaluation Plan
    Determining Your Evaluation Questions
    Implementation and Outcomes: You can't do one without the other
    Evaluating Outcomes: What changes occurred as a result of our work?
    Evaluating Implementation
    What You Did: Documenting Your Outputs
    How Well You Did, and Why: Additional Evaluation Questions
Data Collection
    Data Collection Methods
    Document Review
    Talking to People: Who to Talk To
    Talking to People: How to Talk To Them
    Collecting Written Responses
    Observation
    Additional Data Collection Methods
    Ease of Data Collection
    Minimizing Your Effort
    Review Your Plan
Table: Summary of Common Data Collection Methods

Introduction

"One of the negative connotations often associated with evaluation is that it is something done to people. One is evaluated. Participatory evaluation, in contrast, is a process controlled by the people in the program or community. It is something they undertake as a formal, reflective process for their own development and empowerment."
    M. Patton, Qualitative Evaluation Methods (2nd edition, 1990), p. 129

This workbook is intended as an introduction to the concepts and processes of program evaluation for nonprofits. After following this workbook, we hope you'll understand evaluation as a tool for empowerment. You'll understand how evaluation can help your organization be more effective, and you'll be able to develop an evaluation plan for your programs.

Definition: Evaluation is the systematic collection of information about a program in order to enable stakeholders to better understand the program, to improve program effectiveness, and/or to make decisions about future programming.

Proving vs. Improving: A Brief History of Evaluation

Evaluation has not always been, and is still not always, viewed as a tool to help those involved with a program to better understand and improve it. Historically, evaluation has focused on proving that a program works, rather than on improving it so it can be more successful. This focus on proof has meant that evaluations had to be conducted by objective external evaluators. It meant that evaluation designs were based on rigorous scientific standards. It meant that evaluation was done only at the end of a project, and focused only on whether the program was a success or failure, not on understanding what contributed to or hindered success. Finally, it meant that program staff and other stakeholders were disengaged from the evaluation process; they rarely had their questions about a program answered, and were rarely given information to help them improve the program.

Evaluation Principles

At Innovation Network, we believe that evaluation, based on a logic model approach, can be empowering. First, rather than being imposed (somewhat arbitrarily) from the outside, logic model-based evaluation helps program stakeholders identify what a program is expected to accomplish (and when), thereby making sure everyone's expectations for the program are aligned. Second, by looking systematically at what goes into a program (resources), what it is doing and producing (activities and outputs), and what it is achieving (outcomes), this evaluation approach enables program stakeholders both to be accountable for results and to learn how to improve the program.

We believe that evaluation is most effective when it:

- Is connected to program planning and delivery. Evaluation shouldn't be something that is done if you have some extra time and resources, or only when you are required to do it. Rather, evaluation is a process integral to an organization's effectiveness. Evaluation should inform program planning and implementation.

- Involves the participation of stakeholders. Those affected by the outcome of an evaluation have a right to be involved in the process. Their participation will help them to better understand the evaluation's purpose and process; participation will also promote stakeholder contribution to, and acceptance of, the evaluation results. This increases the likelihood that evaluation findings will be put to practical use.

- Supports an organization's capacity to learn and reflect. Evaluation is not an end in itself; it should be a part of an organization's core management processes, so it can contribute to an organization's ongoing learning. Each organization, and each evaluation effort, requires tools and methods appropriate to the data that are to be gathered and analyzed, and appropriate to the program's needs (no single best evaluation method exists).

- Respects the community served by the program. Evaluation should not be something that is done to program participants and others affected by or associated with the program. Rather, it should draw on their knowledge and experience to produce information that will help improve programs and better meet the needs of the community.

- Enables the collection of the most information with the least effort. You can't (and don't need to) evaluate everything! Focus on what you need to know. What are the critical pieces of information you and your stakeholders need to know to remain accountable, and to improve your program?

Note: Some information in this section is drawn from Earl, Sarah, et al. Outcome Mapping: Building Learning and Reflection into Development Programs. International Development Research Centre (Canada), 2002.

Why Evaluate?

Conducting a well-conceived and well-implemented evaluation will be in your interest. It will help you:

- Understand and improve your program. Even the best-run programs are not always complete successes. Every program can improve; the information collected in an evaluation can provide guidance for program improvement. As you incorporate evaluation into your ongoing work, you will gain useful information and become a learning organization: one that is constantly gathering information, changing, and improving.

- Test the theory underlying your program. The systematic data you collect about your program's short-, intermediate-, and long-term achievements, as well as its implementation, help you to understand whether (and under what conditions) the hypotheses underlying your program are accurate, or whether they need to be modified.

- Tell your program's story. The data collected through evaluation can provide compelling information to help you describe what your program is doing and achieving. Evaluation results provide a strong framework for making your program's case before stakeholders, funders, and policy-makers.

- Be accountable. Evaluation helps you demonstrate responsible stewardship of funding dollars.

- Inform the field. Nonprofits that have evaluated and refined their programs can share credible results with the broader nonprofit community. A community that can share results can be more effective.

- Support fundraising efforts. A clear understanding of your program (what you did well, and precisely how you accomplished your outcomes) helps you raise additional funds to continue your work, and expand or replicate your efforts.

Developing an Evaluation Plan

The evaluation planning process has three parts:

- Decide what information needs to be collected (determining your evaluation questions).
- Decide how that information will be collected (data collection).
- Decide who will conduct the evaluation, and when (evaluation implementation).

Determining Your Evaluation Questions

The evaluation plan you develop will be based on your program's logic model. As you look at your logic model, there will be questions you have, and want to answer, about your program. The purpose of evaluation planning is to identify these questions so they can be answered. Below are some examples:

- Resources: Did you have adequate resources? Did you have qualified staff?
- Activities: Did you implement the activities as planned? What aspects of implementation contributed to achievement of desired outcomes?
- Outputs: Did the activities result in the promised products? How good are those products?
- Outcomes: Which outcomes were achieved? Why or why not?

Implementation and Outcomes: You can't do one without the other

We believe that in order to implement a worthwhile evaluation, you need to understand not only the degree to which you accomplished your outcomes; it is just as important to know what aspects of your program contributed to those outcomes, and what barriers keep your program from getting its ideal results.

In the old days, nonprofit evaluation focused on documenting and reporting on program activities. Nonprofits and others then assumed that if program activities were implemented as planned, the desired results were taking place for the individuals, families, organizations, or communities they served. In general, the nonprofit community focused on reporting on process to the exclusion of outcomes. In recent years the pendulum has swung in the opposite direction: nonprofits are under pressure to measure and report outcomes, with little emphasis on process.

For those interested in reading more about this topic, we suggest Perspectives on Outcome-Based Evaluation for Libraries and Museums by Stephen Weil and Peggy Rudd: http://www.imls.gov/pubs/pdf/pubobe.pdf

We strongly believe that effective evaluation encompasses both process and outcome evaluation. As the Kellogg Logic Model Guide notes:

"Implementation assesses the extent to which activities were executed as planned, since a program's ability to deliver its desired results depends on whether activities result in the quality and quantity of outputs specified. They tell the story of your program in terms of what happened and why." (W. K. Kellogg Foundation Logic Model Development Guide, December 2001, p. 29)

If you would like more information on process and outcome evaluation, see the W. K. Kellogg Foundation Logic Model Development Guide at http://www.wkkf.org/pubs/tools/evaluation/pub3669.pdf

Programs are complex; rarely is a program totally successful. Some aspects of your program will have the desired outcomes, and some won't; a program may be successful with some target beneficiaries, and less so with others. Including the evaluation of your implementation will answer questions about how and why programs work or don't work, for whom, and under what circumstances.

We have developed an evaluation planning process that addresses both the how and the why. We've found that in developing evaluation plans, it's useful to divide the planning into two sections: Outcomes and Implementation. (We assume that you will be collecting information both along the way and at the end of program implementation.)

Note: In this workbook, we've started with Outcomes as the framework for our plan, but you can certainly start your plan with Implementation evaluation. The connections between the two sections are more important than the order you do them in. If you'd prefer to begin with Implementation, skip ahead to the Evaluating Implementation section.

Evaluating Outcomes: What changes occurred as a result of our work?

As mentioned above, it's no longer acceptable in program evaluation to assume that good intentions, good-faith effort, or even exemplary program implementation will result in the desired outcomes for those we serve. It is important to spend time developing a plan to measure the achievement of outcomes. Remember, effective evaluation is complex, and will provide explanations for why a program is or is not working.

In your logic model, you identified your desired outcomes: the changes you expect to see as a result of your work. In order to assess how successfully you have met your outcomes, you'll need to determine some indicators. An indicator is the evidence or information that will tell you whether or not your program is achieving its intended outcomes. Indicators are measurable and observable, and answer the question: How will we know it?

Outcomes are often stated as abstract concepts or ambitions. Indicators are what is measured: specific characteristics or behaviors that provide more tangible information about those concepts or ambitions. Often, one outcome will have more than one indicator. When you develop your indicators, it may be helpful to ask yourself: What does the outcome look like when it occurs? How will I know if it has happened? What will I be able to see?

An indicator should be:

- Meaningful: The indicator presents information that is important to key stakeholders of the program. Keep in mind that different people can have different perceptions about what defines success for a program. Reaching consensus among key stakeholders regarding what success looks like is key to ensuring buy-in to your evaluation results.

- Direct: The indicator captures enough of the essential components of the outcome to represent the outcome. Keep in mind that several indicators can be necessary to adequately measure an outcome. However, there is no standard for the number of indicators to use. While multiple indicators are often necessary, more than three or four may mean that the outcome is too complex and should be better defined.

- Useful: The information provided by the indicator can be put to practical use for program improvement.

- Realistic to collect: The data for the indicator shouldn't be a burden to collect. Think your indicator through to make sure the data can be collected in a timely manner and at reasonable cost. Sometimes an indicator meets the other criteria described above, but the effort to collect it would be too burdensome.

The following are examples of outcomes and indicators.

Outcome: New mothers increase their knowledge of child development.
Indicator: # and % of new mothers who satisfactorily complete a short survey about child development.

Outcome: Target audiences are knowledgeable about the signs of child abuse and neglect and appropriate action to take.
Indicator: # and % of community focus group members who know the signs of child abuse and neglect and appropriate action to take.

Outcome: Residents feel the neighborhood is a safer place for children.
Indicator: # and % of neighborhood residents who say that the neighborhood is safer for children today than it was one year ago.

Outcome: Increased and diversified program resources.
Indicators: Total dollar amount of new funding; total value of in-kind resources; # and value of financial and in-kind resources, by source (public, private corporation, individual donors, foundations).

Outcome: Increased cultural and legal competency of legal aid attorneys.
Indicators: # and % of legal aid attorneys who self-report learning about cultural issues from the workshop; # and % of clients of trained attorneys who indicate the attorney was knowledgeable about cultural issues.

Outcome: Increased legislators' awareness of policy options.
Indicator: # and % of surveyed legislators who indicated awareness of different policy options.

Outcome: Students (K-6) demonstrate knowledge about opera.
Indicators: # and % of children who can tell the story of the opera being performed; # and % of children who can describe X of the characters in the opera; # and % of children who can identify the type of voice (soprano, alto, tenor, bass) of the character singing the part.

Outcome: Youth have increased knowledge about the consequences of long-term ATOD use/abuse.
Indicators: # and % of participants who report that they gained knowledge about the risks/harms associated with ATOD use; # and % of participants who report that it is important not to use alcohol or other drugs.

While developing your indicators, you may realize that your outcomes are unclear or ambiguous. This process offers an opportunity to reflect on, and further clarify, your outcomes.

Keep in mind: sometimes an outcome is the same as its indicator. Some indicators are very similar to outcomes. For example, a smoking reduction program's outcome is for participants to stop smoking. In this case, the indicator will be the # or % of participants who stop smoking after participating in the program, and that one indicator will be sufficient.
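Most of the indicators above boil down to a count and a percentage of people who meet some criterion. If you keep responses in a simple list or spreadsheet, the arithmetic is straightforward; the short Python sketch below is one hypothetical illustration (not from the workbook) of computing a "# and %" indicator from recorded survey scores. The scores and the passing threshold are invented for the example.

    # Illustrative sketch (hypothetical data): computing a "# and %" indicator
    # from recorded survey responses. Each entry is one participant's score on a
    # short child-development knowledge survey; 7 or above counts as "satisfactory".

    scores = [9, 6, 8, 7, 5, 10, 7, 4, 8]   # hypothetical survey scores
    SATISFACTORY = 7                         # hypothetical passing threshold

    met = sum(1 for s in scores if s >= SATISFACTORY)
    total = len(scores)
    percent = 100 * met / total if total else 0

    print(f"Indicator: {met} of {total} new mothers ({percent:.0f}%) "
          "satisfactorily completed the child development survey")

The same pattern (count those who meet a criterion, then divide by the total) applies to most of the indicators in the examples above; only the data source and the criterion change.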

When developing your logic model, you identified a chain of outcomes, which is useful both in clarifying your theory of change and in setting realistic expectations. The chain is also important from an evaluation standpoint: knowing what you are achieving. Each link in the chain has corresponding indicators.

Outcomes Chain:

Short-Term Outcome: Participants learn job-seeking skills.
    Indicators: #/% of participants who can meet criteria in mock interviews; #/% of participants who develop a resume.

Intermediate Outcome: Participants go on job interviews.
    Indicators: #/% of participants who go on at least 2 job interviews; #/% of employers providing positive feedback about job candidates' interview skills.

Long-Term Outcome: Participants obtain full-time, paid employment.
    Indicator: #/% of participants who have full-time, paid employment.

Evaluating Implementation

Note: Remember, there's no correct order for evaluation planning. This section deals with Implementation evaluation, but it's not necessary to do Outcomes first and Implementation second. The connections between the two sections are more important than the order you do them in.

Your implementation evaluation plan starts with identification of the activities of your program. The activities are the actions that the program takes to achieve desired outcomes. The activities of your program are divided out by program component: closely related groups of activities in your program.

You identified your activities and program components as part of developing your logic model.

What You Did: Documenting Your Outputs

You can't really find out how well you conducted your program unless you have a record of what you actually did. Any system to effectively document programs must be based on activities, and on the tangible results of those activities: your outputs. As you identified in your logic model, outputs are the measurable products of a program's activities. They are the concrete work you did, and the products of that work.

Keep in mind: As you go through this section, you may discover that there are gaps in your documentation and record-keeping system. Don't worry; it's never too late to start keeping accurate and systematic records!

The outputs will come directly from your logic model. However, it is worth briefly reviewing your outputs at this point to determine whether there are additional products of your activities that will be worth keeping track of.

How Well You Did, and Why: Additional Evaluation Questions

Documenting activities and associated outputs tells us what you did. But that isn't sufficient for evaluation purposes (after all, you had already identified your activities and outputs in your logic model). The purpose of implementation evaluation is to understand how well you did it. The next step is to identify other questions you have about your program. What information do you want to have that will help you understand the quality of your implementation? The following are examples of the types of questions you might consider (a short sketch after this list shows one way such questions can be answered from routine program records):

- Participation: Did targeted program participants participate as expected? Why or why not?
- Quality: Were the services/materials you provided of high quality? Were they appropriate for your expected participants?
- Satisfaction: Are those affected by your program's services satisfied with them? Why or why not?
- Context: What factors influenced your ability to implement your program as planned?
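If you already keep attendance sheets and simple feedback forms, several of these questions can be answered with very light analysis. The Python sketch below is a hypothetical illustration (the session records, enrollment target, and rating scale are invented for the example) of turning routine records into a participation rate and an average satisfaction rating.

    # Illustrative sketch (hypothetical data): answering the Participation and
    # Satisfaction questions from routine session records.

    sessions = [
        # (session name, participants attended, satisfaction ratings on a 1-5 scale)
        ("Workshop 1", 18, [4, 5, 3, 4, 4]),
        ("Workshop 2", 12, [5, 4, 4]),
        ("Workshop 3", 15, [3, 4, 5, 4]),
    ]
    TARGET_PER_SESSION = 20   # hypothetical enrollment target per session

    total_attended = sum(attended for _, attended, _ in sessions)
    total_target = TARGET_PER_SESSION * len(sessions)
    all_ratings = [r for _, _, ratings in sessions for r in ratings]

    print(f"Participation: {total_attended} of {total_target} expected "
          f"({100 * total_attended / total_target:.0f}%)")
    print(f"Satisfaction: average rating {sum(all_ratings) / len(all_ratings):.1f} "
          f"out of 5 ({len(all_ratings)} feedback forms)")

Numbers like these will not tell you why participation fell short or why satisfaction is low; pair them with the "Why or why not?" follow-up questions above.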

Data Collection

Of all the parts of evaluation, data collection can be the most daunting. Many of us think we need to be statisticians or evaluation professionals to engage in quality data collection, but that simply isn't true. This workbook will take you through basic concepts about data collection and a step-by-step process to develop your plan.

Data Collection Methods: What's the best way to gather the information you need?

(Material in this section draws on the work of the University of Wisconsin-Extension and Carter MacNamara's Free Management Library.)

Once you have determined what you want to measure (your outcomes and indicators, above), you will need to identify the method you will use to collect the data. (Remember, the indicator is what will be measured. The data collection method you choose is how it will be measured.) As you go through the process of selecting your data collection strategy, keep in mind: the goal is to minimize the number of data collection instruments you use, and maximize the amount of information you collect from each one!

When deciding on the best data collection method to obtain the information you need, think about:

- Which methods will be least disruptive to your program and to those you serve?
- Which methods can you afford and implement well?
- Which methods are best suited to obtain information from your sources (considering cultural appropriateness and other contextual issues)?

The most common data collection strategies fall into the following broad categories (a summary table of these methods is included at the back of this workbook):

- Document review;
- Talking to people;
- Collecting written responses from people; and
- Observation.

Document Review

Analysis of printed material and other existing information can yield important evaluation data. Start with your own program records: the documents you use to keep track of your program's implementation, such as applications and attendance logs. Other existing records (records that were created for another purpose, whether by your organization or by external sources) can also be a valuable source of information for an evaluation. Using existing documents can save time and money. Examples of existing records include:

- Grant proposals
- Research reports
- Other evaluation reports
- Crime statistics
- Report cards
- Health records (immunizations, past illnesses)
- Budgets

Note: Be broad in your definition of "documents"; consider the possibilities of multimedia. With the availability of video and digital cameras and the ease of disseminating information through technology, many documents are now available that go beyond text. Examples include:

- Photographs
- Videotapes
- Audiotapes
- Artwork
- Presentations

Talking to People: Who to Talk To

Several data collection methods involve talking to people, that is, collecting verbal responses. Many of us automatically think about gathering information from program participants, since they are the direct beneficiaries of our services. But think about other people who know about and are interested in your program. These people can also be excellent sources of evaluation data. Consider talking to:

- Parents of school-age participants
- Program staff, administrators, and volunteers
- Staff of sister agencies
- Funders
- Partners
- Policy makers
- Experts in the field

Talking to People: How to Talk To Them

Verbal responses can be collected in a variety of ways:

- Interviews: Information collected by talking with and listening to individuals. Interviews can be conducted in person or over the phone, and they range on a continuum from tightly structured to free-flowing and conversational. Remember, interviews don't have to be designed specifically for the evaluation: you can also use intake or exit interviews that may have been done as part of your program's services.

- Focus groups: "A group of interacting individuals having some common interest or characteristics, brought together by a moderator, who uses the group and its interaction as a way to gain information about a specific or focused issue." (Definition from the University of Arizona's evaluation website, http://ag.arizona.edu/fcs/cyfernet/cyfar/focus.htm)

Collecting Written Responses

There are also several ways of collecting written information:

- Surveys: A survey is a structured questionnaire used to collect standardized information; that is, each person who takes the survey is asked exactly the same questions. Surveys may be mailed on paper, emailed, completed online or by telephone, or completed in person.

- Tests: The use of standards to assess knowledge, skill, or performance. Tests can be written, oral, or activity-based (such as a driver's test).

- Journals/logs: Recording events and observations over time, usually to obtain the personal perspective of the writer.

Observation

Another common data collection strategy is observation: simply, the observation of situations, behaviors, and activities, either directly or through visual documents such as photographs and videotapes. Observations should be guided by an observational checklist to ensure that the observer is collecting all necessary information.

Additional Data Collection Methods

There are a host of additional methods beyond the common ones described above. Think about which methods suit your program best. Consider:

- Case studies
- Expert or peer review
- Portfolio reviews
- Visual artwork
- Poetry and creative writing

For more extensive and detailed information on data collection methods, we highly recommend the University of Wisconsin-Extension's publication Collecting Evaluation Data: An Overview of Sources and Methods, available on their website at http://cecommerce.uwex.edu/pdfs/g3658_4.pdf

Ease of Data Collection

The next step is to identify how easy or difficult it will be to collect the data. For each data collection method you identify, indicate whether you already have an instrument in place (mark "have" in your Outcomes Evaluation plan). If you don't have the instrument in place already, think about the level of effort needed to use the chosen data collection method, and indicate whether it will require low, medium, or high effort.
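One way to keep the "have / low / medium / high" ratings in view while you plan is to keep the plan itself in a simple structured list, whether on paper, in a spreadsheet, or in a short script. The Python sketch below is a hypothetical illustration of that idea (the indicators, methods, and effort ratings are invented): it sorts plan rows by effort so the costly items stand out before you finalize the plan.

    # Illustrative sketch (hypothetical plan rows): flagging data collection
    # methods by effort so high-effort items can be reconsidered or combined.

    plan = [
        # (indicator, data collection method, effort: "have", "low", "medium", or "high")
        ("% of mothers completing knowledge survey",    "written survey",  "have"),
        ("% of residents saying neighborhood is safer", "phone survey",    "high"),
        ("# of workshops held",                         "attendance log",  "have"),
        ("% of attorneys reporting new learning",       "exit interview",  "medium"),
    ]

    effort_order = {"have": 0, "low": 1, "medium": 2, "high": 3}

    for indicator, method, effort in sorted(plan, key=lambda row: effort_order[row[2]]):
        flag = "  <-- worth the effort?" if effort == "high" else ""
        print(f"[{effort:>6}] {indicator} -- {method}{flag}")

Sorting or filtering this way does not decide anything for you; it simply makes the trade-offs discussed in the next section easier to see.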

Minimizing Your Effort

If you're like most of us, you may find that you have identified too many data collection instruments, given your available human and financial resources. The final step in developing your evaluation plan is to revise it, choosing those evaluation activities that will provide the most information with the least effort. (Remember: minimize instruments, maximize information.)

Evaluation is about trade-offs. You need to ask yourself whether the information you will get is worth the expense in time and resources that it will take to get it. In some cases, the answer will be yes. In other cases, it may be "yes, but the time may not be right." In still other cases, the answer will be no.

There are several ways to minimize your effort and expenditure of resources in data collection.

Start with what you are already doing. One way to get a great deal of information with a minimum of effort is to use existing data collection instruments whenever possible. Before you think about creating data collection instruments, ask yourself: Does our organization already collect this information? You may be surprised: your organization may already collect a lot of information to monitor activities and/or meet your reporting requirements. For example, in the provision of your services, you may use intake and referral forms, keep client records, or track your inventory. These are valuable sources of evaluation information. You may want to do an information audit to assess what you collect, and for what purpose, to identify overlaps and perhaps even reduce the number of ways that you collect the same information.

Capitalize on what you've got. Once you identify the data collection instruments you already use, you may be able to augment or beef up those instruments to collect additional information with very little extra effort. For example, Group A is an organization that produces reports on child well-being. Group A already keeps track of people who request materials: it has been asking for each requestor's name, title, organization, address, and website URL. But Group A wants to know more about how its materials are being used. Now, when a new request comes in, Group A asks the requestor to answer a few questions, including how they heard about Group A and how they intend to use the material. Group A also asks for permission to follow up in a few months to find out whether the material met the needs of the person who requested it.

Use or modify existing instruments that have been developed by other organizations, when they are available. In recent years, there has been an effort within sectors of the nonprofit community to share information. One of the most exciting results of such efforts is the increasing availability of data collection instruments produced by others and made accessible for use by the field. We urge you to explore whether there are instruments available in your field.

Here are some examples of shared data collection instruments:

- The Harvard Family Research Project's Out-of-School Time Program Evaluation Database lists evaluation studies conducted for after-school programs. Some of the sites include data collection instruments. http://www.gse.harvard.edu/hfrp/projects/afterschool/evaldatabase.html

- AmeriCorps' Project Star has a wealth of data collection instruments, which can be downloaded at http://www.projectstar.org/star/library/new%20library.htm

Review Your Plan

Once your draft plan outline is complete, it's important to review it to make sure your plan will result in a worthwhile evaluation. Here are some tips for reviewing your plan:

- Only select data collection methods that require high levels of effort when the information you will obtain is worth the effort.
- See where you can combine methods; you'd be surprised how much information can be obtained through one data collection instrument.
- Be sure the data collection methods are culturally appropriate for those who will be participating in the evaluation. (Consider language barriers, literacy and education levels, etc.)
- Consider whether you have the resources to implement the plan. Think about time, but also expertise. Don't forget to account for the resources needed to organize and analyze the information you collect. Consider how you might obtain assistance, whether by using pre-existing data collection instruments or by talking to evaluation consultants.

For further reading on evaluation, we recommend Measuring Joy: Evaluation at Baltimore Clayworks by Deborah Bedwell: http://www.nea.gov/grants/apply/out/joy.html

Summary of Common Data Collection Methods
(Drawn in part from Carter MacNamara's Free Management Library, http://www.mapnp.org/library/evaluatn/fnl_eval.htm#anchor1585345)

Document Review / Using Existing Data

Examples: Attendance records; data for local, state, or federal funders; journals; maintenance records; budgets; performance paperwork.
Advantages: You typically already have the information, so you don't have to go collect it. Often quantitative and easy to use. Cheaper to obtain than most other methods.
Disadvantages: If staff don't feel the data are used, the records may not be accurate. Not flexible; limited to what already exists.

Talking to People

Examples: Interviews (including intake and exit interviews); focus groups; discussion groups. Can be conducted in person or over the telephone (or, in some cases, online).
Advantages: Provides more information than written surveys (you can follow up on particular questions and probe more deeply). A natural form of information-sharing; people can speak freely, without feeling boxed in by a standardized survey. Useful if people don't have good reading and writing skills. Groups can be used to stimulate conversation and feedback.
Disadvantages: Can be very time consuming to talk to people. Can be labor intensive to set up and to analyze the data. Interviews typically produce a lot of qualitative data, which are more time consuming to analyze than quantitative data. Not anonymous, so some people may be hesitant to share their true opinions. Must be careful not to bias answers through the interview procedure; consider getting interviewer training.

Collecting Written Responses

Examples: Surveys; questionnaires; paper tests. Can be on paper (in person or by mail), over the telephone, emailed, or online.
Advantages: Collects uniform information relatively quickly and easily. Can be given anonymously, so people are more open to sharing their true opinions. Much less time consuming to collect information from large numbers of people than the Talking to People methods. Good for collecting quantitative data, which are easier to analyze (e.g., closed-ended and short-answer questions).
Disadvantages: Respondents must be literate in the language of the survey. Responses are not guaranteed (e.g., you may send out a hundred surveys but receive only three in return). Can't follow up on answers as easily as in an interview. Not as rich an array of answers.

Observation

Examples: Observing community change; observing and documenting group activities; observing and documenting workshop participants.
Advantages: Non-intrusive; doesn't require much participation. Easier than asking people to fill out a survey or participate in an interview.
Disadvantages: Only limited kinds of data can be collected through observation. Takes a lot of time.