Program Evaluation Methods and Case Studies

Test Bank for
Program Evaluation: Methods and Case Studies
Eighth Edition

Emil J. Posavac
Loyola University of Chicago

Prentice Hall
Boston  Columbus  Indianapolis  New York  San Francisco  Upper Saddle River
Amsterdam  Cape Town  Dubai  London  Madrid  Milan  Munich  Paris  Montreal  Toronto
Delhi  Mexico City  Sao Paulo  Sydney  Hong Kong  Seoul  Singapore  Taipei  Tokyo

Copyright © 2011, 2007, 2003 Pearson Education, Inc., publishing as Prentice Hall, 1 Lake St., Upper Saddle River, NJ 07458. All rights reserved. Manufactured in the United States of America. The contents, or parts thereof, may be reproduced with Program Evaluation: Methods and Case Studies, Eighth Edition, by Emil J. Posavac, provided such reproductions bear copyright notice, but may not be reproduced in any form for any other purpose without written permission from the copyright owner.

To obtain permission(s) to use material from this work, please submit a written request to Pearson Higher Education, Rights and Contracts Department, 501 Boylston Street, Boston, MA 02116, or fax your request to 617-671-3447.

www.pearsonhighered.com

ISBN-10: 0-205-83909-6
ISBN-13: 978-0-205-83909-4

TABLE OF CONTENTS

Chapter 1   Program Evaluation: An Overview
Chapter 2   Planning an Evaluation
Chapter 3   Developing and Using a Theory of the Program
Chapter 4   Developing Measures of Implementation and Outcomes
Chapter 5   Ethics in Program Evaluation
Chapter 6   The Assessment of Need
Chapter 7   Monitoring the Implementation and the Operation of Programs
Chapter 8   Qualitative Evaluation Methods
Chapter 9   Outcome Evaluations with One Group
Chapter 10  Quasi-Experimental Approaches to Outcome Evaluation
Chapter 11  Using Experiments to Evaluate Programs
Chapter 12  Analyses of Costs and Outcomes
Chapter 13  Evaluation Reports: Interpreting and Communicating Findings
Chapter 14  How to Encourage Utilization

CHAPTER 1. PROGRAM EVALUATION: AN OVERVIEW

1. One of the major failings in the development and implementation of social service programs is
* a. beginning the service without first demonstrating the effectiveness of the service.
  b. spending too much effort identifying the needs of the population.
  c. saddling the service program with too many requirements to demonstrate its effectiveness.
  d. isolating the service program from the political structure of the area needing the service.

2. The effectiveness of social services has often not been evaluated because
  a. the values of all such services are already well documented.
* b. the desired outcomes are indeed hard to measure.
  c. it is usually sufficient to merely show interest in helping.
  d. government regulations forbid evaluations of human services.

3. Evaluators have often
  a. ignored existing measures of the outcomes of services.
  b. failed to contact the managers of programs.
  c. used large samples with low Type II errors.
* d. overlooked possible negative side effects of programs.

4. Program evaluation gets confused with basic research when evaluators
  a. focus on the needs of the program sponsor.
  b. become advocates for the program participants.
* c. ignore practical issues in order to examine theoretical questions.
  d. use variables suggested by the staff of service programs.

5. An evaluation of need refers to evaluations that
  a. are similar to performance appraisals.
  b. isolate the costs of providing various services.
* c. are particularly useful in planning a new program.
  d. violate fundamental scientific standards.

6. Evaluations of process often focus on the degree to which
  a. a program is successful.
  b. a specific service is needed in a community.
  c. the costs required to run a program can be justified.
* d. the program is implemented as planned.

7. Outcome evaluations provide the data to show whether
* a. a service has achieved appropriate results.
  b. a target population will use a service.
  c. the evaluator has made every effort to work with the program director in an effective manner.
  d. resources were spent on programs that are most needed.

8. When an evaluation relates the cost of a program to its outcomes, this form of evaluation can be thought of as
  a. an evaluation of performance.
* b. an evaluation of efficiency.
  c. a variant of basic research.
  d. overly scientific.

9. Formative evaluation is to summative evaluation as
  a. process is to outcome.
* b. constructive criticism is to final grades.
  c. apples are to apple pies.
  d. cancellation is to initiation.

10. The purpose of a formative evaluation is to provide information to program sponsors
  a. making decisions about maintaining or terminating the program.
  b. to minimize the effects of arbitrary federal requirements for program evaluations.
* c. making program improvements.
  d. for public relations needs.

11. It is sometimes hard to learn who is conducting program evaluations in an organization because
  a. few organizations really need to have their services evaluated.
  b. formal evaluations are conducted by most managers of service programs.
* c. the job titles of evaluators are not standardized.
  d. when evaluations are completed, they usually look like basic research.

12. If the management of an organization plans to maintain or eliminate a service on the basis of its documented degree of effectiveness, then a
* a. summative evaluation is needed.
  b. needs analysis should be conducted.
  c. formative evaluation is needed.
  d. person familiar with the program should conduct an evaluation.

13. The strongest advantage that an external evaluator has over an internal evaluator is
  a. the trust he/she has built up over years.
  b. more well-developed analytic skills.
* c. the potential to be more objective.
  d. the influence to see that the findings are used by the program staff.

14. The four major forms of program evaluation are
  a. internal, external, conclusion, and construct.
  b. performance, institutional, accountability, and program review.
* c. need, process, outcome, and efficiency.
  d. community, governmental, professional, and special interest.

15. A process evaluation would probably NOT include which one of the following questions?
  a. Do the program's actual participants represent the target population?
  b. How much staff-client contact really occurs?
  c. Do actual staff activities match the program plan?
* d. Is the program cost-effective?

16. Internal program evaluators usually have which of the following advantages over consultant evaluators?
  a. better statistical and research competence
  b. better support staff
  c. a more objective approach to the program
* d. more complete knowledge of the organization sponsoring the program

17. Program evaluation is frequently confused with
  a. billing and accounting procedures.
* b. basic research and individual assessment.
  c. therapists' progress notes.
  d. crisis management methodology.

18. The ultimate purposes of program evaluation include
  a. the collection of data on the outcome of services.
  b. an assessment of how program staff people spend their time so that ineffective staff members can be identified.
  c. making decisions about the proper groups a program should serve.
* d. the improvement of implemented programs and the wise selection among possible programs.

19. Monitoring is a form of evaluation which focuses on
  a. providing definitive proof that a program is worth the investment.
  b. being sure that data are available to make refinements in a program.
* c. providing frequent feedback to be sure a program stays on track.
  d. financial resources so that funds are not misspent.

20. Implementation of programs should be examined because
  a. fraud is rampant in social programs.
  b. irresponsible service providers often destroy programs.
  c. internal evaluators usually ignore this issue.
* d. programs are less effective when not offered at full strength.

Essay Questions

21. In parallel columns, list the advantages and disadvantages of internal and external evaluators.

22. Outline the essential questions that evaluators seek to answer.

23. To what degree is it true that people seek to evaluate their own efforts?

24. Why is it more complicated to evaluate organized programs than the effort of an individual?

25. Make a list of college (or community) services that you use which you believe could be improved if evaluated carefully. Pick one and explain why it is in particular need of evaluation.
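A note on questions 8 and 14 above: an efficiency evaluation relates what a program costs to what it achieves, a topic Chapter 12 of the text treats in detail. The short Python sketch below illustrates the basic cost-per-outcome comparison such an evaluation produces; the function name and all figures are hypothetical, invented for illustration rather than taken from the text.

    # Minimal sketch of a cost-per-outcome (efficiency) comparison.
    # All names and figures are hypothetical.
    def cost_per_outcome(total_cost, successful_outcomes):
        """Dollars spent per successful outcome; lower is better."""
        return total_cost / successful_outcomes

    # Two made-up job-training programs with different budgets and results:
    program_a = cost_per_outcome(total_cost=500_000, successful_outcomes=200)
    program_b = cost_per_outcome(total_cost=300_000, successful_outcomes=90)

    print(f"Program A: ${program_a:,.0f} per participant placed")  # $2,500
    print(f"Program B: ${program_b:,.0f} per participant placed")  # $3,333

Such a ratio says nothing about whether either program is needed or delivered as planned; those remain questions for the need and process evaluations named in question 14.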

CHAPTER 2. PLANNING AN EVALUATION

1. The first step in planning an evaluation is to
  a. suggest outcome measurement instruments to the staff.
* b. identify the stakeholders of the evaluation.
  c. try to figure out how the evaluation could be done as an experiment.
  d. caution the program director that the evaluation will be tough.

2. Evaluators must be very careful to be sure that they understand the actual purpose of a request for an evaluation because
* a. many stakeholders talk about evaluation in terms that sound like summative evaluation when they really want to improve the program.
  b. most program staff members are too willing to disrupt the service program to accommodate the evaluation.
  c. the demand for evaluation is so widespread that program staff are eager to participate.
  d. stakeholders are eager to apply all recommendations from an evaluator.

3. Program evaluation, like other types of applied social science research, differs from basic research in that
  a. validity is irrelevant in applied research although reliability is critical to both.
  b. reliable measurement is irrelevant in applied research.
* c. there are definite time limits for completing applied research projects.
  d. program participants cannot refuse to provide data.

4. A skill that is more important in program evaluation compared to basic social science research is
* a. estimating accurately how much time each phase of the project will require.
  b. being able to plan valid research designs.
  c. selecting measures of behavior that are reliable.
  d. molding the project to answer questions relevant to social theories.

5. Evaluability assessment refers to
  a. the ability of the program managers to fund an evaluator.
  b. knowing how to carry out statistical analyses.
  c. the reliability of the measures of program outcome that are chosen.
* d. the likelihood that a valid evaluation can be completed.

6. A major failing of many evaluations is that the evaluator never learned
  a. what the outcome measures were.
* b. why the program activities were expected to lead to the desired outcomes.
  c. who was to receive the report and use the findings.
  d. how the analysis was done or what the statistical findings were.

7. Understanding the conceptual foundation of a program provides an evaluator with
* a. hints about how the program services are expected to lead to the outcomes that the staff and director hope to achieve.
  b. some indication of how much impact the evaluation will actually have on the organization sponsoring the program.
  c. some indication that the evaluation will really have an effect on the people who need the services of the program being evaluated.
  d. the expected conflicts between the evaluator and the management that may well impact the quality of the project.

8. An evaluation of high quality usually ________ than one of lower quality.
  a. is based on data from a greater number of program participants
  b. was designed with input from fewer stakeholder groups
* c. is based on data from a more representative sample of program participants
  d. employs less costly surveys and questionnaires

9. The data collection is
  a. best turned over to the program staff since they are on the scene and can easily handle the activity.
  b. to be handled by someone without direct knowledge of the setting of the program since it is usually a rather mechanical process.
* c. likely to be corrupted even by well-meaning program staff members if they are permitted to control data collection.
  d. fairly simple since confidentiality is not an issue in applied social research projects.

10. A written proposal outlining the steps to be followed in carrying out a planned program evaluation
* a. should always be prepared.
  b. can be omitted if the evaluation is conducted internally.
  c. should always bind the evaluator to exactly the procedures described.
  d. cannot realistically be prepared since so many unplanned events impinge upon program evaluations.

11. Resistance to an evaluation is likely to be greatest when the evaluation is
  a. a formative evaluation.
  b. a cost-effectiveness evaluation.
  c. an outcome evaluation.
* d. a summative evaluation.

12. Small improvements in the achievements of program participants
* a. can be very important if the program affects many people.
  b. are seldom of interest to serious program managers.
  c. cannot be justified in a politically-charged environment.
  d. are often all that is expected by program staff members.

13. Many members of the staff of human service programs resist program evaluations in their facilities because
  a. evaluation methods cannot be sensitive to their concerns.
* b. it may appear that they are losing control of their program.
  c. their jobs are on the line.
  d. evaluators have a way of taking over.

14. One way for evaluators to mollify the critics of a program evaluation project is
  a. to demonstrate clear analytical skills.
  b. to mold the project design to avoid any points of concern.
* c. to listen to the worries of the critics and relieve all concerns that are founded on misperceptions of program evaluation.
  d. for the evaluator to ally him/herself with the facility management, who will see to it that the correct procedures are followed.

15. Goal-free evaluation refers to evaluations that are conducted
  a. on programs with no goals.
  b. when program managers cannot describe their goals.
  c. when the program is designed to show that the government is doing something but no one expects improved outcomes.
* d. when evaluators want to examine the program without their own expectations being affected by knowledge of the program's goals.

16. One way to reduce fear and resistance to evaluation activities is to reassure staff that
  a. the larger community will be better off knowing when a program is ineffective.
  b. program participants (i.e., the clients) have a right to know how effective the program is.
* c. documenting success would increase the organization's commitment to the program.
  d. less than half of outcome evaluations result in the curtailment of a program.

17. The social science model for program evaluation served to
  a. get rid of a lot of ineffective services.
  b. demonstrate that a non-significant statistical test showed that the program was ineffective.
  c. introduce more creativity into program evaluation methods.
* d. introduce additional rigor into program evaluation practices.

18. Using expert opinion as a form of program evaluation is especially useful when
* a. the program is complex and there are few clear objective criteria of effectiveness.
  b. there are readily observable outcomes from a program.
  c. it is essential that the causal connection between the program and the outcome be definitely understood.
  d. new medications are being evaluated.

19. Finding discrepancies between client needs and services offered, or between projected outcomes and achieved outcomes,
  a. shows why a service ought not be supported.
  b. identifies ways to improve a program.
* c. reveals the issues that must be faced in making program improvements.
  d. shows that evaluators have done their work.

20. Inappropriate reasons to evaluate a program include
  a. documenting program activities to satisfy a funding agency.
  b. demonstrating effectiveness prior to seeking a new source of funding.
  c. seeking more efficient ways to provide social and educational services.
* d. commissioning an evaluation to deflect criticism and postponing a decision.

21. Considerable assistance can be obtained in planning a program evaluation from
  a. published evaluations.
  b. the Internet.
  c. informal conversations with other evaluators.
* d. all of the above.

22. An evaluation that focuses on participants who have achieved the program goals
  a. is usually a good basis for a summative evaluation.
  b. is inherently dishonest.
  c. lies at the heart of the social science model.
* d. is called the Success Case Method.

Essay Questions

23. Describe how one would discuss random assignment to treatment or non-treatment conditions with a group of people who applied for a new, oversubscribed job-training program.

24. How would the difference between formative and summative evaluations affect the negotiations with program staff and other stakeholders in the planning of an evaluation?

25. Discuss the advantages of involving the relevant stakeholders in the planning of an evaluation.

26. If, during the planning of an evaluation, an evaluator discovers that critical stakeholder groups differ in their views of the central objectives of a program, what are the best courses of action for the evaluator?

27. What can evaluators do to encourage stakeholders to take additional responsibility for programs in which they are involved?
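A note on essay question 23: with an oversubscribed program, random assignment can be presented to applicants as a lottery, a fair way to fill scarce openings that also yields a comparison group for the evaluation. The Python sketch below shows the lottery logic with made-up names and numbers; it illustrates the general idea only and is not a procedure prescribed by the text.

    # Minimal sketch of lottery-style random assignment (hypothetical data).
    import random

    applicants = [f"applicant_{i}" for i in range(1, 101)]  # 100 applicants
    openings = 40                                           # only 40 slots available

    random.shuffle(applicants)             # every applicant gets an equal chance
    treatment_group = applicants[:openings]
    control_group = applicants[openings:]  # those not admitted form the comparison group

    print(len(treatment_group), "applicants assigned to the program")
    print(len(control_group), "applicants form the comparison group")

Because oversubscription means some applicants must be turned away in any case, the lottery turns an unavoidable shortage into an unbiased comparison group, which is the point an evaluator would stress when discussing the design with applicants.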