Evaluation, Evaluators, and the American Evaluation Association

What is evaluation?

Evaluation is a field that applies systematic inquiry to help improve programs, products, and personnel, as well as the human actions associated with them. The primary focus of an individual evaluator's work can vary greatly, encompassing organizations, policies, performance, research, and even evaluation itself. But the common ground for all evaluators is that they aspire to achieve accountability and learning by providing the best possible information that might bear on the value of whatever is being evaluated.

Evaluation is an interdisciplinary field that encompasses many areas of expertise. Many evaluators have advanced degrees in, and often work collaboratively with colleagues in, allied fields, including:

Public Administration
Applied Social Research
Policy Analysis
Public Health and Translation Research
Psychology
Educational Research
Economics
Political Science
Sociology
Performance Auditing
Statistics
Operations Research

How can evaluation help in the federal policy context?

Formal evaluation studies can contribute significantly to the understanding and success of public programs. Each program has a life cycle that starts with its original conceptualization and continues through its start-up period to its full-scale implementation. The opportunity to capitalize on early successes or to make mid-course corrections is critical, so it is essential not only to conduct a summative assessment of a program's impact but also to conduct ongoing formative evaluation throughout its life cycle.

A number of individuals and organizations play important roles in managing, overseeing, and judging the worth of public programs. They include policy makers and managers within the Executive Branch; grantees, contractors, and Federal, State, and local government staff responsible for program implementation; congressional oversight committees; a variety of stakeholders in the private sector; program beneficiaries; and public-minded citizens and groups who care deeply about how well their government works. All of them need reliable data and analysis to carry out their responsibilities.

Evaluations prepared by professional, independent evaluators help prevent information gaps by:

Improving knowledge and understanding of how programs work
Strengthening public accountability
Assessing program effectiveness and efficiency
Identifying opportunities and pathways to achieving objectives, outcomes, and efficiencies

The American Evaluation Association

What is the American Evaluation Association?

The American Evaluation Association (AEA) is the nation's leading organization in the field of evaluation. AEA's mission is to improve evaluation practices and methods, increase evaluation use, promote the field of evaluation, and support the contribution of evaluation to the generation of theory and knowledge about effective human action. AEA has over 5,000 members representing all 50 U.S. states as well as over 60 foreign countries. AEA is an important source of information about the field of evaluation and an important link to evaluators and evaluation decision-makers. The association is the source for guidelines on ethical and professional standards, and it provides quality professional development programs and information about professional opportunities.

What are the AEA's guidelines for good evaluation practice?

The American Evaluation Association is committed to effective, systematic evaluation practice that adheres to strong principles and guidelines. AEA's Guiding Principles for Evaluators call for:

Systematic Inquiry. Evaluators conduct systematic, data-based inquiries.
Competence. Evaluators provide competent performance to stakeholders.
Integrity/Honesty. Evaluators display honesty and integrity in their own behavior, and attempt to ensure the honesty and integrity of the entire evaluation process.
Respect for People. Evaluators respect the security, dignity, and self-worth of respondents, program participants, clients, and other evaluation stakeholders.
Responsibilities for General and Public Welfare. Evaluators articulate and take into account the diversity of general and public interests and values that may be related to the evaluation.

What is AEA's Evaluation Policy Initiative?

AEA is committed to the development and adoption of evaluation policies that ensure the sound, context-sensitive use of evaluation and engagement of evaluators. It has established an initiative to strengthen the evaluation community's ties with policymakers. While evaluation can help inform substantive policies in a wide range of areas, and this is a recognized central purpose for much evaluation, influencing substantive policy is not the emphasis of this effort. Instead, the effort is focused on evaluation policies. Examples of areas of concern around evaluation policy include:

Evaluation definition. How, if at all, is evaluation defined in an agency or in legislation? In such contexts, how is evaluation formally distinguished from or related to other functions such as program planning, monitoring, performance measurement, or implementation?
Requirements of evaluation. When are evaluations required? What programs or entities are required to have evaluations? How often are evaluations scheduled? What procedures are used to determine when or whether evaluation takes place?
Evaluation methods. What approaches or methods of evaluation are recommended or required by legislation or regulation, and for what types of programs or initiatives?
Human resources regarding evaluation. What requirements exist for people who conduct evaluations? What types of training, experience, or background are required?
Evaluation budgets. What are the standards for budgeting for evaluation work?
Evaluation implementation. What types of evaluation implementation issues are guided by policies? For instance, when are internal versus external evaluations required, and how are these defined?
Evaluation ethics. What are the policies for addressing ethical issues in evaluation?

Evaluation's Contribution Throughout a Program's Life Cycle

Designing and managing successful public programs is no easy task, with many people and organizations involved. Professional evaluators contribute to this important enterprise in a variety of ways.

Evaluator Contributions

Clarifying assumptions and expectations when programs are being designed
Monitoring and providing feedback to managers on processes, progress, and problems experienced in early start-up stages
Answering questions that unfold during program implementation about:
  the validity of the program design
  the challenges to implementation
  success or failure to achieve intended outputs and short-, medium-, and long-term outcomes
  whether the program is avoiding negative side effects
  how the program addresses concerns of program advocates and critics
Identifying appropriate observational approaches and performance measures
Producing periodic studies to provide ongoing feedback on the quality and efficiency of programs
Building practical accountability mechanisms into the ongoing management of programs
Assessing program success in terms of processes, outputs, and outcomes
Determining the transferability of successful programs to other locations, organizations, and service environments
Providing lessons for improving the design and execution of the next generation of programs
Identifying areas for cost savings and providing guidance on costs in relation to benefits
Assessing whether a mature program remains relevant and effective after significant demographic, cultural, or technological changes

Examples of Evaluation Methodologies

There are no simple answers to questions about how well programs work, and there is no single analytic method that can decipher the complexities inherent in the environment and effects of public programs. A range of analytic methods is needed, and often it is preferable to use several methods simultaneously. Methods that are helpful in the early developmental stages may not be appropriate later, when the program has become more routinized and regularly implemented. Sometimes information is needed quickly, while at other times more sophisticated long-term studies are needed to understand fully the dynamics of program administration and beneficiary behaviors. Over the years, the evaluation field has developed an extensive array of methods that can be applied and adapted to various types of programs, depending upon the circumstances and stage of the program's implementation. Fundamentally, all evaluation methods should be context-sensitive, culturally relevant, and methodologically sound.

Common Evaluation Methods

logic models and program theories
needs assessments
early implementation reviews
sampling methodology
compliance reviews
performance reviews
qualitative designs
case studies
quasi-experimental designs
randomized field experiments
special focus studies addressing emerging issues
performance measurement systems
cost-benefit and cost-effectiveness analysis
meta-analysis
client and participant satisfaction surveys
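To make two of these methods concrete, the sketch below shows, under purely hypothetical assumptions (invented outcome scores and a notional per-participant cost), how a randomized field experiment's treatment effect and a simple cost-effectiveness ratio might be estimated. It is an illustrative outline only, not a method prescribed by AEA or any agency.

    # Illustrative sketch only: hypothetical data for a randomized field
    # experiment and a simple cost-effectiveness calculation. All numbers
    # and the program scenario are invented for this example.
    import math
    import statistics

    # Hypothetical outcome scores for participants randomly assigned to the
    # program (treatment) or to business-as-usual services (control).
    treatment = [68, 72, 75, 70, 77, 74, 69, 73]
    control = [64, 66, 70, 65, 68, 67, 63, 69]

    # The difference in mean outcomes estimates the average treatment effect.
    effect = statistics.mean(treatment) - statistics.mean(control)

    # Standard error of the difference in means (independent samples).
    se = math.sqrt(
        statistics.variance(treatment) / len(treatment)
        + statistics.variance(control) / len(control)
    )

    # Approximate 95% confidence interval for the effect.
    ci_low, ci_high = effect - 1.96 * se, effect + 1.96 * se

    # A hypothetical cost per participant lets cost-effectiveness be expressed
    # as dollars spent per unit of outcome gained.
    cost_per_participant = 1250.00
    cost_per_outcome_unit = cost_per_participant / effect

    print(f"Estimated effect: {effect:.1f} points (95% CI {ci_low:.1f} to {ci_high:.1f})")
    print(f"Cost-effectiveness: ${cost_per_outcome_unit:,.0f} per outcome point gained")

In practice, of course, such estimates would rest on a documented design, adequate sample sizes, and context-appropriate outcome measures rather than on a handful of invented numbers.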

Effective Program Evaluation Practices

The field of evaluation has evolved and grown over the past quarter century, expanding and improving its analytic methods, broadening the scope of its practice and relevance, clarifying its relations with clients and stakeholders, and adopting ethical standards for its members. A set of broad practice concepts has emerged that applies to almost all evaluation situations. These general practices provide a starting point, a framework, and a set of perspectives to guide evaluators through the often complex and changing environment encountered in the evaluation of public programs. The following practices help ensure that evaluations provide useful and reliable information to program managers, policy makers, and stakeholders:

Consultation. Consult with all major stakeholders in the design of evaluations.

Evaluation in Program Design. When feasible, use evaluation principles in the initial design of programs through program logic models and broader analysis of environmental systems, setting the stage for evaluating the program throughout its life cycle. (A sketch of a simple logic model follows this list.)

Life-Cycle Evaluation. Match the evaluation methodology to the stage of program development or evolution, gathering data via a range of methods over the life of the program and providing ongoing feedback and insights throughout the program cycle.

Built-in Evaluation. Build evaluation components into the program itself, so that output and outcome information begins to flow from program operations as soon as possible and continues to do so (with appropriate adjustments) throughout the life of the program.

Multiple Methods. Use multiple methods whenever appropriate, offsetting the shortcomings of any one method with the strengths of another, and ensuring that the chosen methods are methodologically sound and contextually and culturally sensitive.

Evaluation Use. Identify stakeholder information needs and timelines, meet those needs via timed reporting cycles and clear reporting mechanisms, and build stakeholder capacity to understand and use evaluation results.

Collaborative Evaluation Teams. Promote the formation of evaluation teams that include representatives from allied fields with rich and appropriate mixes of capabilities to follow the emergence, implementation, and effectiveness of programs. Stress the importance of relevant education and experience in evaluation, while recognizing that evaluation is a complex, multi-disciplinary endeavor.

Culturally Competent Evaluation Practices. Use appropriate evaluation strategies and skills in working with culturally different groups. Seek self-awareness of culturally based assumptions and understanding of the worldviews of culturally different participants and stakeholders. Diversity may be in terms of race, ethnicity, gender, religion, socio-economics, or other factors pertinent to the evaluation context.

Effective Reporting. Target reports to the needs of program decision makers and stakeholders. Develop families of evaluation reports and information for programs, covering the entire spectrum of evaluative information, including accountability, efficiency, and effectiveness.

Independence. While seeking advice from all sides, retain control of the evaluation design, performance, and reporting. Evaluative independence is necessary for credibility and success.
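One way to picture what "evaluation in program design" and "built-in evaluation" can look like in practice is to lay out a program logic model as a simple data structure, with a few performance measures attached to each stage. The sketch below is a hypothetical illustration; the program, stage names, and measures are invented for the example and are not an AEA-prescribed template.

    # Hypothetical illustration: a simple program logic model with built-in
    # performance measures attached to each stage. Program details are invented.
    from dataclasses import dataclass, field

    @dataclass
    class LogicModelStage:
        name: str                                      # e.g., "Inputs", "Activities"
        elements: list                                 # what the program provides or does
        measures: list = field(default_factory=list)   # indicators the program should track

    job_training_model = [
        LogicModelStage(
            name="Inputs",
            elements=["program staff", "training curriculum", "grant funds"],
            measures=["budget expended per quarter"],
        ),
        LogicModelStage(
            name="Activities",
            elements=["skills workshops", "one-on-one job coaching"],
            measures=["workshops delivered", "coaching hours provided"],
        ),
        LogicModelStage(
            name="Outputs",
            elements=["participants completing the curriculum"],
            measures=["completion rate"],
        ),
        LogicModelStage(
            name="Outcomes",
            elements=["participants employed six months after exit"],
            measures=["employment rate at six months", "median starting wage"],
        ),
    ]

    # Walking through the model stage by stage shows which data the program
    # itself should generate from day one, so evaluation can begin alongside
    # implementation rather than after the fact.
    for stage in job_training_model:
        print(f"{stage.name}: track {', '.join(stage.measures)}")

Laid out this way, the logic model doubles as a checklist for built-in evaluation: each stage names the information that program operations should routinely produce.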

Elements of a National Framework for Evaluation Practice

Professional evaluators do not operate in a vacuum. However sound and important their work may be in and of itself, it will be of little value unless it is accepted as an integral part of the public program policy-making and management process. The successful practice of evaluation requires shared understanding, expectations, and resources among evaluators, public officials, and stakeholders. Likewise, no one person, agency, or institution has sole control over the environment in which both evaluators and their clients work. That environment includes all the structures involved in government - laws, regulations, organizations, budgets, and public goals and objectives. It also involves a healthy combination of public and private sector evaluators. Following are several of the potential key elements of a national framework for effective evaluation practices.

Standards. Public program evaluators, both within and outside the government, should abide by the AEA Guiding Principles and other appropriate professional standards in conducting their work, and should cite these standards in the reports that they issue.

Program Goals and Objectives. To a practical extent, mid-level goals and objectives - neither too general nor too specific - of public laws and regulations should be stated up front. If this is impractical at the stage at which a bill is drafted or a regulation is written, the law or regulation should specify requirements or establish a process for the development of such goals and objectives.

Performance Measures. Measures of a program's key processes and outcomes should be developed while the program is being developed, and should be put into place when program implementation begins. They should, however, be modified periodically as appropriate to reflect what has been learned about the program, especially during the early stages of implementation.

Requirements and Resources. Requirements for evaluating a program through its various stages of implementation, and sufficient resources to meet those requirements, should be embedded in the authorizing legislation and regulations. The overall approach should be to authorize and require the periodic evaluation of the program throughout its life cycle, so that a rich source of evaluative information is available from a family of studies spanning the history of the program. This information will then be available to policy makers during the cyclical reauthorizations and amendments that are typical of public programs today.

Institutional Capacity. At the departmental and government-wide levels, institutions such as the Government Accountability Office, Inspectors General, and top-tier evaluation offices reporting directly to the Secretaries of major Federal Departments should be supported with the resources, organizational independence, competencies, and authorities necessary for the effective evaluation and oversight of public programs.

Private-Sector Evaluators. There should be a robust private practice of independent, non-government evaluators and evaluation groups with a broad range of viewpoints and capabilities that can provide effective independent evaluation as well as input and feedback to internal evaluation efforts.

Body of Work. Evaluators both within the Federal Departments and agencies and from the private sector should produce a wide range of studies, recognizing the advantages and limitations of various methodological approaches. This provides public officials with timely and useful evaluative information as well as longer-term data that can be used cumulatively to enhance understanding about what works and why.

Collaboration with Stakeholders. Data collection for programs of national scope should be developed and operated in concert with program officials, state and local governments, and other major stakeholders. This promotes the efficiency and effectiveness inherent in unified systems.

University Programs. University programs that train evaluators should take account of all relevant methods available and concern themselves as well with the various program and policy contexts in which evaluation results are used.

Professional Development. Evaluators should have access to, and should avail themselves of, continuing professional development and training opportunities, including opportunities that expand their understanding of public policy processes and the policy context.