BEST PRACTICE GUIDELINES FOR EVALUATION


Unclassified PUMA/PAC(97)2, Or. Eng.
Organisation for Economic Co-operation and Development
OLIS: 31-Oct-1997; Dist.: 04-Nov-1997

PUBLIC MANAGEMENT SERVICE, PUBLIC MANAGEMENT COMMITTEE

BEST PRACTICE GUIDELINES FOR EVALUATION
Accountability in Public Organisations
Meeting of the Performance Management Network, Paris, 24-25 November 1997

The Secretariat has studied evaluation and its use in Member countries. The project has focused on how evaluation can be used effectively as a decision-making tool, rather than on methodological issues. On the basis of a background report (Promoting the Use of Programme Evaluation, PUMA/PAC(97)3), a set of best practice guidelines for evaluation has been developed. The Secretariat was assisted by a Reference Group of senior officials and experts from Australia, Canada, Sweden, the United States and the European Commission. An expert group meeting was held in April 1997. In addition, preliminary guidelines were discussed at the Senior Budget Officials Meeting in June 1997.

Delegates at the 1997 Activity Meeting on Performance Management are invited to review the Guidelines and suggest any improvements they consider necessary. Written comments may also be sent after the meeting. The Guidelines will be submitted to the next meeting of the Public Management Committee with the aim of obtaining its endorsement for general distribution.

For additional information, contact Sigurdur Helgason at the PUMA Secretariat: Tel (33-1) 45 24 90 88; Fax (33-1) 45 24 87 96; E-mail: sigurdur.helgason@oecd.org.

BEST PRACTICE GUIDELINES FOR EVALUATION

A focus on results is a central element in recent public sector reforms in OECD countries. Evaluation is important in a results-oriented environment because it provides feedback on the performance of public policies. Previous reports by the OECD's Public Management Committee have emphasised that a well-performing public sector should continuously evaluate policy effectiveness (Governance in Transition, 1995). Moreover, evaluation complements other performance management approaches such as ongoing performance measurement and reporting, service quality initiatives and performance contracts.

Because evaluation is about knowledge for action, it is an important resource for decision-makers. Yet its use has often proved problematic. For evaluation to have an impact on decision-making there must be a functioning evaluation market: both a good-quality supply of evaluation studies and genuine demand for the results. These best practice guidelines seek to enhance the use of evaluation. The main focus is therefore not on methodological questions but on issues that governments and evaluators need to consider when organising public sector evaluation activities.

The Eight Guidelines
1. Generate Effective Demand
2. Systematise Evaluation Activities
3. Use Different Evaluators to Fulfil Different Objectives
4. Influence Attitudes and Build Capacities
5. Plan Evaluations to Ensure Linkages with Decision-making
6. Focus on Significant Issues and Involve Stakeholders
7. Enhance Credibility
8. Communicate Findings and Monitor Results

OECD Public Management Service, 1997

Generate Effective Demand

• Support from the top must be ensured: it is vital that public sector managers and politicians understand the value of evaluation and are committed to enhancing its use. The role of the Ministry of Finance and other central management agencies is often essential in gaining support and credibility for evaluation activities. Managers should be involved in the evaluation process, especially if the aim is to improve the implementation of a policy.

• Demand for evaluation needs to be generated, specified and articulated by the users. The ultimate goal is to make both internal and external evaluations a normal and valued activity in the administration.

• Users need to be aware of the likely limitations of evaluation, and expectations should be realistic. Unrealistic objectives are typical with respect to the timing and scope of evaluation findings. It should be pointed out that while evaluation can provide important information and reduce uncertainty about the issues it studies, it cannot give definitive answers or replace judgement in decision-making.

Demand for evaluation can be fostered by, for example: promulgating a government evaluation strategy; making evaluations mandatory; stressing the need for results information in allocating resources; providing incentives by reducing input and process controls; giving an external stakeholder the right to ask evaluation questions; earmarking funds for evaluation purposes; giving recognition to evaluation efforts (publicity, prizes, promotions); and demonstrating that evaluations really matter by acting on results.

Systematise Evaluation Activities

• A degree of institutionalisation helps evaluations play a genuine role in public management. A systematic, high-quality supply of evaluations, utilisation of evaluation findings, efficient implementation and organisational learning require a framework to support them. However, institutionalisation should not turn evaluation into a bureaucratic paperwork exercise. It is also essential that evaluation is strategically integrated into the overall performance management framework.

• For evaluation to be effective it must be an integral part of policy design and implementation. Linkages to the budget process are especially important. However, the use of evaluation to support budget proposals or decisions requires quality control. Fears of control- and savings-driven evaluations have to be addressed, for example by involving those evaluated in the process. It must also be borne in mind that evaluation information is only one factor considered in the budget process.

• Funding is a vital issue in institutionalising evaluation. Money can be earmarked, special funds created and external sources sought for financing evaluation projects. However, the costs and benefits of systematic evaluations should be compared with other approaches, for example ad hoc evaluations or ongoing performance measurement.

Evaluation can be organised in various ways depending on its objectives. It can be regular and formal or more flexible and informal; centralised or decentralised; internal or external; reported to the executive or to the legislative branch of government; controlled or independent in initiating and carrying out evaluations; open or restricted in terms of who participates in the evaluation process and has access to results; and more or less integrated with the policy-making process.

Use Different Evaluators to Fulfil Different Objectives

• The location of evaluators influences the objectives, focus, utilisation and overall status of evaluation. There are several options for placing the evaluation function: it can be conducted internally in the administration, externally by independent organisations or, as is often the case, by a mixture of different actors.

• When evaluation is carried out by the organisations evaluated, collecting data may be relatively easy and inexpensive. Self-assessment enhances learning from experience and facilitates putting results to use. Problems concern the time and skills of the staff, the rather limited range of issues addressed and the credibility of findings.

• Evaluation anchored in central agencies is able to address government-wide issues and credibly study impact and relevance. The authority and control tools of an evaluator from the centre may also enhance the use of evaluation. Problems may arise in data gathering, which can be time-consuming and non-comprehensive. The roles of the evaluator (management consultant, evaluator, controller and resource allocator) should be clarified, since unclear or conflicting roles reduce the credibility of findings.

• When evaluation is carried out by external evaluators (e.g. audit offices, research bodies, management consultants), their autonomy and evaluation skills often improve the reliability and extent of evaluation. However, external evaluators may have limited understanding of the substance and culture of the evaluated organisation. They may offer standardised or unduly theoretical evaluations. Evaluated organisations, or the administration in general, may be less eager to accept findings and implement recommendations.

It is useful to anchor evaluation in several places that can serve various evaluation objectives and provide information for a multiplicity of markets. Also, since organisations are often more receptive to information produced internally, there may be advantages in promoting internal evaluation with external bodies or groups overseeing quality. In general, the location should reflect the role and objectives the government wants evaluation to have.

Influence Attitudes and Build Capacities

• For evaluation to succeed, a culture that values its conduct and utilisation has to be created. It is necessary to have trained evaluators, well-informed commissioners and enlightened, enthusiastic users. Methodological, political, managerial and communication skills are needed. Training is important, especially in the early phases of systematising evaluation activities.

Training and support can be organised by creating networks; publishing handbooks, success stories, newsletters and journals; organising seminars, workshops, training courses and visits; setting up demonstration projects; and setting up a help desk in the administration.

Plan Evaluations to Ensure Linkages with Decision-making

• Careful planning makes managing evaluation easier, contributes to the quality of evaluation and strengthens the commitment to act upon its findings. The commissioner, in consultation with the users of evaluation, should take final responsibility for the choices made in planning evaluations. In addition to individual evaluation plans, it may be useful to compile and publish departmental or government-wide evaluation plans.

• The single most important issue in planning evaluations is defining the objectives. The objectives influence the location, methodology and utilisation of evaluation.

• To be of use, evaluation must fit into the policy and decision-making cycles. Timing is difficult: the policy- and decision-making cycle is often unpredictable, different users may have separate timetables, it may not be possible to wait for the outcome of a policy to become clear, and evaluation itself may involve surprises. However, it is important to ensure that the findings of evaluations -- even preliminary ones -- are available when decisions are being made.

While planning evaluations, attention must be paid to defining objectives, issues, criteria, available information, data collection and methods of analysis. The timetable, budget, evaluators and overall organisation of the evaluation must also be decided, as must the way in which the results are presented, used, implemented and monitored.

Focus on Significant Issues and Involve Stakeholders

• To receive the attention of its intended users, an evaluation must meet their information needs. A dialogue can contribute to understanding these needs and becoming familiar with the users' ways of thinking and acting.

• To be relevant, evaluations should address issues that are significant for political, budgetary, management, accountability or other strategic reasons. For evaluation to be worthwhile, there should be a sincere intention to use the findings. In addition, the evaluation process should be flexible enough to adjust to changes in the priorities of the issues being evaluated.

• A strategy is needed to manage the stakeholders of evaluation. Defining stakeholders and involving them in the evaluation (e.g. through steering groups or advisory committees) can enrich the process and increase the use of the results. However, the interaction must be managed so that it is not too costly and time-consuming and does not decrease the credibility of the evaluation. Partnership with stakeholders in defining the terms of reference is often considered the most beneficial way to maximise the use of evaluation.

A number of countries have used commissions to involve stakeholders in evaluations. Commissions often include representatives from political parties, the academic world, trade unions and employers' associations, consumer organisations, environmental interest groups, etc. Their role is to study and assess specific issues and prepare decisions. Commission reports are often sent to a large number of interested parties, both public and private, for consideration and review.

Enhance Credibility

• It is important to enhance the credibility of evaluations, as a lack of credibility may undermine their use. Factors influencing credibility include the competence and role of the evaluator, consultation and involvement of stakeholders, and communication of findings. Professional and ethical standards of evaluation are also important for credibility.

• The methodological quality of an evaluation encompasses issues such as relevant criteria, adequate evidence and reliable, clear findings. These issues have a major effect on the credibility and use of evaluation. However, to some extent there may be a trade-off between methodological rigour and the utility of evaluation.

Credibility may be enhanced by, for example: creating steering groups; making use of independent external bodies (e.g. audit offices) to review evaluations; drafting checklists setting quality standards for evaluations; conducting the evaluation process in a transparent way; and communicating effectively the results and their limitations.

Communicate Findings and Monitor Results

• Evaluation findings should be presented in an effective and timely fashion. Openness and publicity often increase the credibility and use of evaluation and create pressure to act upon findings.

• In reporting findings, it is useful to make judgements and recommendations (and even present alternatives). These often attract attention, provoke and structure discussion, and promote subsequent action. The criteria used in making judgements and recommendations should be as clear as possible. Blame should be avoided: evaluation should be used to find ways to overcome problems rather than as a vehicle for listing shortcomings. The links between recommendations, policy modifications and their implementation need to be considered. It may be worthwhile to present a specific implementation plan based on the recommendations of the evaluation. Implementation often requires support and consulting services, which can be facilitated by financial or technical aid.

• Monitoring the impact of evaluation promotes its use, guides the implementation process, identifies additional evaluation needs and may serve as a learning tool to improve evaluation practices.

The evaluation report should be clear and comprehensive and include an executive summary. The report needs to be effectively distributed and should generally be made publicly available. Seminars, workshops and discussion groups are useful in disseminating, marketing and interpreting the results. Key decision-makers need special attention. Follow-up of the use of evaluations can be organised by reviewing all conducted evaluations, or by making a follow-up procedure part of the design of evaluations.

About this Policy Brief

This policy brief is the product of a project on evaluation carried out by the OECD Public Management Service (PUMA). A complementary background report, Promoting the Use of Programme Evaluation, is available as a PUMA general distribution document (free of charge). The report presents a more detailed discussion of evaluation, its role in public management and ways of enhancing its use, and explores evaluation practices in selected OECD countries. In preparing the report the Secretariat was assisted by a Reference Group composed of senior officials and experts from Australia, Canada, Sweden, the United States and the European Commission. Examples are mainly drawn from these countries and the EC, as well as from Finland, France and the Netherlands.

It must be emphasised that there is no single right way to organise and conduct evaluations. The choice of methods will depend on several factors, including the objectives of evaluations, the role of evaluations in a wider performance management framework, and institutional and political considerations.

These best practice guidelines, along with other information about PUMA's work on performance management and evaluation, are available on the PUMA Internet site at http://www.oecd.org/puma/.

Some definitions

Evaluations are systematic, analytical assessments addressing the value and other important aspects of a policy, organisation or programme, with emphasis on the reliability and usability of findings. The main objective of evaluations is to improve decision-making, resource allocation and accountability.

Commissioners are organisations that commission evaluations. The role of the commissioner is to plan the evaluation, monitor its progress, receive the final evaluation report and, in many cases, decide how it will be used. The commissioner may also play other important roles, such as arranging consultations and building consensus. Organisations that take the role of commissioner include the organisation responsible for an activity, the parent ministry, central government agencies including the Ministry of Finance, and independent evaluation and audit organisations. In some cases the commissioner is also the evaluator.

Evaluators are the organisations or individuals collecting and analysing the data and judging the value of the evaluated subject. Evaluators may be the civil servants managing or implementing the policy, government evaluation units, budget departments or other central management agencies, expert panels or commissions, state audit offices, academic research institutions, management consultants, consumer and community groups, etc.

Users of evaluation may be policy-makers (parliamentarians, ministers, other decision-makers), responsible ministries, the budget department, auditors, those managing and implementing the evaluated policy or programme, the target group, other initiators or sponsors of the evaluation, or other groups interested in or affected by the evaluation results.

Stakeholders of evaluation are those who have an interest in the evaluated policy or programme because the process or findings may be relevant to their role or actions. Often the stakeholders and the users are the same actors.