Inter-Agency Working Group on Evaluation
Report of the Meeting
2-3 April 2001
Palais des Nations, Geneva, Switzerland

A meeting of the IAWG was held in Geneva on 2-3 April 2001, attended by 29 member agencies and 3 observers (Annex 1). The meeting agenda focused on the report of the IAWG review and a few other evaluation-related issues. Of particular importance was the discussion of, and agreement on, the future of IAWG, which is presented in Sections I and II below. The report also includes the highlights of the discussion on other key evaluation-related issues, following the sequence of the agenda items.

I. IAWG: Review Recommendations and Issues

The two members of the Review Team presented the main findings of the evaluation. The overall findings concluded that IAWG has been useful as an informal forum for the exchange of information but has not generated any output or product of consequence to the evaluation function of the member agencies. The team stressed the importance of developing a sense of identity for a living evaluation forum, which requires that members take an interest in developing and participating in concrete, meaningful activities. This would require joint action on particular issues/topics, more live interaction on those issues between meetings, and more active information sharing. The following recommendations were highlighted:

- IAWG should be more product/results-oriented as well as outward looking;
- Dissemination of products/outputs must be targeted at higher levels of the UN system and the individual agencies;
- The immediate priorities could be developing UN policy on evaluation function standards and norms, and defining evaluation procedures and methodology for collaborative evaluation of UNDAF;
- Training and learning opportunities must be initiated;
- Thematic and geographic-based groups must be set up for the purpose of generating interaction; and
- A strong secretariat and an interactive web site must be set up.
To situate the discussion in the current emerging context, presentations were also made by UNICEF and JIU on Trends in the Evaluation Function of the UN System. Emphasis was placed on the use of the evaluation function for strategic governance and country programme performance, keeping in focus the dimensions of results, empowerment, learning, and accountability. It was deemed important to keep the functions of evaluation, monitoring, audit, inspection, and research distinct from each
other. Methodologies of different types were also required to suit the evaluation requirements of projects, programmes, strategy, and policy.

The presentations generated a lively debate on various issues in the Report on the IAWG Review. Among the key issues raised that could put the role of IAWG in a broader context were the following:

Address new challenges and opportunities for the evaluation function. The emerging role of the evaluation function is manifest in a new era of development cooperation. The focus of the international community on common goals like poverty reduction and the Millennium Declaration, together with the acknowledgment of globalization issues, poses new challenges and opportunities for the evaluation function. The sweeping organizational changes, including the results-based management approach, put the evaluation function at center stage. The increasing focus on and awareness of development effectiveness likewise impinges on the role of the evaluation function. It is important to ascertain the nature of evaluation demand and those who demand it. IAWG could utilize these opportunities to move evaluation closer to decision making.

Focus on common issues. The mandate and purpose of IAWG could be further refined and tightened through a focus on common issues, taking into consideration the composition of IAWG and the varying mandates of the member agencies. Certain common denominators were identified, however, such as common norms and standards for evaluation, harmonization of evaluation principles and processes, knowledge sharing, and the generation of learning opportunities. Work in some of the above-mentioned areas potentially stands out as a niche for IAWG.

Consider both development and normative issues. An agreement emerged that the mandate might encompass both development issues and normative issues. This will allow the scope of activities to cover both developmental and organizational effectiveness.
Emphasis was also placed on IAWG advocating the evaluation function to the decision and policy makers of the UN organizations.

Commitment and value-added: key to IAWG success. A key to the future success of IAWG is the commitment of its members to support its priority agenda and the benefits it will be able to generate for the evaluation function in the respective organizations. These could be achieved through limiting the issues to be addressed, pooling resources, and helping to generate an evaluative culture through strong and effective advocacy within organizations.

Sustained information flow and a dynamic Secretariat. There was consensus on the need to be practical and realistic when determining IAWG's work. The need for a dynamic Secretariat to coordinate the work and information flow was reiterated. Several ideas were floated for consideration: (a) a theme-based work programme to be pursued, ensuring that everyone has the flexibility to participate according to capacity and needs; (b) making full use of networking and e-communication to allow for prompt information sharing and participation of the field; and (c) creation of core capacity at the Secretariat for efficient information exchange and development of a web site with a search engine to tap other knowledge sources.
II. IAWG: Areas of Agreement

The review findings and recommendations, followed by topical discussion on a number of issues (e.g. UNDAF, knowledge, training, partnerships), stimulated lively discussions on the future of IAWG. The concluding session resulted in a consensus on the following: the objective and scope of IAWG; the modalities of conducting its business; and a number of priority areas of work for the current year.

First, the group agreed that IAWG remains a useful informal group of evaluation professionals in the UN System. It provides a forum for professional exchanges.

Second, its objective should be to promote and foster the role of the evaluation function within UN agencies; specifically, (a) it should contribute to enhancing the quality of evaluation; (b) it should cover both development and normative aspects (organizational and developmental effectiveness); and (c) it should contribute to evaluation capacity development within the evaluation units of the UN.

Third, it should be driven by its own agenda and work plan. The priority items suggested included evaluation norms and standards in the UN System, a glossary of terms, evaluation of UNDAF, knowledge sharing, training, etc.

Fourth, it should maintain an effective secretariat to coordinate activities, a web site for free interaction, and theme-based groups to undertake specific activities. A funding modality was considered necessary for maintaining the secretariat and certain support activities.

Fifth, four working groups (theme-based) were created to initiate and conduct work on the issues. The groups will be coordinated by a Champion, and participation by the members will be on a voluntary basis. The four working groups are: UN Norms and Standards, with UNICEF as Champion; Knowledge and Learning, with IFAD [1]; UNDAF, with UNFPA; and Evaluation Capacity Development, with the World Bank. It was agreed that each working group will develop its own work plan and modalities of getting things done.
Annex 2 lists the agencies that expressed interest in joining the different thematic areas. The Chair of IAWG will send a separate note to members in the near future to initiate implementation arrangements for the specific areas of agreement.

[1] It was decided in later discussions after the Geneva meeting that another agency will be the champion for this working group.

III. Evaluation and Results-Based Management

The presentation by UNDP and UNFPA on the topic emphasized the need to focus evaluation on development results beyond the current output focus of projects. The UNFPA presentation also underlined the implications of RBM for monitoring and
evaluation, highlighting among others the emphasis now placed on measuring results using appropriate indicators, involving stakeholders, focusing on actual compared to planned results, and promoting M&E as part of a wider performance management framework. The challenges faced by both organizations in this regard were providing clear definitions of concepts to frontline practitioners, ensuring full understanding of the underlying principles and logic of RBM, and providing M&E tools fully aligned with RBM. The distinctive features of better use of indicators, strategic partnerships, and knowledge sharing were also highlighted. The ultimate objective was to achieve development effectiveness of UN system interventions. RBM contributes to making evaluation more focused at strategic levels.

The participants raised a number of issues relating to their respective experiences, namely:

- Need for clear definition of RBM terminologies and for establishing a system of coherence in their application.
- More clarity is required in the methodology and process of identification, collection and use of indicators.
- Need for evaluation to address strategic issues.
- Training of frontline professionals in RBM.
- Clear vision and delivery of results at different levels, i.e. project, programme, institutional performance, and policies.
- Need for better capturing of lessons learnt and sharing of experiences in RBM.

In the context of the discussions on RBM, the positive political response to impact evaluation in the UNGA was noted by UN DESA. The interest in and emphasis of member states on results give the evaluators the responsibility of ensuring that the right kind of measures are used. The UN evaluators' tasks include helping to operationalise global goals and to build the difficult bridge between the micro and meso data from operations and the macro data generated by progress on social indicators.
The IAWG members were encouraged to work on producing appropriate indicators relevant to their respective agencies' concerns, and to share them with the group.

IV. UNDAF Evaluation

The presentations on the topic identified key challenges of UNDAF evaluation and the potential contribution that IAWG could make to enhance the relevance and effectiveness of the process. The key challenges were to focus evaluation on results, to ensure quality assurance, and to foster ownership of the process. As a corollary, the need for monitoring and evaluation (M&E) was recognized both at the global level, focusing on the achievements of UNDAF at the UN system level, and at the country level, linking UNDAF gains and results with the national development situation and country programme implementation. The use of the Common Country Assessment (CCA) for evaluation and monitoring at the country level was also emphasized. The difficulties of monitoring and
evaluating development results in a multi-agency, cross-sectoral, multi-scope situation posed specific challenges. The need for joint monitoring and evaluation of key issues such as global goals and joint programmes was strongly felt. IAWG could help develop guidelines and tools for M&E of UNDAF, harmonize M&E approaches, and provide guidance for joint M&E systems at the UNDAF, country programme, and programme/project levels.

The comments underscored a number of issues which needed collective attention:

- Clarify at the country level a number of parallel processes, such as CDF and PRSP, and explain the relationship between these processes. Further dialogue is needed to develop effective synergies and avoid confusion or duplication. The possibility of using CDF or a countermeasure was pointed out.
- Training of country-level professionals was considered critical for the success of UNDAF M&E.
- Clarification was provided about UNDAF also involving organizations without presence but with operational programmes, and sometimes organizations working in a regional setting.

The discussions reflected an agreement that IAWG should mobilize efforts to influence M&E of UNDAF. It was observed that forming a thematic group for this purpose would be appropriate to initiate action.

V. Joint Evaluation and Evaluation Partnerships

The topic was introduced by two presenters, from the World Bank and UNICEF. Certain basic principles and foundations of evaluation partnership were emphasized, namely:

- Partnership infuses new perspectives, allows reaching new perimeters of knowledge, facilitates their dissemination, and enables dealing with global public policy issues.
- Partnership involves shared responsibilities but distinct accountabilities.
- The basic foundation of effective partnership lies in shared objectives, values and motivation, and in the comparative advantage and complementarity of the partners.
- Effective partnerships build upon mutual knowledge of the partners and the willingness and flexibility to arrive at reason-based consensus.
- Partnership may require certain investment in process management and may compromise least-cost alternatives for evaluation.
- Joint evaluation may also reflect different degrees of partnership, which can result in different outcomes.

What is important in joint evaluation is an appropriate mindset, the search for a low-cost model, agreement on processes with low transaction costs, and a focus on joint recommendations for action.
It was also acknowledged that certain opportunities exist for joint work on UNDAF and Evaluation Capacity Development. The discussion brought out some additional experiences on the subject, such as: joint evaluations can lead to complex process issues involving longer time frames; and joint evaluation has the potential to lead to a higher level of purpose and outcome.

VI. Evaluation Dissemination and Feedback

Drawing from the OECD/DAC Workshop on Evaluation Feedback for Effective Learning, the presentation by UNIDO underlined the following aspects of the feedback process: full involvement of all stakeholders, balancing learning and accountability, moving from project to thematic evaluation, and sharing and dissemination of lessons. Although there is no blueprint for effective feedback, UN organizations are moving to seek greater participation of stakeholders in the evaluation process and to make evaluation results available to management for wider use.

The presentation by IFAD provided an exposé of the communication strategy of its evaluation function, which has five key instruments: a standard set of core products, a custom-tailored communication strategy for each evaluation, optimal use of information technology, systematic procedures for routine communication, and an evaluation help-desk.

The discussion that followed brought out several issues:

- There are numerous experiences on feedback which could generate important lessons for all.
- Ownership in evaluation is important for follow-up.
- Reports require customized presentations for varying audiences.
- Need for a balance of accountability and learning in evaluation.
- Need for making databases relevant and updated.
- Role of evaluation units in following up on recommendations.