IAS EVALUATION POLICY May 2011
Table of contents

Background
Purpose of the evaluation policy
Evaluation definition
Evaluation objectives
Evaluation scope
Guiding principles for the evaluation function
Evaluation management
Background

Founded in 1988, the International AIDS Society (IAS) is the world's leading independent association of HIV professionals, with over 16,000 members from more than 196 countries working at all levels of the global response to AIDS. IAS members include researchers from all disciplines, clinicians, public health and community practitioners on the frontlines of the epidemic, as well as policy and programme planners.

The IAS is the custodian of the biennial International AIDS Conference and lead organizer of the IAS Conference on HIV Pathogenesis, Treatment and Prevention. The two conferences alternate, each taking place every two years. In addition, the IAS has initiated several projects to further support the professional development of key stakeholders from resource-limited countries, and to leverage the knowledge, experience and influence of its members to advocate for the policy changes and political commitments necessary to end the AIDS epidemic. The IAS is also committed to providing assistance to regional AIDS societies/networks and conferences.

To successfully achieve this mission, evaluation has become an integral part of the IAS strategy. Since 2004, all IAS-convened conferences have been systematically evaluated, and since May 2008, the IAS secretariat has had a full-time staff member devoted to evaluation. Given the growing internal and external demand for evaluation and the widening scope of the evaluation function at the IAS 1, it is important for the IAS to have its own evaluation policy.

Purpose of the evaluation policy

The purpose of the evaluation policy is to ensure that the IAS has timely, strategically focused and objective information on the performance and impact of its conferences, projects, initiatives and strategies, so that it can better achieve its goals.
The policy aims to foster a common institutional understanding of the evaluation function at the IAS, and to further strengthen evidence-based decision-making and advocacy, transparency, accountability and effectiveness.

Evaluation definition

According to the UNEG Norms 2 for Evaluation, an evaluation is an assessment, as systematic and impartial as possible, of an activity, project, programme, strategy, policy, topic, theme, sector, operational area or institutional performance. It focuses on expected and achieved accomplishments, examining the results chain, processes, contextual factors and causality, in order to understand achievements or the lack thereof. It aims at determining the relevance, impact, effectiveness, efficiency and sustainability of the IAS's interventions and contributions. An evaluation should provide evidence-based information that is credible, reliable and useful, enabling the timely incorporation of findings, recommendations and lessons into decision-making processes.

1 The evaluation function is no longer restricted to conferences: it also covers other IAS projects, initiatives and strategies.
2 Norms for Evaluation in the UN System, endorsed by the UNEG in April 2005.
Evaluation is distinct from financial and compliance audit. It also differs from monitoring, which forms part of management's accountability for self-assessment and reporting. However, it must be recognized that evaluation findings both draw from and inform the products of monitoring.

Evaluation objectives

All evaluations share the same objectives of organizational learning and accountability.

1. Evaluation is essential for learning and for supporting the decision-making process, so as to improve the design of future activities conducted by the IAS. This requires a commitment from IAS managers to follow up and act upon lessons learnt.

2. Evaluation provides the basis for a system of accountability to IAS members, partners, sponsors, donors and ultimately to the IAS Governing Council. It makes it possible to assess results and to determine the extent to which expected results were achieved. Evaluation also plays a critical role in promoting the work carried out by the IAS.

Evaluation scope

What to evaluate?

The following categories are considered for evaluation:
- Conferences convened by the IAS and its regional partners.
- Meetings, summits and other events convened by the IAS.
- Membership benefits and resources not restricted to IAS members.
- IAS projects and initiatives such as workshops, professional development programmes, fellowship programmes, prizes and awards, the Industry Liaison Forum, awareness campaigns, etc.
- IAS strategies (i.e. the IAS strategic plan and departmental strategies such as the partnership strategy).

Thematic evaluations will also be considered, especially for themes addressed by IAS policy/advocacy activities.

For the purpose of this policy, any of the above categories will be referred to as the evaluand, i.e. the object to be evaluated/subject of the evaluation.
When to evaluate?

Most evaluations are post-evaluations, meaning the object to be evaluated is considered completed. However, given the need in selected cases to learn from experience earlier, an evaluation can also be conducted during the life cycle of the evaluand. This applies to certain projects, services, policies and strategies, and is usually carried out through a mid-term review.

What are the evaluation criteria?

The IAS considers the following DAC criteria 3, as laid out in the DAC Principles for Evaluation of Development Assistance:

Relevance: measures the extent to which the objectives and design of the evaluand are suited to the priorities of the target stakeholders and remain valid. It also refers to the extent to which the objectives and design of the evaluand are aligned with the IAS's mission, strategy and specific priorities. Relevance can be understood as "are we doing the right thing?" This includes the question "are we the best placed organization to do it?" (in other words, do the IAS's comparative advantages/added values justify its role?).

Effectiveness: assesses whether the evaluand achieved/is achieving progress towards its expected results. It also refers to the major factors influencing the achievement or non-achievement of the results.

Efficiency: measures the outputs in relation to the inputs. It examines the extent to which the approved outputs have been achieved within the agreed budget, timeframe and specifications. It is an economic term which signifies that the evaluand uses the least costly resources possible to achieve the desired results. This generally requires comparing alternative approaches to achieving the same outputs, to see whether the most efficient process has been adopted.

Impact: assesses the positive or negative, intended or unintended effects produced by the evaluand, and the extent to which these effects can be attributed to the intervention (i.e. the evaluand).
An impact evaluation usually takes place after the intervention has evolved to a steady state.

Sustainability: measures the extent to which changes generated by the IAS's intervention are maintained over a longer period, and identifies the major factors which influenced the achievement or non-achievement of sustainability of the intervention. There are different aspects of sustainability, including institutional, capacity, technological and financial sustainability, and each of these aspects has to be assessed when looking at the sustainability of an intervention.

3 Sources: The DAC Principles for the Evaluation of Development Assistance, OECD (1991); Glossary of Terms Used in Evaluation, in "Methods and Procedures in Aid Evaluation", OECD (1986); and the Glossary of Evaluation and Results Based Management (RBM) Terms, OECD (2000).
Given the wide range of potential evaluands at the IAS, not all criteria can be systematically considered.

Guiding principles for the evaluation function

All evaluations follow the same guiding principles, based on the UNEG Norms and Standards and Code of Conduct for Evaluation:

Independence/Impartiality: Evaluation must be conducted in an independent and impartial manner.

Feasibility: Evaluation must be feasible. To this end, evaluation concerns must be addressed at the design stage of the evaluand, with adequate resources set aside 4.

Credibility: Evaluation must be credible, meeting professional quality standards and rigour 5.

Inclusiveness: Whenever possible, evaluations must be planned and undertaken in close collaboration with key stakeholders.

Transparency: Evaluation methodology, findings, recommendations and lessons must be made public and disseminated to all stakeholders concerned through a range of channels.

Utilisation: Evaluation must be duly considered, with management responses through action plans and progress reports. The use of evaluation must be an integral part of the IAS's planning and implementation.

Whenever possible, data must be disaggregated by gender, age and other key demographics or variables. Evaluation should also include trend analysis whenever possible.

4 An amount totalling 3 to 5 per cent of programme/project expenditures should be dedicated to evaluation.
5 No method is superior to others. Evaluation methods must be chosen that are appropriate for the evaluand and the evaluation to be performed, and should include both qualitative and quantitative data.
Evaluation management

Because independence and objectivity are vital for the credibility of the evaluation work, evaluation is conducted by an independent department, the Planning, Monitoring and Evaluation (PME) department. Although this department is integrated into the organizational structure of the IAS secretariat, the head of the department is directly and solely responsible to, and takes his/her instructions only from, the IAS Executive Director.

The PME department is responsible for the following tasks:
- Designing evaluations 6 and data collection instruments, in collaboration with key stakeholders.
- Recruiting and supervising external consultants, interns and volunteers to support the evaluation function.
- Conducting evaluations, including data collection.
- Performing statistical analysis (with SPSS) and qualitative data analysis.
- Drafting evaluation reports and finalizing them based on consultations with key stakeholders 7.
- Ensuring the wide dissemination of evaluation findings and recommendations.
- Coordinating the evaluation follow-up process (see details below).
- Ensuring quality assurance for evaluation by reviewing and approving survey forms, evaluation plans and evaluation reports produced by staff and consultants.

Evaluation Follow-Up Process

The PME department collates all recommendations from the final evaluation report and those included in internal reports relevant to the evaluand, clusters them by area and subarea, and assigns a responsible person to each of them. All this information is contained in the Management Response Sheet and shared in due time with all staff. Staff responsible for implementing recommendations are also responsible for reporting progress on follow-up actions and for providing justifications for any failure to fully or partially implement the recommendation(s) in question (this is done directly on the Management Response Sheet saved on SharePoint).
The PME department periodically monitors the information in the Management Response Sheet and shares it with the Executive Director.

6 Evaluation Terms of Reference (ToRs) or evaluation plans.
7 The evaluation report is logically structured; it contains an executive summary, a detailed description of the evaluation methodology used, evidence-based findings, conclusions and recommendations, as well as acknowledgements, automatic tables of contents and figures, a list of acronyms and relevant appendices. Findings are presented in a way that makes the information accessible and comprehensible. The PME department is responsible for drafting the evaluation report, obtaining and incorporating feedback from key stakeholders, editing the report and obtaining the final approval of senior manager(s) and/or director(s) for publishing and disseminating the report.
With regard to capacity building and knowledge sharing, the PME department builds knowledge of good evaluation practices with a view to increasing staff capacity in evaluation and promoting an evaluation culture at the IAS. It is also committed to building and strengthening the M&E capacities of IAS members and partners, and to sharing knowledge with external evaluators and conference managers through:
- Organization of workshops.
- Provision of technical assistance.
- Dissemination of evaluation products, guidelines and other resources (through the IAS website, a Google group dedicated to conference evaluation, and other websites and blogs).
- Participation in meetings, committees and online forums.

The PME department has no responsibility in project implementation, except at the planning stage, where it is responsible for:
- Ensuring that the project objectives are measurable.
- Developing or reviewing the M&E plan.
- Checking the overall project logic, using the logical framework approach.