MONITORING, EVALUATION AND THE 12 COMPONENTS OF A FUNCTIONAL M&E SYSTEM


Saturday, June 16, 2018
By Michael Jumba

Image 1: The M&E team in an organisation discussing project progress. Source: Freepik.com

Although often used together, something that has led many in layman circles to assume they mean the same thing, monitoring and evaluation are two different and distinct concepts.

Monitoring is the tracking of progress from the baseline, using quantitative data that is easy to collect on an ongoing basis. Although monitoring mostly collects quantitative data, qualitative data can also be collected, for example when conducting a Citizens' Report Card on a project. Monitoring is carried out for a number of reasons, including: for internal use by project managers and staff in project management; for internal use by the organisation at the regional, national and/or international level; for planning and management purposes; and to address the organisation's accountability to stakeholders.

However, monitoring alone is not sufficient for drawing conclusions or explaining why desired results were not realised. Nor is monitoring alone enough to show the direction in which an intervention is headed. This is what makes evaluations relevant.

Evaluation, on the other hand, is the systematic, evidence-based interrogation of a programme, policy or project. Evaluations have the following characteristics: (i) they apply the five OECD/DAC evaluation criteria, namely relevance, effectiveness, efficiency, impact and sustainability; (ii) they consider cross-cutting issues such as poverty, gender and the environment; and (iii) they analyse the intervention logic (e.g. the logframe).

With regard to the five OECD/DAC evaluation criteria: relevance focuses on establishing whether the intervention is aligned with local and national priorities, or is solving the identified problem or needs of the target population. Effectiveness asks whether the objectives of the development intervention were achieved, while efficiency asks whether those objectives are being achieved economically. Impact asks whether the development intervention contributed to reaching higher-level development objectives, for example the Sustainable Development Goals (SDGs). Finally, sustainability asks whether the positive effects or outcomes of a development intervention will continue to be practised by the target population even after the intervention has ended. A good example: will farmers continue to plant trees of their own volition even after an NGO's five-year tree-planting campaign has concluded?

Evaluations are carried out for a number of reasons: a) needs assessments assess the situation before an intervention is implemented; b) process evaluations are carried out midway through implementation, with the intention of understanding the status of the intervention and thereby improving performance; and c) impact evaluations are carried out to attribute change in the target population to a development project or programme.

When married together, monitoring and evaluation do not lose their distinctive characteristics; rather, they become one fundamental tool that project and programme managers can use to encourage ongoing learning for the improvement of development interventions. For this to work properly, monitoring and evaluation ought to be closely related, interactive and, above all, mutually supportive. In other words, in organisations and government agencies where good monitoring and evaluation take place, monitoring feeds into process and summative evaluations, on top of proper baseline studies being conducted before interventions are implemented.

There are a number of benefits to conducting good M&E, including: improved management; enhanced performance and effectiveness of interventions; enhanced efficiency and value for money; and, most important, increased transparency and accountability.

At the centre of monitoring and evaluation are indicators. So what is an indicator? An indicator is a variable that measures a specific aspect of a programme, policy or project. Indicators are used to demonstrate that activities were implemented as planned, or that the programme has influenced a change in a desired outcome. When choosing indicators for monitoring and evaluation, it is important to have indicators at all levels, namely: inputs, activities, outputs, outcomes and impacts. In addition, indicators ought to be Simple, Measurable, Attributable, Reliable and Time-bound (the SMART criteria). M&E indicators enable comparison against a baseline across different time periods, as well as comparisons across interventions. In other words, without indicators for measuring progress and evaluating impact, monitoring and evaluation would be impossible.
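As a minimal sketch of what comparing an indicator against its baseline and target can look like in practice (the indicator name, baseline and readings below are hypothetical figures invented for illustration, not drawn from any real project):

```python
# Hypothetical example: tracking an output indicator against its baseline
# and end-of-project target. All figures are illustrative only.

def percent_of_target(baseline: float, current: float, target: float) -> float:
    """Share of the baseline-to-target distance covered so far, in percent."""
    if target == baseline:
        raise ValueError("Target must differ from baseline")
    return 100.0 * (current - baseline) / (target - baseline)

# Indicator: "Number of trees planted" (hypothetical tree-planting project)
baseline, target = 0, 50_000
readings = {"Year 1": 12_000, "Year 2": 27_500}

for period, value in readings.items():
    pct = percent_of_target(baseline, value, target)
    print(f"{period}: {pct:.1f}% of target")  # Year 1: 24.0%, Year 2: 55.0%
```

The same function supports comparisons across interventions, since each intervention's progress is expressed on a common percentage scale regardless of the indicator's units.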
If we are to have any practical M&E in an organisation or government agency, a functional M&E system, as outlined by the UNAIDS (2008) framework, ought to be in place. A functional M&E system not only provides essential data to meet programme/project needs, but also improves the response and operations of an intervention. Should an organisation or government agency choose not to have a functional M&E system, then its M&E will be merely decorative.

Since it came to the fore in 2008, "the Organising Framework for a Functional National HIV M&E System" by UNAIDS has become the internationally recognised standard for functional M&E systems. The framework comprises twelve (12) components organised into three (3) onion-like rings: the outer ring, the middle ring and the centre of the onion. The outer ring broadly represents human resources, partnerships and planning, the ingredients that are the pillars of data collection and data use. The middle ring broadly represents the mechanisms through which data are collected, verified and transformed into useful information. The centre of the onion represents the central purpose of an M&E system: using the gathered and processed data as evidence to support decision-making while learning at the same time (see the diagram below for illustration).

The 12 Components of a Functional M&E System

THE OUTER RING
1. Organisational structures with M&E functions
2. Human resource capacity for M&E
3. Partnerships for M&E
4. M&E plan
5. Costed annual work plan
6. M&E advocacy, communication and culture

THE MIDDLE RING
7. Routine monitoring
8. Surveys and surveillance
9. Databases
10. Supervision and data auditing
11. Evaluation and research

CENTRE OF THE FRAMEWORK
12. Data dissemination and use

Our solutions

Are you in an organisation or government agency that is in need of a monitoring and evaluation system? Our solutions are as follows:

1. We conduct a baseline study of the organisation or government agency to establish the status of the 12 components of a functional M&E system. This study includes the barriers to establishing a functional M&E system.
2. We assess the human resources in the organisation or government agency to establish whether they have knowledge of M&E.
3. Once we have this information, we work with the client step by step to develop a functional M&E system, complete with indicators for measuring progress and reporting procedures.
4. We finalise the process by setting up databases and supporting infrastructure for archiving the generated information, so that it can be retrieved when the need arises in the future.

Contact us; we will be glad to work with you in this endeavour.
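As a purely illustrative aside, the kind of baseline checklist used in step 1 can be sketched in code. The component names below follow the UNAIDS (2008) framework, but the present/missing scoring, the function name and the example organisation are invented for this sketch and are not part of the framework itself:

```python
# Illustrative only: component names follow UNAIDS (2008); the
# present/missing checklist device is made up for this sketch.

FRAMEWORK = {
    "outer ring": [
        "Organisational structures with M&E functions",
        "Human resource capacity for M&E",
        "Partnerships for M&E",
        "M&E plan",
        "Costed annual work plan",
        "M&E advocacy, communication and culture",
    ],
    "middle ring": [
        "Routine monitoring",
        "Surveys and surveillance",
        "Databases",
        "Supervision and data auditing",
        "Evaluation and research",
    ],
    "centre": ["Data dissemination and use"],
}

def missing_components(present: set[str]) -> list[str]:
    """List framework components not yet in place at an organisation."""
    return [c for ring in FRAMEWORK.values() for c in ring if c not in present]

# Hypothetical organisation with only three components in place:
in_place = {"M&E plan", "Routine monitoring", "Databases"}
gaps = missing_components(in_place)
print(f"{len(gaps)} of 12 components still missing")  # 9 of 12
```

Grouping the components by ring keeps the assessment aligned with the onion structure described above, so gaps can be reported per ring as well as overall.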