European Commission, Research Directorate-General
A methodology for assessing the impact of the Marie Curie Fellowships
VOLUME TWO: Methodology Manual and Implementation Plan

Contents - Volume Two

1. Introduction to the Methodology
   1.1 General methodological framework
   1.2 Structure of the methodology
   1.3 Implementation of the methodology
2. Presentation of the methodology
   2.1 ACTIVITY ONE: Setting the parameters
       TASK ONE: Scoping the study
       TASK TWO: Setting the timeframe
   2.2 ACTIVITY TWO: Variables and Indicators
       TASK ONE: Selection of variables
           Step One - Identification of candidate variables
           Step Two - The 'variable' selection tool
       TASK TWO - Indicator selection process
           Step One - Candidate indicators
           Step Two - Indicator filtration tool
   2.3 ACTIVITY THREE: Research
       TASK ONE: Setting up a longitudinal survey capability
           Step One: Laying out the survey instruments
           Step Two: Longitudinal survey question strategy
           Step Three: Survey question types and formats
           Step Four: Orienting the instruments to impact assessment
           Step Five: Piloting the questions
       TASK TWO: Setting up a periodic survey capability
       TASK THREE: Direct contact research
           Step One: Setting up a consultative group
           Step Two: The participant interview programme
   2.4 ACTIVITY FOUR: Analysis
       TASK ONE: Data management and processing
       TASK TWO: Interpreting survey data in an impact assessment context
           General note on showing group characteristics
       TASK THREE: Analysis of data from interviews and consultative groups
3. Implementation Plan
   3.1 Overview of the Implementation Plan

       General note on the administration of surveys
       PHASE ONE: Launching the Impact Assessment Exercise
           1. The Inaugural Impact Assessment
              a. Study and observation periods
              b. Work programme
           2. The Retrospective Impact Assessment
              a. Study and observation periods
              b. Work programme
       PHASE TWO: First full impact assessment cycle
           a. Study and observation periods
           b. Work programme
   3.2 Integrating data-gathering and analysis regimes

List of Tables
   Table 1 - Relating indicators to variables
   Table 2 - Example of an indicator evaluation format and core questions
   Table 3 - Generic information types in the PAN framework
   Table 4 - Sample information characteristics of ex ante/ex post question sets
   Table 5 - Demonstration of question formulation for longitudinal surveys
   Table 6 - Setting up 'additionality' linkages
   Table 7 - Re-focussing the question strategy to the host perspective
   Table 8 - Required sets of longitudinal research instruments
   Table 9 - Recommended composition of the Consultative Group
   Table 10 - Inaugural Assessment Work Programme
   Table 11 - Retrospective Assessment Work Programme
   Table 12 - Work Items for Study Period One (Phase Two)
   Table 13 - Work Items for Study Period Two (Phase Two)
   Table 14 - Work Items for Study Period Three (Phase Two)

List of Figures
   Figure 1 - Basic methodological sequence
   Figure 2 - Activities, Tasks and Steps
   Figure 3 - General schematic outline
   Figure 4 - Impact demonstration potential of Fellowship outcomes
   Figure 5 - The observation period for a Fellowship
   Figure 6 - Study periods and observation periods
   Figure 7 - Overall variable and indicator development scheme
   Figure 8 - 'Variable' selection tool
   Figure 9 - Indicator selection process
   Figure 10 - Basic survey format
   Figure 11 - Impact flows
   Figure 12 - Question Area configuration
   Figure 13 - Framework for direct contact research
   Figure 14 - Implementation schedule for Phase One and Phase Two
   Figure 15 - Phase One Implementation Schedule
   Figure 16 - Phase Two Implementation Schedule
   Figure 17 - Generic schedule for ex post-ex ante survey cycles

1. Introduction to the Methodology

In this Volume, the Impact Assessment Methodology for the Marie Curie Fellowships (MCF) is laid out in operational form. Based on the conceptual framework given in Volume One, Section 2 sets out the methodology in terms of the procedures and principles for data-gathering and analysis that will be required in order to assess the impacts of the Fellowships. Section 3 sets out an implementation plan for an operational impact assessment exercise based on indicators and research instruments that have been developed according to the methodology.

At the beginning of the conceptual framework the term 'methodology' was defined as a configuration of research and analytical methods and techniques that has been oriented to a specified research objective. Reflecting the highly complex and potentially fluid requirements for demonstrating impacts in the MCF context, some of the elements presented in this Volume are prescriptive while others are discretionary. Prescriptive elements define the basic principles, procedures and instruments that are relevant to implementing the methodology as described in the implementation plan (Section 3). Discretionary elements define complementary and/or supplementary approaches to the collection and/or analysis of data that may be employed in the implementation on a selective or optional basis.

The methodology was developed with reference to six pragmatic criteria that incorporate available best-practice and the Commission's own original terms of reference for impact assessment.

1. High probability - The methodology focuses on the kinds of impacts that can be shown with greatest confidence. Approaches that have a low probability of demonstrating 'additionality' from the Fellowships have been excluded or discussed in peripheral contexts.

2. Basic impact assessment capability at reasonable cost - The methodology is oriented in the first instance to providing the Commission with a basic impact assessment capability that can be operationalised at reasonable cost relative to its potential yield.

3. Immediate implementation - The prescriptive elements of the methodology focus on research techniques and processes that can be assembled and operationalised relatively quickly.

4. Backward and forward capability - Subject to the caveats outlined in the conceptual framework concerning the availability of comparable data, the methodology is designed to enable impact assessments of past, present and future Fellowships.

5. A 'living' methodology - The methodology contains tools and processes for the further adaptation and evolution of the methodology. They allow also for

the eventual integration of the MCF evaluation and monitoring regimes with the impact assessment regime.

6. Standard analytical techniques - The methodology has been designed to gather data in a form that is amenable to standard qualitative and quantitative techniques as used commonly in the social sciences. No prescriptive element in the basic methodology requires research and analytical expertise that would not be available internally to the Commission, or from a range of potential contractors.

1.1 General methodological framework

The conceptual framework given in Volume One is central to the methodology. It serves two structural purposes:

1. It informs the process by which all of the research elements (questions, indicators, processes, tools etc.) are identified, defined and structured.
2. It informs the analytical processes by which assessments of impact are derived from the research data.

The structural position of the conceptual framework in the basic methodological sequence is illustrated in Figure 1.

Figure 1 - Basic methodological sequence (schematic: the conceptual and analytical framework links the problem of identifying impacts to variable and indicator development, research methods selection, application of the methodology, and the analysis and interpretation of impacts)

1.2 Structure of the methodology

Basically, the methodology is structured as a group of methods, techniques, instruments, tools and processes that are organised and presented sequentially in terms of four ACTIVITIES:

- ACTIVITY ONE: Setting Parameters - Defining what is to be included and excluded from an impact assessment study within a given timeframe.
- ACTIVITY TWO: Variables and Indicators - Identifying variables and developing impact assessment indicators that can be operationalised in research instruments.
- ACTIVITY THREE: Research - Developing core research capabilities for impact assessment, including the design of instruments and the collection and organisation of data.
- ACTIVITY FOUR: Analysis - Analysing data and demonstrating impacts.

Each ACTIVITY will be broken down into TASKS which in turn contain a number of sequential Steps. The general structure of ACTIVITIES, TASKS and Steps is illustrated in Figure 2.

Figure 2 - Activities, Tasks and Steps (schematic: each ACTIVITY contains a number of TASKS, and each TASK contains sequential Steps)

The ACTIVITIES are structured sequentially. All of the tasks in each ACTIVITY are oriented towards specific outputs, each enabling tasks in the next ACTIVITY:

- Activity ONE - PARAMETERS: outputs - study scope and timeframes
- Activity TWO - INDICATORS: outputs - selections of variables and indicators
- Activity THREE - RESEARCH: outputs - research frameworks and instruments
- Activity FOUR - ANALYSIS: outputs - impact demonstrations

The Commission's original specifications for the methodology called for the provision of a comprehensive capability for assessing a very wide range of possible impacts on potentially a very large group of stakeholders. Eventually, such an enterprise could involve many individual studies, each covering different aspects of the Fellowships with respect to different stakeholder groups at different times. Obviously, it is not possible to lay out the details for every possible type of study that could be devised. Consequently, the methodology is designed to be adaptable and flexible such that the Commission may use it to explore different impacts of the MCF according to priorities and needs that may emerge as the programme evolves. For example, from time-to-time the Commission may wish to use the methodology to inaugurate specialised studies that focus on the characteristics of specific groups, regions and disciplines, or to track the impact of the Fellowships relative to specific stimuli or events.

Figure 3 shows the general schematic outline of the methodology. It illustrates three essential features:

1. The process for selecting research variables and indicators is central to all other aspects of the methodology. The choice of variables and indicators largely determines the kind of research that is feasible and the quality of the results that can be expected.

2. The primary focus of the methodology is on the longitudinal - 'over time' - dimension. The methodology is aimed at the eventual construction of a viable longitudinal impact assessment monitoring capability for assessing impacts in general areas that apply to the whole range of Fellowships, and/or for assessing the impacts of specific groupings of Fellowships in selected circumstances.

3. The periodic capabilities are based also on a longitudinal research strategy. Specially constructed periodic instruments are the only viable alternatives for gathering data on the impacts of most Fellowship projects completed before about 1997-98, or, at the discretion of the Commission, for specially focussed research into individual types of impacts on individual stakeholder groups.

Figure 3 - General schematic outline (schematic: ACTIVITY 1 sets scope and timeframe; ACTIVITY 2 selects variables and indicators; ACTIVITY 3 provides a core monitoring capability for current and future Fellowship projects (FP4 - FP5+) and a periodic capability for completed Fellowship projects (FP3 - FP4); ACTIVITY 4 covers analysis)

1.3 Implementation of the methodology

The IMPLEMENTATION PLAN given in Section 3 outlines an operational series of impact assessments employing research instruments that have been pre-developed according to the methodology as presented in Section 2. The implementation is set up in two phases, one oriented to assessing the impacts of completed Fellowships awarded in FP4, the other to FP5 Fellowship awards.

2. Presentation of the methodology

The following sections present the various tasks in each ACTIVITY in the order in which they are to be addressed. The description of each task is accompanied by descriptions of the various methods, techniques, tools and processes that apply in each case. Where required, this is supplemented by examples and/or demonstrations. In cases where more detailed material is required, the reader will be referred to an appropriate ANNEX.

As indicated in Section 1, the potential range of applications for this methodology is very large. Therefore, throughout this Section, the features of the methodology are presented with reference to the development of a core impact assessment monitoring capability based on survey techniques, along with a periodic survey capability that has been derived from the monitoring capability. The IMPLEMENTATION PLAN given in Section 3 is an application of the core impact assessment monitoring capability and the periodic survey capability as presented below.

However, the range of possibilities for applying the methodology goes beyond the implementation as outlined in Section 3. On a discretionary basis, implementers can use the methodology to design any number of supplementary impact assessments. For this reason, the methodology is presented on a stand-alone basis. In this way, it can serve both as a reference resource for Section 3, and as the foundation on which further impact assessments can be constructed.

2.1 ACTIVITY ONE: Setting the parameters

The first step in inaugurating any research initiative using the methodology is to define the parameters of the study in terms of:

- the scope of the study;
- the timeframe of the study.

TASK ONE: Scoping the study

The broad objectives for impact assessment in the MCF context were described in Section 1.1 of the conceptual framework. Included is a huge array of actual and potential actor relationships, impact types and impact sources:

- Impacts on the careers of individual scientists in terms of key factors such as employability, expertise, mobility, international standing, access to further training, equal opportunities etc.
- Impacts on host institutions in terms of key factors such as development of research capabilities, research quality, collaboration opportunities, educational outputs etc.
- Impacts on the general development of European science and technology in terms of knowledge creation and transfer, development of new technologies, the creation of 'spin-off' companies, scientific and technical cohesion etc.
- Impacts on European Union research programmes in terms of public awareness, influences on national and regional research policies and research programme administration.

Setting the parameters of the study involves making preliminary decisions about whom and what to include and exclude. These decisions will be useful in guiding the selection of variables and indicators in ACTIVITY TWO and in the selection of research approaches and research instrument design in ACTIVITY THREE.

As noted in Section 4.5.3 of the conceptual framework, the possibility to attribute impacts specifically to the Marie Curie Fellowships attenuates sharply as the focus moves away from core areas that are directly associated with the behaviours of participants (i.e. Fellows and hosts). Furthermore, as set out in Sections 3.1.1 and 4.5.2 of the conceptual framework, most of the information that will show impact will be available only from stakeholders in the participant arena. Most of the potential impacts will relate in particular to Fellows - how they have been impacted by the Fellowships and how they have projected these impacts into their professional lives. Accordingly, Figure 4 (a reproduction of Figure 4 in Volume One) illustrates that the strongest potential to show impacts will be in the participant arena and the weakest in the public arena. Likewise, Figure 4 shows that of the two main types of outcome from publicly-funded research - (1) enhanced institutional and individual capabilities, and (2) discoveries and innovations - by far the strongest attribution potential will be with regard to impacts that are linked in some way with the enhancement of institutional and individual capabilities. Figure 4 illustrates the relationship between the scope that is defined for the study and the relative degrees of difficulty that are likely to be encountered in demonstrating impacts.

Figure 4 - Impact demonstration potential of Fellowship outcomes (schematic: impacts of institutional and individual capabilities and of discoveries and innovations, ranging from strongest attribution potential in the participant arena, through the commercial and programme arenas, to weakest in the public arena)

Clearly, the scope with the highest potential and lowest risk will focus on the impacts of institutional and individual capabilities, as concentrated in the impact arenas where greatest direct impact is likely to be detectable - i.e. more in the participant arena than in the commercial, programme and public arenas. This attenuation factor does not preclude investigation of impacts in the more remote impact arenas, but excursions of this kind must be regarded as 'experiments' carrying all of the attendant risks. For studies of this kind, various prescriptive elements of the methodology would have to be adapted specially, and perhaps supplemented by discretionary methods and techniques.

TASK TWO: Setting the timeframe

Setting the timeframe is important for both the core monitoring capability and for periodic studies. Basically, there are two aspects to be considered:

1. Researchers must define a study period that sets out a time frame during which all observations will be made.
2. Researchers must define an observation period that sets out the time frame during which each individual Fellowship that is included in the study period will be examined.

Fellowships are awarded intermittently and the duration of Fellowship projects can vary. This means that any given study period will encompass many Fellowships at various stages of completion. However, impact assessment cannot begin immediately following the completion of a Fellowship project. A minimum follow-up interval is

required in order to give impacts the opportunity to occur. It is unlikely that significant impacts can be detected if this interval is less than two years.

The total duration of a study period can be determined arbitrarily - e.g. Fellowship projects awarded or begun between specified dates. Alternatively, the duration could correspond to an external point of reference - e.g. Fellowships awarded in the last two years of FP4. In every case it is important that the total duration of the study period be long enough to encompass a sufficient quantity of Fellowships, but short enough that the time periods allowed for the occurrence of impacts are reasonably comparable.

The observation period is the total time to which data gathered about any single Fellowship will refer. The observation period is defined generically as the duration of the funded Fellowship project plus a pre-determined minimum follow-up interval. This definition ensures that the observation period allows sufficient time for impacts to become detectable. The generic structure of an observation period is illustrated in Figure 5.

Figure 5 - The observation period for a Fellowship (schematic: the observation period comprises the duration of the project, which varies with the Fellowship, plus a follow-up interval that is similar for each Fellowship)

The duration of each Fellowship project is fixed by the terms of the Fellowship award. However, the duration of the follow-up interval is determined by the impact assessment research team. This interval is a key factor in implementing the methodology and it is described in more detail in the implementation plan (see Section 3 below). The duration of the follow-up interval is standardised to the extent that the observation period for all Fellowships examined in a given study period will incorporate a follow-up interval of a similar length - e.g. a minimum of two years and a maximum of three years. As individual Fellowship projects begin and end at different points in time, a degree of leeway is required so that data can be collected with reference to groups of Fellowship projects that begin and end within roughly the same time period.

For pragmatic reasons, it is best in most circumstances that the follow-up interval not exceed three years. If the interval is longer than this, chances may diminish that potential respondents can be contacted. Moreover, propensity to respond to questionnaires decreases substantially over time. On the other hand, if the follow-up interval is too short there may not have been time for impacts to have occurred.

The most practical length for a study period in the MCF context is between three and four years. For example, a hypothetical study period beginning on 1 January 2000 and ending on 31 December 2003 would encompass observation periods for Fellowship projects begun and completed in the two year period between 1 January 2000 and 31

December 2001. The minimum follow-up interval in this example would be two years and the longest about three years, the majority falling somewhere in between. The relationship between study periods and observation periods is illustrated in Figure 6.

The gathering of data is geared in the first instance to the observation periods. For shorter study periods, aggregation of the data can occur relatively quickly. However, if the study period is long, data will be compiled and collated as the impact assessment progresses. Interim results can be assembled and reported at selected intervals and aggregated into cumulative reports at the end of each study period.

Figure 6 - Study periods and observation periods (schematic: the observation period of each Fellowship nested within the overall study period)
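To make the timing rules concrete, the sketch below checks whether a Fellowship's observation period (project duration plus a minimum follow-up interval) falls wholly inside a chosen study period. It is an illustrative aid only, not part of the methodology: the class, function and field names are invented for this example, the follow-up interval is fixed at two years, and the dates are hypothetical.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Fellowship:
    fellow_id: str
    project_start: date
    project_end: date          # fixed by the terms of the Fellowship award

def observation_end(f: Fellowship, follow_up_years: float = 2.0) -> date:
    """Observation period = duration of the funded project plus a
    pre-determined minimum follow-up interval (approximated in days)."""
    return f.project_end + timedelta(days=round(365.25 * follow_up_years))

def observable_within(f: Fellowship, study_start: date, study_end: date) -> bool:
    """True if the whole observation period fits inside the study period,
    i.e. at least the minimum follow-up interval elapses before the study closes."""
    return study_start <= f.project_start and observation_end(f) <= study_end

# Purely hypothetical dates: a four-year study period and one Fellowship.
study_start, study_end = date(2000, 1, 1), date(2003, 12, 31)
fellowship = Fellowship("MCF-0001", date(2000, 3, 1), date(2001, 9, 30))
print(observable_within(fellowship, study_start, study_end))  # True
```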

2.2 ACTIVITY TWO: Variables and Indicators

ACTIVITY TWO pertains generally to the selection of key research variables and indicators. Variables are large categories of behaviours, characteristics or qualities in which variance can be observed. Indicators are empirical observations that relate to given variables. Observed changes in the indicators are taken to indicate change in the variable with which they are associated.

Examples of variables that are relevant to the MCF include 'career progress' or 'academic excellence'. Each of these items suggests a range of contributing factors. For example, factors like education and skill levels, personal qualities, geographical location, rates of upward and lateral mobility, pay and so forth, all contribute collectively to a description of the dynamics of the 'career progress' variable. To the extent that any of these factors can be observed systematically, the potential exists to show the 'career progress' of a Marie Curie Fellow.

All of the TASKS in ACTIVITY TWO revolve around a set of 'filtration tools' designed to ensure that the indicators finally selected are those that will have the highest impact demonstration potential. The results of this operation are fed into the research instrument development process in ACTIVITY THREE. The variable and indicator selection process is the key to the methodology as a whole. As explained in the conceptual framework, in an impact assessment the probability of identifying the most productive indicators increases if they are closely associated with the most unique or characteristic features of the programme whose impacts are being assessed. The following unique features and characteristics of the MCF were used throughout the development of this methodology:

1. European focus - The MCF is oriented to the evolution of a Europe-wide research capability, and not to the specific requirements or objectives of any individual Member State.
2. Universality - Participation in the MCF is open to any individual or institution (firms as well as universities and research institutes) in any Member State.
3. Subsidiarity - The MCF is designed to complement nationally-funded post-doctoral research and not to duplicate or provide alternatives to programmes funded by the EU Member States.
4. Mobility - The MCF is structured around the principle of mobility - both in the sense of researcher mobility and in the sense of the mobility of capabilities and competencies between Member States.
5. Integration - The MCF is linked into the overall agenda for EU-supported research and technical development, primarily through the Framework Programmes.

Although all items on the above list are relevant, 'mobility' and 'universality' are by far the most significant in terms of demonstrating impacts that are attributable to the Fellowships. The primary and most unique feature of the MCF is that it enables the mobility of researchers between Member States on a universal basis. Most of the other features in the list have strong mobility/universality implications.

IMPORTANT NOTE: The above list is a description of the features and characteristics of the Fellowships that applied at the time this methodology was

developed. These may change over time. It is important that all indicators used in research be reviewed periodically, and especially subsequent to any significant changes in the scope, objectives or characteristics of the Fellowship programme.

Figure 7 - Overall variable and indicator development scheme (schematic: TASK ONE selects variables via candidate variables and the variable selection tool; TASK TWO selects indicators via candidate indicators and the indicator selection tool)

The objective of ACTIVITY TWO is to develop effective impact assessment indicators that relate to the most appropriate variables in an impact assessment context. The filtration process described in TASK ONE and TASK TWO is a pivotal element in indicator selection. All potential variables and indicators must be fed through the filtration process. The overall scheme of the ACTIVITY TWO process is illustrated in Figure 7.

TASK ONE: Selection of variables

The methodology works outwards from a selection of variables, defined as broad categories of behaviours, characteristics, and outcomes that relate to any of the types of activities that are undertaken as part of a Fellowship project. TASK ONE involves identifying candidate variables and then comparing them to Fellowship programme criteria in terms of impact assessment.

Step One - Identification of candidate variables

Virtually any general category of behaviours, characteristics or qualities that pertain to a post-graduate and/or post-doctoral research and training environment can be selected as a candidate variable.

The identification of candidate variables is largely an open-ended and qualitative exercise, but most variables will relate to the general parameters and expectations for the Fellowship programme as a whole. Discussion between the research team and Commission officials involved in the direct administration of the programme will assist in defining candidate variables. It is important that a reasonably wide range of candidate variables is identified, and that none are dismissed out-of-hand before their potential for yielding useful and usable indicators is assessed in STEP TWO.

Step Two - The 'variable' selection tool

Clearly, not all possible variables will yield demonstrations of impacts, and many may do so only in areas that are trivial or otherwise peripheral for the impact assessment. It is important, however, not to dismiss any potential variable out-of-hand. To systematise the elimination process, all candidate variables as identified in Step One will be fed through a variable selection tool as illustrated in Figure 8.

Figure 8 - 'Variable' selection tool (schematic: each candidate variable is assessed against the unique features and characteristics of the Fellowships and against other policy-relevant criteria; variables that pass are selected, the rest are rejected and filed)

The selection tool is a simple device whose main objective is to ensure that the suitability of each variable is assessed consistently in terms of its potential to yield indicators in contexts that are relevant to the objectives of the impact assessment. The tool adds structure when comparing the characteristics of candidate variables with:

- the unique features and characteristics of the Fellowships as defined at the time the study is inaugurated;
- any other policy-related criteria that have been specifically defined as being relevant to the Commission's objectives in undertaking the impact assessment (e.g. specific planning, policy, public interest or other agendas).

Using the tool involves the following processes:

1. The implementation team assembles and validates a list of the unique features and characteristics of the Fellowship programme that applies in the relevant time period.
2. The implementation team identifies any current policy-relevant criteria with Commission officials who are responsible for the direction and administration of the Fellowship programme.
3. Each candidate variable is evaluated with reference to the set of unique features and characteristics. If most aspects of the evaluation are positive - particularly as concerns the crucial 'mobility' and 'universality' elements - the variable is then evaluated with reference to any policy-relevant criteria (as identified in '2' above).
4. Candidate variables for which the evaluation is negative with respect to the list of unique features and characteristics can be rejected. If they do not reflect the dynamics of the Fellowships, it is unlikely that relevance to policy-relevant criteria will be sufficient to make them useful as variables.
5. Some candidate variables may compare well with the list of unique features and characteristics, but otherwise they may be irrelevant to the objectives of the impact assessment. These candidates should be rejected on this basis.
6. It is good practice to keep rejected candidate variables on file along with the reasons for their rejection. As circumstances change, variables that are deemed less suitable for one type of impact assessment initiative may prove useful in other initiatives. The 'reject' file also minimises the risk of work duplication over the life-cycle of the methodology.

The final choice of which variables to use will depend to a large extent on the quality of the indicators as identified in TASK TWO. The following six variables have been pre-selected by the developers of the methodology using the tool described above:

- career progress
- research excellence
- contacts and networks
- public impact of research
- commercial spin-offs
- scientific outputs

These variables will be used as examples throughout the presentation of the methodology, and they form the basis of the pre-developed research instruments that will be used in the first phases of implementing the impact assessment (see Section 3 - Implementation Plan).
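As an illustration of how the tool's logic might be recorded in practice, the following sketch expresses the team's qualitative judgements as simple yes/no assessments per feature. All names and the threshold used are assumptions made for this example; the judgements themselves remain a matter for the implementation team.

```python
UNIQUE_FEATURES = ("European focus", "universality", "subsidiarity",
                   "mobility", "integration")
CRUCIAL_FEATURES = ("mobility", "universality")

def select_variable(name, feature_judgements, policy_relevant, reject_file):
    """feature_judgements: dict mapping each unique feature to True/False,
    recording the team's assessment of the candidate variable.
    policy_relevant: the team's judgement against current policy-relevant criteria.
    Rejected candidates are filed together with the reason, for later re-use."""
    positives = sum(bool(v) for v in feature_judgements.values())
    crucial_ok = all(feature_judgements.get(f, False) for f in CRUCIAL_FEATURES)
    if positives <= len(feature_judgements) // 2 or not crucial_ok:
        reject_file.append((name, "does not reflect the unique features of the MCF"))
        return False
    if not policy_relevant:
        reject_file.append((name, "not relevant to the objectives of this assessment"))
        return False
    return True

reject_file = []
judgements = {feature: True for feature in UNIQUE_FEATURES}
print(select_variable("career progress", judgements, True, reject_file))  # True
```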

TASK TWO - Indicator selection process

The indicator selection process involves two steps that are designed to ensure that appropriate and productive indicators are identified for each chosen variable. The whole process is illustrated in Figure 9.

Figure 9 - Indicator selection process (schematic: Step One generates candidate indicators from the selected variables; Step Two passes them through the indicator filtration tool - PAN classification, data group classification, and evaluation of impact demonstration potential - to arrive at the final indicators)

Step One - Candidate indicators

Basically, an indicator is an aspect of a variable that can be observed systematically. The implementation team can generate candidate indicators through open group discussion - brainstorming - of the dynamics of each variable selected in TASK ONE. Candidate indicators can be identified also through desk research. Previous programme evaluations of the MCF or other related programmes are often good sources of candidate indicators. Many of the information categories in the existing MCF administrative monitoring regime (the intermediate and final scientific reports filed by Fellows and supervising scientists) can be re-focussed as impact assessment indicators. Candidate indicators can be drawn also from analogous or related studies of post-graduate and post-doctoral activities as performed by the Commission or by other bodies, and from secondary literature.

Identification of candidate indicators involves breaking down each of the variables selected in TASK ONE into contributing factors. Table 1 illustrates the relationship of variables to indicators using the 'career progress' and 'academic excellence' examples. Each of the indicators in Table 1 has the following characteristics:

- It has the potential to indicate change with respect to some aspect of a variable.

- It is directly or indirectly observable and amenable in principle to measurement.
- Either existing sources of data can be consulted with respect to the indicator, or it is feasible to collect data through research.
- It can be used (at least to some extent) to relate the performance of a selected group or individual with that of other comparable entities - by cross-referencing, establishing correlation, examining distributions, benchmarking etc.

Table 1 - Relating indicators to variables

Variable: 'career progress'
Indicators:
- employed or unemployed?
- employment entry level
- promotions
- change of employment type (e.g. from engineering to management)
- location of employment
- relationship of employment to skills

Variable: 'academic excellence'
Indicators:
- class or standing of degrees
- rate of promotion in academic hierarchy
- success in funding further research
- publication record
- awards, honours and prizes

Step Two - Indicator filtration tool

The indicator 'filtration' process embodied in the tool is designed:

- to sort candidate indicators according to the kinds and degrees of impacts each selected indicator is likely to indicate, and which kinds of evidence and analytical capabilities may be required and/or available;
- to eliminate all but the most relevant indicators that have the highest likelihood of being productive in demonstrating impacts.

The filtration tool consists of three processes:

- PROCESS ONE puts the 'Production-Application-Network' (PAN) concept into operation (as explained in Section 4.5.7 of the conceptual framework).
- PROCESS TWO evaluates indicators in terms of the characteristics and availability of different types of data (see the discussion in Section 4.5.8 of the conceptual framework).
- PROCESS THREE considers the impact assessment potential of each candidate indicator.

PROCESS ONE - PAN classification

Candidate indicators will be classified first according to categories in the PAN framework - i.e. in terms of which type or types of impact they have the potential to show:

- P (Production indicators) are associated with factors that relate to participation in the Fellowship and to the production of any form of output and/or outcome.
- A (Application indicators) are associated with factors related to the uses and benefits that accrue from the application of Fellowship outputs and/or outcomes.
- N (Network indicators) are associated with factors related to the formation of new network relationships or the evolution of existing ones.

In some cases, an indicator will have the potential to indicate more than one type of impact and can be classified into more than one category. In particular, and consistent with the rationale behind the PAN framework (see Sections 3.2 and 4.5.7 of the conceptual framework), many P and A indicators will have N dimensions. For example, indicators associated with the 'commercial spin-off' variable can be both A-indicators, in that they demonstrate an application of a Fellowship outcome, and N-indicators, in that data on the locations of spin-off enterprises indicate network-building activities. The PAN classification will facilitate the remaining two processes in the indicator selection tool, and assist also in the analysis processes at the impact demonstration stage.

PROCESS TWO - Data group classification

Process Two provides the first opportunity to reduce the size of the candidate indicator group. This process employs the Group classification system that was explained in Section 4.5.8 of the conceptual framework. The Groups in this case refer to the types of data resources required by the indicator:

- Group 1 indicators require only data that is collected already by the Commission in connection with the Fellowships;
- Group 2 indicators require data gathered specially for the impact assessment;
- Group 3 indicators require access to external sources of data (e.g. patent and publications databases, or official government statistics).
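The sketch below shows one possible way of recording these two classification passes on each candidate indicator - the PAN categories it may show and the data Group it requires - together with the carry-forward of Group 1 and Group 2 indicators described in the three operations that follow. The structures, names and example assignments are illustrative assumptions only.

```python
from dataclasses import dataclass, field

@dataclass
class CandidateIndicator:
    name: str
    variable: str
    pan: set = field(default_factory=set)   # any of {"P", "A", "N"}; more than one allowed
    data_group: int = 2                      # 1, 2 or 3 (type of data resource required)

def partition_by_group(indicators):
    """Carry Group 1 and Group 2 indicators forward to Process Three;
    file Group 3 indicators for possible use in discretionary contexts."""
    carried_forward = [i for i in indicators if i.data_group in (1, 2)]
    discretionary_file = [i for i in indicators if i.data_group == 3]
    return carried_forward, discretionary_file

# Hypothetical classifications for two candidate indicators.
candidates = [
    CandidateIndicator("locations of spin-off enterprises", "commercial spin-offs",
                       pan={"A", "N"}, data_group=2),
    CandidateIndicator("citations relative to field averages", "academic excellence",
                       pan={"P"}, data_group=3),   # would need external bibliometric data
]
forward, filed = partition_by_group(candidates)
print([i.name for i in forward])  # ['locations of spin-off enterprises']
print([i.name for i in filed])    # ['citations relative to field averages']
```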

The Group classification process involves three operations:

1. The indicators are first evaluated according to the type of data resources that will likely be needed in order to support research using these indicators (i.e. according to Group 1, 2 or 3 criteria).
2. All indicators in Groups 1 and 2 can be carried forward immediately to Process Three for evaluation of their impact demonstration potential.
3. Group 3 indicators are separated out and filed for possible use in discretionary contexts. Group 3 indicators could be useful in determining comparative criteria, setting benchmarks, and so forth, and they could be operationalised as special studies on a discretionary basis.

PROCESS THREE - Evaluation of impact assessment potential

This final process in the indicator selection tool involves evaluating the impact demonstration potential of the reduced set of candidate indicators - i.e. all of the indicators that in principle can be supported by Group 1 or Group 2 data. The process has two goals:

1. to eliminate candidate indicators that are fundamentally problematical, irrelevant or infeasible in the context of the prescriptive elements of the study;
2. to refine the remaining indicators such that their impact demonstration potential is focussed and maximised.

The desired outcome of this process can be illustrated with reference to some of the indicators listed above in Table 1. All of these indicators are relevant to their respective variables, but not all are useful in the MCF impact assessment context. For example, the 'class or standing of degrees' indicator is relevant to the 'academic excellence' variable, but largely irrelevant to the MCF impact assessment. For post-doctoral research in particular, the accreditation factor comes into play only as a criterion for the selection of Fellowship recipients in the first instance. On the other hand, indicators like 'promotion rate', 'funding success' and 'publication record' are relevant to the 'academic excellence' variable, but they are not necessarily relevant to the MCF impact assessment unless they are qualified in some way. In each case, these indicators would have to be linked in the impact assessment research process to factors that are closely or uniquely associated with the MCF programme - i.e. the research would have to show that 'funding success' was built upon some specific outcome of the Fellowship - or otherwise compared with the performance of a control group.

The evaluation of impact assessment potential will require substantial qualitative input from the implementers, but the process can be approached systematically by subjecting each candidate indicator to a series of questions. Table 2 contains a suggested core group of questions that explore the potential technical quality of the indicators - in terms of operational feasibility - and their 'yield'

potential in terms of demonstrating impact from some aspect of the Fellowships. (Implementers are referred to the background discussion in the conceptual framework for guidance in the application and interpretation of these questions.) For illustration purposes, the table has been filled in with reference to the 'output of scientific publications' indicator, envisaged in this instance as being applied in an ex post context, 2-3 years following the completion of Fellowship projects begun in the first two years of FP4.

Table 2 - Example of an indicator evaluation format and core questions

Variable: academic excellence
Indicator: output of scientific publications
(Each question is answered 'Yes', 'No' or '?')

1. Is the indicator amenable to a known or obvious form of systematic observation and/or measurement?
2. Is it feasible to gather data relevant to the indicator using the survey method?
3. Is it sufficient to gather data relevant to the indicator using the survey method?
4. Is it feasible to gather data relevant to the indicator using qualitative (i.e. interview) methods?
5. Is the indicator significant in the time-frame selected for the study - i.e. would sufficient information be available within this time-frame to show significant variation?
6. Does the indicator require reference to an external control group in order to be significant in an impact assessment context?
7. Can an appropriate control group be identified?
8. Does the indicator require an external benchmark in order to be significant in an impact assessment context?
9. If required, can an appropriate external benchmark be established?
10. Are there theoretical and/or methodological problems in linking the indicator specifically to demonstrations of additionality in the MCF context?
11. Is it likely that analysis will be possible using known methods or close derivatives of known methods?
12. Is there a higher than acceptable likelihood of inherent bias in the data?

The example given in the table shows what is likely to be a typical outcome of the indicator evaluation process. In this case, the indicator shows promise in that some useful data can be gathered from surveys and fairly standard analytical techniques applied. On the other hand, it is open to question whether data gathered by a survey will be sufficient to show anything where publications are concerned. To show impact, the indicator may require reference to the performance of external control groups (probably in this case requiring access to Group 3 data). However, identifying this control group and/or obtaining information about its performance for comparative purposes may not be straightforward - information quantity and quality may vary by scientific field and geographical location. Moreover, in this case there are significant theoretical problems in linking publication counts to 'excellence' determinations - i.e. there is no prima facie case that higher or lower publication

quantities indicate degrees of 'excellence'. There are similar problems making a case that Marie Curie Fellows would be any more or less prone to producing quality scientific output than any other group of academics at similar stages in their careers.

For all indicators that have been evaluated in the above fashion, five courses of action are available:

1. Researchers can accept the indicator based on the 'balance' of its positive and negative features relative to the goal of demonstrating impact.
2. Researchers can peripheralise the indicator - perhaps using it in a supportive or comparative capacity in connection with other indicators.
3. Researchers can reserve the indicator for possible use in a specialised discretionary study of some description.
4. Researchers can eliminate the indicator from the frame of reference.
5. Researchers can re-focus the indicator such that it is made more relevant and useful for impact assessment.

The fifth alternative - re-focussing the indicator - is very important. It could be that the most obvious characteristics of the indicator made it unsuitable. However, focussing-in on a different aspect of the indicator could make it suitable. Again taking the 'publications' indicator as an example, it could be possible, using information gathered in a survey of Fellows, to link specific aspects of scientific output to specific features of the Fellowship programme, or to capabilities developed during the Fellowship that would likely not have been developed otherwise to the same extent. Consistent patterns of this nature could reinforce claims of additionality.

ANNEX I outlines a core group of variables and indicators that has been pre-selected by the developers of the methodology by using the above tools.
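For implementers who prefer to keep the evaluation records in machine-readable form, the sketch below restates the Table 2 core questions and the five courses of action as simple data structures, with a small helper that flags the answers most likely to signal problems. The answer codes, the helper's groupings and all names are assumptions made for illustration; they are not prescribed by the methodology.

```python
from enum import Enum

CORE_QUESTIONS = {
    1: "Amenable to systematic observation and/or measurement?",
    2: "Feasible to gather relevant data by survey?",
    3: "Sufficient to gather relevant data by survey alone?",
    4: "Feasible to gather relevant data by interview methods?",
    5: "Significant within the selected time-frame?",
    6: "Requires an external control group to be significant?",
    7: "Can an appropriate control group be identified?",
    8: "Requires an external benchmark to be significant?",
    9: "Can an appropriate external benchmark be established?",
    10: "Theoretical/methodological problems in linking to additionality?",
    11: "Analysable with known methods or close derivatives?",
    12: "Higher than acceptable likelihood of inherent bias?",
}

class IndicatorDecision(Enum):
    ACCEPT = "accept on the balance of positive and negative features"
    PERIPHERALISE = "use in a supportive or comparative capacity"
    RESERVE = "reserve for a specialised discretionary study"
    ELIMINATE = "eliminate from the frame of reference"
    REFOCUS = "re-focus on an aspect more relevant to impact assessment"

def flag_concerns(answers):
    """answers: dict question number -> 'Y', 'N' or '?'.
    Returns the question numbers whose answers point to potential problems."""
    problem_if_yes = {6, 8, 10, 12}          # a 'Yes' here raises the evidential bar
    problem_if_no = {1, 2, 3, 5, 7, 9, 11}   # a 'No' or '?' here undermines feasibility
    flags = [q for q in problem_if_yes if answers.get(q) == "Y"]
    flags += [q for q in problem_if_no if answers.get(q) in ("N", "?")]
    return sorted(flags)

# Hypothetical evaluation of the 'output of scientific publications' indicator.
answers = {1: "Y", 2: "Y", 3: "?", 6: "Y", 7: "?", 10: "Y", 11: "Y"}
print(flag_concerns(answers))  # [3, 6, 7, 10]
decision = ("output of scientific publications", IndicatorDecision.REFOCUS)
```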

2.3 ACTIVITY THREE: Research

In ACTIVITY THREE, the selected variables and indicators are made operational. This involves the formulation of research questions. The nature of these questions largely determines the design of research instruments, and how data will be gathered and formatted. The following discussion is oriented to describing the generic requirements for research instruments to assess the impacts of the Marie Curie Fellowships. The operational context for these instruments is described in the Implementation Plan (Section 3).

As emphasised earlier, the general focus of the research is on the longitudinal dimension - i.e. it is oriented to the collection of time-series data. As explained in the conceptual framework, the 'over-time' element is critical in the attribution of impacts (i.e. additionality) to specific stimuli as opposed to mere 'consequences' in the form of substitution or random effects. The type of longitudinal framework that relates most directly to the MCF is the panel study. This involves tracking the activities, behaviours, outputs etc. of a group of subjects over time. To be effective, the panel method requires reasonably active contact with a subject group throughout a given time period. In the present methodology, this contact involves assessing the activities of Fellows at a point in time close to the commencement of their Fellowship projects (i.e. near the beginning of the observation period as described in ACTIVITY ONE), and again after the follow-up interval.

As shown in Figure 3, this methodology provides for two research capabilities:

1. A core impact assessment monitoring capability is provided for assessing impacts in general areas across the whole range of Fellowships, and/or for assessing the impacts of specific groupings of Fellowships in selected circumstances. It is anticipated that the core monitoring capability will take time to construct and that it will be applicable mainly for Fellowships begun in FP5 and in subsequent Framework Programmes.

2. A periodic capability is provided for cases in which it is not possible to implement a monitoring process. For example, a single survey response is likely to be the only way to collect data from Fellows who completed projects in FP3 or the earlier phases of FP4. Also, periodic instruments may be the only viable alternatives where detailed research is required concerning specific types of impacts on individual stakeholder groups.

As explained above in Section 2, the potential scope and range of impact assessment activity in the MCF context is very large. Accordingly, no single set of questions in a single instrument will be adequate to capture all of the impact dynamics of the Fellowships. The approach taken in the following sections is to focus on setting out the research instrument development process in generic form, with specific reference to developing a core monitoring capability. The periodic capability will be introduced as a derivative of the core monitoring capability.
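To illustrate the panel principle in data terms, baseline (ex ante) and follow-up (ex post) responses can be joined on a Fellow identifier so that change is measured over each Fellowship's own observation period rather than across unrelated cross-sections. The record layout and field names below are hypothetical, chosen only to show the pairing step.

```python
def pair_panel_responses(baseline, follow_up):
    """Join ex ante and ex post survey records on fellow_id; only Fellows
    observed at both points contribute to the longitudinal comparison."""
    ex_post = {record["fellow_id"]: record for record in follow_up}
    pairs = []
    for record in baseline:
        later = ex_post.get(record["fellow_id"])
        if later is not None:
            pairs.append({"fellow_id": record["fellow_id"],
                          "ex_ante": record, "ex_post": later})
    return pairs

# Hypothetical records for a single Fellow.
baseline = [{"fellow_id": "MCF-0001", "publications": 2}]
follow_up = [{"fellow_id": "MCF-0001", "publications": 7}]
print(pair_panel_responses(baseline, follow_up))
```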

In each case, reference will be made to questions that have been field-tested already as part of the development process for the methodology. These questions are used to illustrate the mechanics and analytical potential of different question types and formats. By following the process laid out below, impact assessment researchers will be able to construct an operational range of periodic and/or monitoring instruments as and when required at any point in the life-cycle of the methodology. Details of the field-testing exercise are given in ANNEX II.

TASK ONE: Setting up a longitudinal survey capability

As explained in the previous section, the description of research instrument design and application is oriented to the development of a core monitoring capability that could be used to assess general impacts relating to all Fellowships. Obviously, the range of potential variables that could be pursued in this context is limited. Nevertheless, this kind of study has been chosen for presentation purposes because it provides the simplest vehicle for demonstrating the process of designing and applying research instruments.

On a discretionary basis, a wide range of longitudinal monitoring studies is possible, depending upon the priorities and the available resources. Monitoring studies could be set up to assess impacts with respect to a variety of specific factors, and these could run concurrently with general impact assessment exercises as might be applied to all Fellowships. Many factors might be included in studies like these in order to complement and supplement general impact assessments. Some of the more likely are:

- specific scientific fields
- selected countries or regions
- specific demographic criteria (e.g. age, gender)
- specific variables (e.g. commercial spin-offs)
- specific types of funding (e.g. MCF returning grants)

For example, specifically focussed monitoring studies could track impacts on the progress of one selected scientific field in one group of countries, or explore the transfer of skills and knowledge in a selected technical field from academic to commercial arenas in the EU generally, or in selected Member States. In all cases, the mechanics of setting up and operationalising a longitudinal monitoring study are the same.

Step One: Laying out the survey instruments

All of the survey instruments in this methodology adopt a standard basic format. This is illustrated in Figure 10. The basic data fields in this format are as follows.

The Survey Header contains all of the information required to identify the particular survey to which the instrument refers. This can be inputted as text or by using an alphanumeric code (e.g. 'A' = general monitoring study of all FP5 Fellowships, 'B' = periodic study of FP4 fellowships completed in 1996 etc.). In some cases, the same survey instruments may be distributed to different groups of Fellows and/or hosts. Where it is important to distinguish between the responses of these