TOOL #62. THE USE OF ANALYTICAL MODELS AND METHODS

1. INTRODUCTION

Models provide a framework to analyse and investigate the impacts of policy options ex ante (impact assessment) or ex post (retrospective evaluation). Their purpose is to provide information to support decision makers. All models are simplifications, but good models provide insights and understanding if used correctly. It is important, therefore, to ensure that the right model is selected and used in a manner that delivers policy-relevant results of the requisite quality.

Box 1. Simplified description of economic model types typically used in IA and evaluation studies (N.B. models often span the arbitrary boundaries presented below)

General equilibrium models: Allow for consistent comparative analysis by ensuring that the economic system and individual markets remain in general equilibrium in the long term. They are typically used to capture one-off and long-term effects of policy "shocks". They are able to produce disaggregated results, as such models only require one (base) year of data, and they provide detailed information on the policy impact on a particular variable of interest. Many CGE models suffer from a lack of historical validation. Some types of CGE model are also used for forecasting and scenario building.

Econometric models: These models are typically used to capture medium- to long-term effects of shocks and for forecasting. Modelled relationships are econometrically estimated from detailed historical time-series data rather than derived from economic theory. Such models can capture the process of dynamic adjustment and structural changes if these are not too substantial. They are not generally suitable for short-term analysis (but can in some cases span different time frames), and they are premised on the assumption that historical relations will remain valid in the future.

Partial equilibrium models: Single-sector or system models typically used for the detailed analysis of a specific economic sector (such as energy supply) or a combination of related economic sectors (such as the interaction of energy supply and a number of energy demand sectors) over the short, medium or long term. They can provide a high degree of disaggregation within the sector(s) covered. Such models are unable to capture interactions with other sectors and effects in other markets, but remain in equilibrium within the sectors in question. Factors relating to issues outside those sectors must be supplied exogenously, and interaction/feedback with the rest of the economy is ignored.

Micro-simulation models: Typically used for analyses at a detailed, disaggregated level over the short term, these usually focus on individuals, households or firms (e.g. the effect of a tax on income distribution), although they can provide insights at a higher level of aggregation. Such models require very detailed disaggregated data and may therefore be unable to cover all actors of interest, all resource flows, or important general equilibrium feedback effects.

Input-output models: Offer an alternative approach to large-scale economic modelling, typically used for short-term analysis of supply chains and of how industries are related. The models are built around economic input-output tables, which indicate the values of purchases between economic sectors in a particular year. Input-output tables are usually available at the national level, though they can be aggregated to regional and European levels.
Results are easy to interpret and few resources are necessary, but the models are simple, rely heavily on assumptions, and can only be used for static analysis as they do not take into account changes over time.

Integrated modelling approaches: Combine other relevant models or modules. The resulting integrated model can be applied to assess impacts in several policy areas simultaneously (e.g. combined analysis of air pollution emissions, atmospheric transport, ecosystem sensitivity and economic abatement costs can be used to develop cost-effective abatement strategies). Despite its strengths, an integrated model requires a great deal of resources to construct. The difficulties lie both in the theoretical approaches, as the component models may be based on different assumptions, and in the practicalities of linking different sets of computer code, model classifications, etc.
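To illustrate the input-output mechanics described in Box 1, the sketch below implements the standard Leontief quantity model, x = (I - A)^-1 d. It is a minimal sketch only: the three-sector technical coefficients and final demand figures are invented for illustration and do not describe any model in the Commission's inventory.

```python
import numpy as np

# Hypothetical 3-sector technical coefficients matrix A:
# A[i, j] = value of sector i's output needed to produce
# one unit of sector j's output (invented figures).
A = np.array([
    [0.10, 0.30, 0.05],   # agriculture
    [0.20, 0.15, 0.25],   # manufacturing
    [0.05, 0.10, 0.10],   # services
])

# Final demand vector d (e.g. household consumption, exports).
d = np.array([100.0, 250.0, 300.0])

# Leontief quantity model: total output x solves x = A x + d,
# i.e. x = (I - A)^-1 d. Solve the linear system directly.
I = np.eye(3)
x = np.linalg.solve(I - A, d)
print("Total sectoral output:", x.round(1))

# A static policy "shock": raise manufacturing final demand by 10%
# and compare total output. The model cannot capture price responses
# or adjustment over time - it is a snapshot for one base year.
d_shock = d * np.array([1.0, 1.1, 1.0])
x_shock = np.linalg.solve(I - A, d_shock)
print("Change in output from the shock:", (x_shock - x).round(1))
```

The single linear solve reflects why such models are cheap to run and easy to interpret, and equally why they remain static: nothing in the calculation responds to prices or to time.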

Modelling is a complex and technical activity that requires specific expertise. The JRC [747] can provide advice and support related to IA modelling activities. The JRC has established an online inventory [748] of all the models currently in use in the Commission, called MIDAS. This inventory has an export function that automatically generates a model description that can be used when preparing Annex 4 to the IA report, which describes the models used in the IA. The rest of this tool addresses key aspects of modelling in relation to preparing an impact assessment.

2. GENERAL PRINCIPLES ABOUT THE USE OF MODELS

Successful modelling requires communicating to decision makers how a model works and the strengths and limitations of the chosen modelling approach. As for all impact assessment methods, communicating and understanding uncertainty in model outputs is also vital. Quality assurance processes and, where relevant, uncertainty analysis can ensure that decision makers receive this key information.

3. QUALITY ASSURANCE (QA)

In any modelling exercise, time and resources should be allocated to quality assurance processes. The level of quality assurance should be proportionate to the impact and complexity of the model. Models are also developed and used by external organisations on behalf of the Commission; QA procedures should nonetheless be an integral part of the work such contractors undertake for the Commission services, and this may have to be included in the relevant terms of reference. QA will include:

- Testing by the model developer before a new model version is released. This might include checking the consistency of results against previous model exercises run with earlier versions of the model;
- Validation that the model can reproduce historical/statistical data, which gives confidence that the model can be used to assess policy scenarios (see the sketch below);
- A periodic review (of relevant parts of the model) by internal or external reviewers, particularly for complex models which may be the sole basis for evaluating policy options;
- A critical assessment of the assumptions used in the construction of the model, to determine whether they are realistic and relevant to the problem at hand.

This type of quality assurance does not have to be undertaken as part of each individual impact assessment, particularly if a model and modelling team are involved in preparing several such assessments in a short period of time. What counts is the reliability of the results used in each impact assessment.

[747] Unit JRC-I.1; JRC-I1-SEC@ec.europa.eu
[748] The modelling inventory and knowledge management system (MIDAS) is an inventory of models in use by the Commission services to support policy preparation. It describes the relationships between models, data, policies and people (internal site of the Commission).
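The validation item in the QA list above can be made concrete with a small backtest against history. The sketch below is illustrative only: the series are invented, and the mean absolute percentage error (MAPE) metric and 5% tolerance are assumed choices, not a prescribed Commission standard.

```python
import numpy as np

def mape(observed, modelled):
    """Mean absolute percentage error between two series."""
    observed = np.asarray(observed, dtype=float)
    modelled = np.asarray(modelled, dtype=float)
    return np.mean(np.abs((modelled - observed) / observed)) * 100.0

# Invented historical series (e.g. annual sectoral emissions)
# and the corresponding back-cast produced by the model.
observed = [102.0, 98.5, 101.2, 97.8, 95.1, 93.4]
modelled = [100.3, 99.1, 100.0, 96.5, 96.2, 92.8]

error = mape(observed, modelled)
print(f"MAPE over the validation period: {error:.1f}%")

# Illustrative acceptance threshold: flag the model for review
# if it misses history by more than 5% on average.
if error > 5.0:
    print("Model fails the historical validation check - investigate.")
else:
    print("Model reproduces history within the chosen tolerance.")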

A key element of risk management is ensuring that models are developed, managed and maintained by appropriately skilled and experienced staff. Furthermore, the model user should be fully capable of using the model and should understand the model's risks, limitations, major assumptions and outputs.

Transparency regarding models and modelling approaches can enhance the quality of models and their outputs. Publication of all or some relevant details of a model or its outputs can be a useful QA tool, because it facilitates effective scrutiny by engaging external experts.

The Commission may receive evidence for an impact assessment from external sources. In such cases it is important to apply a quality control process, or to ensure that one has been performed by the external contractor and that the results are available for examination.

4. SENSITIVITY AND UNCERTAINTY ANALYSES

A transparent and high-quality impact assessment should acknowledge and, to the extent relevant and possible, attempt to quantify the uncertainty in model results, because that uncertainty could change the ranking of, and conclusions about, the policy options.

Sensitivity analysis is about understanding how the uncertainty in the output of a mathematical model or system (numerical or otherwise) can be attributed to the different sources of uncertainty in the model inputs, which allows identification of the inputs that have the greatest effect on model results. The quantification of uncertainty in a model output through the propagation of uncertainty in the input variables is known as uncertainty analysis. Such analysis can give an estimate of the variance of the output.

Undertaking sensitivity analyses is likely to require extra computational, human or financial resources during the impact assessment. These resources may not be routinely available for particularly complex models. Nonetheless, those undertaking modelling studies should attempt, at least periodically and not necessarily for each and every impact assessment, to understand the influence of key model parameters on model results. The JRC can provide support on sensitivity and uncertainty analyses in impact assessments. [749]

There are two ways to quantify uncertainty. The first, "one-at-a-time", approach is more common and less complex than the second, "global", approach; however, the one-at-a-time approach provides unrealistically small estimates of model uncertainty in most cases. The choice between them will be determined by the complexity of the model, the available resources (including computing, time and personnel) and the importance of the policy intervention.

The first approach examines the variation in the model output as each input variable is changed one at a time, usually to its minimum and maximum plausible values. This one-at-a-time (OAT) approach is the most commonly used in Commission IAs.

[749] JRC I.01/SAMO
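As an illustration of the OAT approach just described, the sketch below perturbs each input of a toy cost model to its plausible minimum and maximum while holding the other inputs at central values, and records the swing in the output. The model and ranges are invented; note that, as stated above, OAT ignores interactions between inputs and therefore tends to understate overall uncertainty.

```python
import numpy as np

def cost_model(price, volume, abatement_rate):
    """Toy policy-cost model (invented for illustration)."""
    return price * volume * abatement_rate

# Central estimates and plausible (min, max) ranges per input.
central = {"price": 50.0, "volume": 1000.0, "abatement_rate": 0.30}
ranges = {
    "price": (30.0, 80.0),
    "volume": (800.0, 1200.0),
    "abatement_rate": (0.20, 0.45),
}

base = cost_model(**central)
print(f"Central cost estimate: {base:,.0f}")

# One-at-a-time (OAT): vary each input to its min and max in turn,
# keeping all other inputs fixed at their central values.
for name, (lo, hi) in ranges.items():
    inputs_lo = dict(central, **{name: lo})
    inputs_hi = dict(central, **{name: hi})
    swing = cost_model(**inputs_hi) - cost_model(**inputs_lo)
    print(f"{name:>15}: output swing {swing:,.0f}")
```

Ranking the inputs by the size of their output swing gives a first, rough indication of which parameters deserve the most scrutiny.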

Box 2. A simple example of sensitivity analysis

A model is built to estimate the potential economic cost of a chemical accident at a proposed plant in a European region, including trans-boundary effects. It examines the number of people and businesses located within a certain radius and estimates the total value of lost property and life corresponding to different classes of explosion or fire. For the sensitivity analysis, the output variable of interest is the total cost of the damage. Uncertain inputs include, among others, medical costs per individual, the total population within the impact radius, the size of the impact radius, and the assumed proportion of people and businesses affected. Using expert opinion and available statistics, probability distributions are assigned to each input, and a sample consisting of several thousand runs of the accident model is constructed. The sample is used to run the model, and the resulting output vector is used to estimate sensitivity. It is found that, with 95% confidence, the estimated cost lies between €2Bn and €20Bn. Furthermore, the most influential input variable is the stock of flammable material, causing 38% of the variance in the cost, followed by engineering variables accounting for 15% of the variance, with a set of meteorological parameters (wind speed and direction) accounting for most of the remaining variance.

A global approach to quantifying uncertainty allows for the simultaneous exploration of all sources of known uncertainty and can capture nonlinearities and interactions between model inputs. In global uncertainty and sensitivity analysis (GSA), probability distributions are assigned to the uncertain model inputs. This uncertainty is then propagated through the model by running it repeatedly with different input values, which provides probability distributions of the model outputs. In particular, the variance of each model output is used as a measure of uncertainty, and the contribution of each input to the output variance is a measure of sensitivity. Software is available to simplify such analyses. Sensitivity analysis can also be designed to address higher-level model uncertainty, such as the impact of different model specifications or model selections, which can be propagated through the analysis via model-averaging procedures.

The basic steps in performing GSA are as follows (a worked sketch is given after Box 3):

(1) Define a variable of interest for the analysis. This variable should be the main model output of interest to the impact assessment, and can be the result of a suitable aggregation of spatially distributed or time-dependent model outputs. An example might be the net monetary benefit.

(2) Identify all model inputs which are affected by uncertainty, in consultation with experts and stakeholders as appropriate. Inputs can be of various natures, e.g. scalar variables, time series or spatially distributed maps.

(3) Characterise the uncertainty of each selected input by assigning it a probability distribution, using all available information such as experiments, estimations, physical-bounds considerations and expert opinion. This is a particularly important step which may require significant resources. Extended peer review should be considered to ensure quality in the treatment of uncertainty.

(4) Generate a sample from the previously defined probability distributions. The sample is a matrix which specifies the input values to use for each model run (of a large number of such runs) and is designed to allow the calculation of sensitivity; it is generated so as to explore the full extent of the uncertainty, based on the input distributions specified in the previous step. Such samples can be generated with a number of software packages.

(5) Run the model many times, using the sampled input values for each run as specified in the previous step, and record the value of the output variable of interest for each run. This process is usually accomplished automatically using computer software.

(6) Use the results of the model runs to estimate the uncertainty in the model output as well as the sensitivity. Suitable software will yield the fractional contribution of each input to the output variance.

Box 3. IA on biofuels and indirect land use change (SEC(2012) 343): Monte Carlo analysis to illustrate the range of uncertainty of ILUC GHG factors

The IFPRI-MIRAGE-BioF model was used to model the consumption of biofuels in the EU and to estimate the emissions of greenhouse gases associated with indirect land-use change for a range of biofuel feedstocks. The model is a general equilibrium model, which encompasses all economic sectors and markets and their interactions at a global scale. The figure below shows the estimated indirect land-use change emissions in gCO2/MJ for a range of different biofuel feedstocks. The model was combined with a Monte Carlo simulation to better describe the probability distribution of the uncertainty associated with the key model variables. More information on this analysis can be found in Annex XI of SEC(2012) 343.

[Figure 6: Results of the Monte Carlo analysis: estimated indirect land-use change emissions (gCO2/MJ) under the current trade policy scenario. The bars indicate the 1st and 99th percentiles, while the boxes indicate the 25th and 75th percentiles.]
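As a worked sketch of steps (1) to (6), the example below uses the open-source SALib package, which implements Saltelli sampling and Sobol variance decomposition. SALib is an assumption made for illustration (it is not necessarily the software used by the JRC), and the toy cost model, which loosely echoes Box 2, is invented together with all its figures.

```python
import numpy as np
from SALib.sample import saltelli  # newer SALib releases expose this as SALib.sample.sobol
from SALib.analyze import sobol

# Step (1): output of interest - total accident cost. Toy model loosely
# echoing Box 2; the functional form and all figures are invented.
def accident_cost(stock, population, radius):
    return 0.5 * stock * population * radius ** 2

# Steps (2)-(3): uncertain inputs with (here, uniform) distributions
# standing in for the expert-elicited distributions discussed above.
problem = {
    "num_vars": 3,
    "names": ["stock", "population", "radius"],
    "bounds": [[10.0, 100.0], [1e4, 5e4], [0.5, 3.0]],
}

# Step (4): generate a sample designed for Sobol index estimation.
X = saltelli.sample(problem, 1024)

# Step (5): run the model once per sampled input row and record the output.
Y = accident_cost(X[:, 0], X[:, 1], X[:, 2])

# Step (6): estimate output uncertainty and variance-based sensitivity.
print(f"Mean cost: {Y.mean():.3g}; central 95% interval: "
      f"{np.percentile(Y, 2.5):.3g} to {np.percentile(Y, 97.5):.3g}")
Si = sobol.analyze(problem, Y)
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:>10}: first-order index {s1:.2f}, total index {st:.2f}")
```

The first-order index measures each input's own contribution to the output variance, while the total index also captures its interactions with other inputs, which is precisely what the OAT approach misses.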

Sensitivity analysis following the above steps can be complicated, impractical or even infeasible. For example, large computer models require sufficient computing power and may take a long time to run, and there may be large numbers of uncertain model inputs, or correlations between input variables. Techniques exist to deal with these problems, and the JRC can provide assistance with them.

Sensitivity analysis can only address uncertainties for which there is quantitative information characterising that uncertainty. When this information is missing, when a deeper assessment of the framing of the analysis is needed, when there is major disagreement among stakeholders about the nature of the problem, or when there is a lack of time or computational resources, then sensitivity auditing is more suitable (sensitivity analysis is still advisable as one of the steps of sensitivity auditing).

Sensitivity analysis measures how uncertainty in model input variables contributes to the uncertainty in the model output; it is therefore a numerical analysis which requires uncertainties to be quantified. Sensitivity auditing, on the other hand, is a wider consideration of the effect of all types of uncertainty, including structural assumptions embedded in the model and subjective decisions taken in the framing of the problem. Sensitivity auditing includes sensitivity analysis as part of its procedure. The ultimate aim is to communicate openly and honestly the extent to which particular models can be used to support policy decisions, and what their limitations are. Modellers could usefully consider the following principles:

- Before entering into contractual arrangements with third-party consultants, consider the full spectrum of models available in the literature for tackling the problem, and whether the complexity of the model is justified by the quality of the information used to calibrate it, i.e. check that a large model is not being used rhetorically to convey a spurious impression of accuracy.

- Critically examine all model assumptions. Are there implicit or hidden assumptions which a third party might point to? Would it be possible to evaluate the impact of taking a different approach to the issue?

- Be careful not to over- or under-estimate uncertainties in model input parameters. In some cases the uncertainty assigned to parameters can be cross-checked against values in published research, or given a second opinion by experts. Where uncertainty is particularly difficult to quantify, it may be better to discuss it in qualitative terms than to give a spurious impression of accuracy.

- Aim for transparency: where relevant and possible, the model calculations should be checkable by third parties.

In general, sensitivity auditing stresses the idea of honestly communicating the extent to which model results can be trusted, taking into account as far as possible all forms of potential uncertainty, and anticipating criticism by third parties. In particular, one should avoid giving an impression of false confidence through quantification at all costs: in some cases there is simply not enough data, or the process is too complex, to give a meaningful quantitative prediction.

5. TRANSPARENCY

When IA analysis relies on modelling or the use of analytical methods, the model should be documented in the corporate modelling inventory MIDAS, and the IA report should include a dedicated annex presenting the following information:

- A brief description of the main model/method, which addresses:
  - the model developer and the nature (public/private/open source) of the model;
  - the model structure and modelling approach, with any key assumptions, limitations and simplifications (where these are not explained in the description of the baseline in the IA report);
  - the intended field of application and its appropriateness for the specific impact assessment study presented;
  - model validation and peer review, with relevant references;
  - the extent to which the content of the model and the input data have been discussed with external experts.

- An explanation of the likely uncertainty in the model results and of the robustness of those results to changes in underlying assumptions or data inputs. Where this is not possible, at least a qualitative indication of the uncertainty and of its relevance to the analysis and comparison of policy options should be provided.

- The steps taken to assure the quality of the modelling results presented in the IA.

- A concise description of the baseline used in the modelling exercise in terms of the key assumptions, the key sources of macroeconomic and socio-economic data, the policies and measures the baseline contains, and any assumptions about those policies and measures (such as the extent to which they are deemed implemented by the Member States, or their estimated impact following implementation).

6. USE OF CONSISTENT HORIZONTAL ASSUMPTIONS AND FORECASTS

The impact assessment process requires the construction of a baseline scenario which incorporates all existing policies and measures and shows how a particular problem would evolve in the future without further policy intervention. The impacts associated with each policy option should then be compared against this baseline. Developing a model baseline implies:

- deciding on the assumptions used to represent the existing policy framework for the relevant sector at Union and Member State level;

- making assumptions, over a defined future time horizon, about the evolution of important macroeconomic and socio-economic variables such as GDP, demographic structure, energy prices, etc.

Many different models are used in the Commission, covering a wide range of policy areas. Discussion in the IA Steering Group will help ensure that the most appropriate information sources and assumptions are used in constructing model baselines.

For example, population projections (EUROPOP) [750] and GDP projections [751] are regularly produced by ESTAT and DG ECFIN, while projections on energy, transport and GHG emissions are regularly prepared by DGs ENER, CLIMA and MOVE.