MCDA for Health Care Decisions - Emerging Good Practices: Report 2 of the ISPOR MCDA Task Force


This is the second ISPOR MCDA Task Force Report. It builds on "MCDA for Health Care Decision Making - An Introduction: Report 1 of the ISPOR MCDA Emerging Good Practices Task Force". Both draft reports are available on the ISPOR MCDA Task Force webpage, GRP.asp

ISPOR MCDA Task Force Co-Chairs (alphabetical order)

Nancy Devlin, PhD, Director of Research, Office of Health Economics, London, UK
Maarten IJzerman, PhD, Professor of Clinical Epidemiology & Health Technology Assessment (HTA) and Head, Department of Health Technology & Services Research, University of Twente, Enschede, The Netherlands
Kevin Marsh, PhD, Senior Research Scientist and EU Director of Modelling and Simulation, Evidera, London, UK
Praveen Thokala, MASc, PhD, Research Fellow, University of Sheffield, Sheffield, UK

Task Force Leadership Group (alphabetical order)
Note: inclusion as authors t.b.c., depending on contribution to authorship

Meindert Boysen, MSc, Programme Director Technology Appraisals, PASLU and HST, National Institute for Health and Clinical Excellence (NICE), Manchester, UK
Zoltán Kaló, MSc, MD, PhD, Professor of Health Economics, Department of Health Policy and Health Economics, Eötvös Loránd University (ELTE); Founder & CEO, Syreon Research Institute, Budapest, Hungary
Thomas Lönngren, MSc (Pharm), Strategic Advisor, NDA Group AB, UK and Sweden
Filip Mussen, MSc, PhD, Vice-President, Regional Regulatory Affairs, Janssen Pharmaceutical Companies of Johnson & Johnson, Antwerp, Belgium
Stuart Peacock, MSc, PhD, Co-Director, Canadian Centre for Applied Research in Cancer Control (ARCC); Deputy Head of Cancer Control Research, British Columbia Cancer Agency; and Professor, School of Population and Public Health, University of British Columbia, Vancouver, Canada
John Watkins, PharmD, MPH, BCPS, Pharmacy Manager, Formulary Development, Premera Blue Cross, Bothell, WA, USA

PLEASE NOTE: The task force's final recommendations (both reports) will be presented during the Third Plenary Session of ISPOR's 18th Annual European Congress in Milan on Wednesday, 11 November. YOUR feedback will be very important. Every comment is read and discussed.

1. INTRODUCTION

Health care decisions are often complex, invariably requiring the assessment and valuation of options that differ on multiple dimensions. Stakeholders may disagree on the relative value of these dimensions, and decisions are made under conditions of uncertainty. Moreover, relying on judgments or informal processes can lead to poor quality decisions. One answer to improve health care decision making is multi-criteria decision analysis (MCDA). It supports consistent and rigorous decision making in health care, and makes the basis for these decisions transparent to other stakeholders.

MCDA is an umbrella term for several analytical techniques developed over the last several decades with reference to many disciplines, such as operations research, psychology, economics and decision sciences (Belton and Stewart, 2002). Despite its frequent use, there is currently little guidance on how to select and implement the appropriate technique in health care. Several literature sources are useful to researchers trying to understand the differences between MCDA methods and how to select and implement these methods (Guitouni & Martel, 1998; Velasquez & Hester, 2013; De Montis et al., 2000 and 2005; Getzner et al., 2005; Ivlev et al., 2014; Keeney and von Winterfeldt, 2009; Keeney, 2002; Dodgson et al., 2009; Olson et al., 1999). While this task force report extracts and synthesizes the insights from these sources, it goes a step further, providing accessible guidance on good practices for using MCDA in health care, including a comprehensive checklist for users.

Although many steps in MCDA are comparable and independent of the specific method, there are method choices that need to be addressed. Throughout the report, the theoretical, methodological and practical differences between methods are outlined, good practices for using MCDA are identified, and the implications of using MCDA for different decision problems are elaborated. This guidance covers a wide range of decisions, including portfolio investment, market authorization, reimbursement and coverage, and clinical decisions. The differences between these decisions and the implications for good practice in the use of MCDA are discussed. For instance, the intended use of the MCDA (an aggregate measure of benefit, or simply a rank-order of alternatives to support decisions) has implications for: a) the relevant underlying theoretical framework; b) the elicitation procedure, whether facilitated group discussions or survey-based analyses; and c) the approaches used to analyse alternatives' performance and obtain scores.

The report's primary focus is on the simple weighted-sum model, a form of value measurement. This is by far the most prevalent MCDA technique in health care (Marsh et al, 2014). It requires a number of assumptions, which are elaborated throughout this report. One such assumption is the commensurability of value criteria: that stakeholders are comfortable trading off changes in criteria against one another. There has been little formal testing of this assumption, and there is invariably unspoken acceptance of it among those applying MCDA in health care. Where this assumption is violated, other approaches are appropriate, in particular outranking methods such as PROMETHEE and ELECTRE (Thokala and Duenas, 2012).
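For reference, the simple weighted-sum model can be written compactly. In the notation below (introduced here for exposition, not drawn from the task force reports), $a$ is an alternative, $s_i(a)$ is its score on criterion $i$, and $w_i$ is the weight of criterion $i$, normalized to sum to one:

$$V(a) = \sum_{i=1}^{n} w_i \, s_i(a), \qquad \sum_{i=1}^{n} w_i = 1.$$

Alternatives with higher $V(a)$ are preferred; the assumptions this requires, such as commensurability and preferential independence, are discussed throughout Section 2.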
2. GOOD PRACTICE GUIDELINES: METHODOLOGY

While many methods are available to implement value measurement approaches, most are based on a set of common steps (Figure 1) (Devlin et al, 2015; Dodgson et al, 2009). This section identifies good practice when implementing each step.

Figure 1: Common steps in MCDA: define the decision problem; identify criteria; measure performance; score and weight criteria; aggregation; dealing with uncertainty; interpretation. [figure]

These steps need not be implemented in the order they are presented; it is possible to envisage steps being undertaken iteratively. For instance, further insight into criteria weights may be generated during deliberations about the results of the MCDA. Similarly, it may not be necessary to complete all the steps to improve decision making. Defining criteria and measuring performance against these criteria ("partial MCDA") may be sufficient, and explicit scoring and weighting may be unnecessary, especially where performance measurement reveals a dominant alternative or clear-cut trade-offs. When this is not the case, it is good practice to undertake the remaining steps of the MCDA.

There are also studies that do not separate the scoring and weighting steps. This is the case with decompositional approaches, such as discrete choice experiments (see section 4). Another approach that does not separate scoring and weighting is the identification of performance categories within each criterion, together with a system to score each performance category. These scores are then aggregated to provide an overall value estimate, essentially reflecting both scores and weights in the scoring system (Hansen et al, 2012; NHS England, 2015; Schnipper et al, 2015).

Defining the decision problem

The appropriate MCDA approach will depend on the decision problem. Therefore, the first step in designing an MCDA should involve developing a clear description of the decision problem, including: decision makers' objectives; alternative options; and relevant stakeholders. The objectives of decision makers will determine how sophisticated the MCDA scoring and weighting method needs to be. For instance, in some cases only a rank-order of alternatives is needed, while in other cases a societal value function is required.

The research team should consult widely, through interviews and panel sessions with subject experts and stakeholders, and review previous decisions when defining the decision problem. The resulting description of the decision problem should be validated with stakeholders and subject experts.

Selecting and structuring the criteria

Decision criteria are the factors that are important in comparing the alternatives. Criteria can be identified and defined from several sources, including: the mission statements of the decision-making organisation; documents describing previous decisions; evaluations undertaken to support related decisions; treatment guidelines; and stakeholder and expert consultation. These sources will usually provide a long list of potential criteria, which should then be shortened by applying the requirements listed below. The rationale for excluding particular criteria should be recorded and reported, and the final criteria list should be validated with stakeholders and subject experts. If consensus cannot be reached on the identity and definition of criteria, disagreements should be explored by re-running analyses based on different criteria.

Criteria used in a simple weighted-sum value measurement model should meet the following requirements:

1. Completeness: the criteria should capture all factors important to measuring stakeholders' objectives.
2. Non-redundancy: criteria should be removed if they are unnecessary or judged unimportant, for example where all options are expected to achieve the same level of performance on a criterion. However, this is only a redundancy when: a) the objective is to rank options; and b) new options not displaying the same performance will not be considered at a later date. When the objective is to value options, or where as-yet-unknown options will be evaluated using the MCDA, such criteria should be retained.
3. Non-overlap: criteria should be defined to avoid double counting, which can give too much weight to a value dimension. It is important that overlap is not confused with correlation: criteria can be correlated while still measuring separate values.
4. Preferential independence: the weight for a criterion should be independent of the scores on other criteria. For instance, including separate criteria for efficacy of treatment and severity of disease may violate this requirement, as the value attached to the efficacy of a treatment may depend on the severity of the disease. In such circumstances, either dependent criteria can be combined into a single criterion, or more complex, non-linear functions for aggregating criteria can be adopted (see Section 3).

There is no rule as to how many criteria should be included in an analysis. Nonetheless, it is important to be aware of the practical implications of including many criteria, including the time and cognitive burden involved in completing weighting exercises and the increased probability of inconsistent responses to weighting exercises. It is good practice to include as few criteria as is consistent with making a well-founded decision.

The research team should be aware of the potential biases that may invalidate the identification of criteria (Montibeller and von Winterfeldt, 2014). Availability bias increases the probability that the criteria list focuses on events that are easily recalled. Myopic problem representation bias results from participants relying on oversimplified mental models. It is important for researchers to encourage participants to think harder about the criteria list, prompting them with extreme or unusual scenarios, stimulating them with statistics, and encouraging group work.

It can be useful, especially where there are a large number of criteria, to organize criteria into a hierarchical structure using a value tree, which clusters criteria into sets of distinguishable components (see Berkeley and Humphreys, 1982; Stillwell and von Winterfeldt, 1987; von Winterfeldt and Fasolo, 2009; Hughes et al, 2013). This can help verify whether the criteria are appropriate to the problem, as well as facilitate the weighting process by assessing weights first within groups of criteria, and then between groups.

Once criteria have been identified, they should be defined clearly enough to allow their measurement, scoring and weighting. Most MCDA methods are capable of combining quantitative data, whether objective (e.g. probability of experiencing an adverse event) or subjective (e.g. patient-reported outcomes, subjective wellbeing), alongside qualitative data (e.g. expert opinion on the portability of an inhaler).

There are several challenges to achieving this outcome (Montibeller and von Winterfeldt, 2014). Proxy bias occurs when proxy outcomes receive a larger weight than the fundamental outcome. Fundamental outcomes are also most likely to meet the requirements for the use of the simple weighted-sum approach (Keeney and von Winterfeldt, 2009). Proxy outcomes should thus be avoided. Criteria should be defined in terms of absolute scales, rather than change estimates such as odds ratios, which risk producing invalid scores and weights. Whether criteria are presented as gains or losses will impact their scoring and weighting; the status quo should thus be clearly identified, and criteria defined consistently as gains or losses.

The definition of criteria should include the end points of the scales. Two factors will influence the definition of end points, which may need to await the measurement of the performance of options. First, will the criteria be applied to repeat decisions? Many uses of MCDA in health care will involve the development of bespoke, decision-specific criteria lists. However, certain applications, such as HTA, may require a common list of criteria to be applied in multiple assessments. The use of criteria over multiple decisions requires that global (or fixed) scales are employed, anchored at their end points by the best and worst performance that could realistically occur. Local (or relative) scales are instead anchored at their end points by the best and worst performance of the options currently under evaluation. The definition of global scales requires more work than local scales. Second, how will uncertainty be propagated through the MCDA? Where Monte Carlo simulation is used and a local scale is being adopted, the scale should cover the variation in performance; a rule of thumb is to use a range defined by the 95% confidence intervals of the performance of options on the criteria (Tervonen et al, 2015).

Performance measurement

Once the criteria are identified and defined, it is necessary to measure the performance of the alternatives against them. MCDA can incorporate a range of sources of performance measurement, from evidence synthesis to expert opinion. The method should conform to the broad principles of evidence-based medicine and to local methods guidelines. Often such guidelines will recommend that a network meta-analysis (NMA) is undertaken to synthesize evidence on performance. NMAs invariably report relative effect estimates, whereas, to facilitate scoring and weighting, it is recommended that criteria are defined using absolute scales. The results of an NMA therefore often need to be translated into absolute values by combining them with reliable estimates of baseline effect.
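As an illustration of this translation, the sketch below converts an odds ratio from an NMA into an absolute probability using a baseline (comparator) risk. The function name and all numbers are invented for this example; they are not drawn from the report.

```python
def absolute_prob_from_or(baseline_prob: float, odds_ratio: float) -> float:
    """Convert a relative effect (odds ratio vs. comparator) into an
    absolute event probability, given the comparator's baseline risk."""
    baseline_odds = baseline_prob / (1.0 - baseline_prob)
    treatment_odds = odds_ratio * baseline_odds
    return treatment_odds / (1.0 + treatment_odds)

# Illustrative values: 30% baseline response rate, OR = 1.8 from an NMA.
p_treatment = absolute_prob_from_or(baseline_prob=0.30, odds_ratio=1.8)
print(f"Absolute response probability on treatment: {p_treatment:.2f}")  # 0.44
```

The same logic applies to other relative measures (a risk ratio, for instance, is simply multiplied by the baseline probability), and the reliability of the baseline estimate should itself be reported and tested in sensitivity analysis.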

MCDA is capable of combining quantitative data (whether objectively or subjectively assessed) and qualitative data. Objective measurement of criteria performance has the advantage of distinguishing the factual performance measurement from the value judgments involved in scoring and weighting. However, where data are not available or the criteria are subjective, performance can be estimated using expert opinion.

The results of the performance measurement should be reported in the form of a performance matrix, showing the performance of each alternative against each criterion (EMA, 2012). This should include estimates of average performance, the variance in these estimates, and the sources of data.

Scoring and weighting criteria

The objective of scoring and weighting is to capture stakeholders' priorities and preferences for criteria. Weights capture priorities or preferences between criteria; scores capture priorities or preferences within a criterion. By combining these two pieces of data, it is possible to assess the relative importance of any change in performance within any of the criteria.

Scores differ from performance measures in two ways. First, scores are often used to translate performance measures using different units for each criterion onto a common scale, usually a 0-100 scale. Second, scores incorporate priorities or preferences for changes in performance, such that the same change along the scoring scale (e.g., 10-20 or 60-70) is equally preferred.

There are many methods for eliciting scores and weights. As the meaning and interpretation of scores and weights differ between the methods, considerable care needs to be taken to ensure that the scores and weights are understood by the participants and are consistent with the objectives of the MCDA.

Typology of scoring and weighting methods

Table 1 summarizes a typology of scoring and weighting methods. It covers the stated-preference approaches employed by the majority of MCDAs undertaken in health care, though revealed-preference approaches could also be used to estimate decision makers' preferences based on retrospective analysis of decisions (Dakin et al, 2014).

Table 1: Typology of stated-preference techniques used in value measurement MCDA models (category; method; examples of MCDAs in health care)

Decompositional, choice-based
- DCE / conjoint analysis: Baltussen et al. (2007); Marsh et al. (2012); Defechereux et al. (2012); Mühlbacher et al. (2013)
- PAPRIKA: Hansen and Ombler (2008)
- Best-worst scaling: Swancutt et al. (2008); Al-Janabi et al. (2011)
Decompositional, matching
- Standard gamble: Montgomery et al. (2001)
- Time trade-off: Sommers et al. (2007)
Compositional, ranking
- Ranking: Zuniga et al. (2009)
Compositional, direct rating
- Visual analogue scales: Goetghebeur et al. (2012); Wilson et al. (2007)
- Point allocation: Sussex et al. (2013); Kroese et al. (2010)
- SMART: Bots and Hulshof (2000); van Til et al. (2008b)
Compositional, pairwise
- AHP: Dolan et al. (2005); van Til et al. (2008a); Hummel et al. (2012)
Compositional, trade-off
- Swing weighting: EMA (2011); Felli et al. (2009); Tervonen et al. (2015)
Compositional, matching
- Bi-section method*
- Difference method*
*Indicates methods used only for scoring.

Stated-preference methods can be broadly classified as compositional or decompositional (see Helm et al, 2004; Weernink et al, 2014):

- Compositional methods involve eliciting decision makers' priorities or preferences for changes within and between criteria, which are later aggregated to estimate the overall composite value of an option. When compositional methods are applied to generate weights for the ranges of performance relevant to a set of alternatives (the "swing" in performance), they are sometimes referred to as swing weighting.
- Decompositional methods involve participants valuing options as a whole, from which weights and scores for criteria are derived, in the form of coefficients, using regression-based techniques.

The remainder of this section focuses on selecting techniques from both the compositional and decompositional groups. The implementation of compositional techniques is covered; the implementation of decompositional methods is not, because good practice guidelines are already available, e.g., the ISPOR good research practices on implementing discrete choice experiments (Johnson et al, 2013; Bridges et al, 2011).

Selection between scoring and weighting methods

The most appropriate method will depend on the objective of the analysis, the time available to undertake the analysis, and the stakeholders involved. The methods can be distinguished according to two factors that will impact how appropriate they are in different circumstances: theoretical relevance and cognitive burden.

Theoretical relevance

Methods emanate from different theoretical traditions, and it is important to ensure that the theory underlying the method is acceptable to decision makers. Choice-based, matching and trade-off methods, and direct rating methods such as SMART, are based on multi-attribute utility theory (MAUT) or multi-attribute value theory (MAVT), building on the work of von Neumann and Morgenstern (1947), Krantz et al (1971), Keeney and Raiffa (1976), and Dyer and Sarin (1979). The analytic hierarchy process (AHP) was developed on a different theoretical basis by Saaty (1986, 2008). An overview of the history of developments in MCDA is provided by Köksalan et al (2011).

A key difference between these theoretical perspectives is their assumption about the transitivity of preferences (Vargas, 1987; Guitouni and Martel, 1998; De Montis et al, 2005). MAUT/MAVT requires that preferences are transitive. AHP relaxes this assumption, requiring only acceptance of the reciprocal property: that the strength of one criterion's dominance over a second is inversely proportional to the second's dominance over the first. An implication of AHP's rejection of transitivity as a basis for modelling decision makers' preferences is that: a) responses to AHP questions are subject to inconsistency, a non-uniformity in respondents' answers; and b) the results of AHP are subject to rank reversal, changes in the rank ordering of options, e.g., when another option is added to the list of alternatives (REF). The analyst is required to decide how to define excess inconsistency and how to deal with it. Saaty defined a consistency ratio of more than 0.1 as excessive and recommended that, above this level, new pairwise comparisons should be undertaken to reformulate values (De Montis et al, 2005); a sketch of this check is given at the end of this subsection. However, the literature contains different thresholds for defining excessive inconsistency and different approaches for dealing with it, including: asking respondents to complete the questions again, asking them to check and revise their answers, or excluding their responses (REF).

Methods based on MAUT/MAVT vary in their correspondence with its theoretical requirements that scores display interval properties and that weights are scaling constants. Scores have interval-scale properties when equal increments on a scoring scale represent equal increments of value. This is achieved with partial value functions, point allocation methods, and the coefficients generated by DCEs. Approaches that adopt ordinal scales do not necessarily display interval properties. Weights are more likely to be scaling constants, or value trade-offs, when a method elicits trade-offs directly, rather than assessments of the importance of criteria, and when the weighting exercise considers the range of performance being evaluated. These conditions are best met by the swing weighting and DCE approaches.

Another element of theoretical relevance is whether criteria are uncertain and the importance of this uncertainty (Keeney and von Winterfeldt, 2009). Where outcomes are not characterized by uncertainty, or where decision makers display little risk aversion, scoring and weighting can take the form of preferences for certain consequences. Otherwise, preferences will need to be elicited for uncertain prospects, for instance using the standard gamble.

The importance of the theoretical relevance of a method will depend on the objective of the analysis. It is paramount if the objective is to produce a precise estimate of the value of an option, e.g., when informing pricing decisions or designing an HTA methodology. When the objective is to rank options, a lower level of theoretical relevance may be acceptable, only requiring the determination of whether the value of one option is greater than that of another.
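To make the inconsistency check concrete, the sketch below computes Saaty's consistency ratio for a small pairwise comparison matrix. The matrix values are invented for illustration; the random-index constants are Saaty's published values.

```python
import numpy as np

# Saaty's random index (RI) for matrix sizes 1..10.
RANDOM_INDEX = [0.0, 0.0, 0.58, 0.90, 1.12, 1.24, 1.32, 1.41, 1.45, 1.49]

def consistency_ratio(pairwise: np.ndarray) -> float:
    """CR = CI / RI, where CI = (lambda_max - n) / (n - 1)."""
    n = pairwise.shape[0]
    lambda_max = max(np.linalg.eigvals(pairwise).real)  # principal eigenvalue
    ci = (lambda_max - n) / (n - 1)
    return ci / RANDOM_INDEX[n - 1]

# Illustrative 3x3 reciprocal matrix: criterion 1 moderately dominates 2 and 3.
A = np.array([[1.0, 3.0, 5.0],
              [1 / 3, 1.0, 2.0],
              [1 / 5, 1 / 2, 1.0]])
cr = consistency_ratio(A)
print(f"CR = {cr:.3f} -> {'acceptable' if cr <= 0.1 else 'revisit judgments'}")
```

Whether 0.1 is the right threshold, and what to do with inconsistent respondents, remain the analyst's judgment calls discussed above.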

Cognitive burden

Some types of data are easier for participants to provide than others. Ordinal data, such as that required by ranking techniques or DCEs, are easier to provide than cardinal data. Some methods require more data to be considered at once: pairwise comparison of criteria, as used in AHP, is easier than the simultaneous comparison of multiple criteria required by DCEs. Some techniques provide more support to participants by breaking the problem down into components (achieved by direct methods) or by facilitating knowledge sharing (easier in a workshop setting).

Implementing scoring and weighting methods

The weighting challenge can be simplified where scoring functions are known to be linear, as this only requires estimating the relative importance of a single unit change on each criterion (Keeney and von Winterfeldt, 2009). Scoring functions are likely to be linear when: a) a criterion is a fundamental objective of value in itself (e.g. number of lives saved); or b) the range being valued is very small (e.g. where cost is small compared to the decision maker's budget) (Keeney and von Winterfeldt, 2009).

Eliciting preferences poses a number of challenges, and all elicitation techniques are subject to potential biases (Montibeller and von Winterfeldt, 2014):

- Anchoring bias occurs when estimates of numerical value are inadvertently influenced by an unrelated initial value, an anchor. It is recommended that researchers avoid introducing anchors or provide multiple and counter-anchors.
- Certainty effect bias occurs because participants often prefer sure things to gambles with similar utility. It is recommended that sure things are avoided, or that elicitation questions consistently use sure things or gambles across criteria.
- Equalizing bias occurs when participants allocate similar weights to all criteria; it can be mitigated by ranking criteria first or by eliciting weights hierarchically.
- Splitting bias occurs when the grouping of criteria impacts the weights they are given; it can be mitigated by avoiding splits with large weight ratios and using ratio judgments rather than direct estimation.
- Scaling bias occurs for multiple reasons, including: a) underestimating large differences; b) using most of the range, whatever its size; c) producing a symmetric distribution of responses centred on the mid-point of the range; d) the tendency to use all parts of the response scale equally; and e) interpreting scales inappropriately, such as treating ordinal scales as dichotomous (Weber and Borcherding, 1993).

Participants should be provided with sufficient information about the problem and the MCDA method to provide scores and weights effectively, including detail on how their data will be used in the analysis. This can be achieved by training participants on the methods being employed. Good practice guidelines on the effective communication of data should be followed (Fischhoff et al, 2011). It can be helpful to elicit stakeholders' reasons for their value judgments, to check their consistency with the value functions implied by their responses. When stakeholders are unable to provide precise responses to scoring and weighting exercises, ranges can be elicited.
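To make these steps concrete, the sketch below combines linear partial value functions (performance rescaled to 0-100 over the criterion's range) with a simple swing-weighting exercise. The criteria, ranges and point judgments are all invented for illustration.

```python
def linear_score(x: float, worst: float, best: float) -> float:
    """Linear partial value function: rescale performance to 0-100
    between the worst and best levels that anchor the criterion's scale."""
    return 100.0 * (x - worst) / (best - worst)

# Swing weighting: the most important worst-to-best swing gets 100 points;
# other swings are rated relative to it, then points are normalized.
swing_points = {
    "response rate (30% -> 50%)": 100,   # reference swing
    "serious adverse events (10% -> 5%)": 60,
    "dosing convenience (IV -> oral)": 20,
}
total = sum(swing_points.values())
weights = {criterion: pts / total for criterion, pts in swing_points.items()}

print(weights)                          # weights of ~0.556, 0.333, 0.111
print(linear_score(0.40, 0.30, 0.50))   # 50.0: a 40% response rate scores 50
```

Note that the weights are tied to the stated swings: if the performance ranges change, the weighting questions must be asked again.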

Scores and weights should be provided by the stakeholders whose values are defined as relevant to the decision problem. It is important that participants are representative of the stakeholder group of interest and that potential heterogeneity in preferences is considered when identifying participants.

Where scoring involves options being assessed directly, e.g., when using direct rating or pairwise comparisons to score options rather than converting performance into scores using partial value functions, it is possible that ratings are influenced by factors other than performance on the criterion. In this instance, it is important either to ensure that ratings are provided by disinterested parties, or to use sensitivity analysis to test the robustness of conclusions to variations in scores.

It is also recommended that the weighting exercise is undertaken before the scoring exercise, to avoid weights being influenced by knowledge of the performance of the options revealed during the scoring exercise. However, this is only possible if a global scaling approach is employed, as a local scaling approach requires the scoring exercise to be undertaken first to define the scales to be weighted. Otherwise, scoring and weighting are both undertaken without reference to the performance of options, and can be undertaken in either order.

Calculation of aggregate scores

Aggregation methods should be justified in light of both the decision problem and the nature of decision makers' preferences, and should be consistent with the scoring and weighting method adopted (see section 3.4). Typically, aggregation involves the weighted-sum approach: assuming a linear additive value function, scores are simply multiplied by weights and the weighted scores summed. This requires that criteria are compensatory, non-overlapping and preferentially independent (see section 3.2). Where these assumptions do not hold, it may be possible to redefine or add criteria to capture the source of non-additivity (Keeney and von Winterfeldt, 2009); otherwise, alternative aggregation approaches, such as multiplicative functions, are required, though these are rarely applied in health care (Belton and Stewart, 2002).

Unless there is consensus on weights and scores, two or more sets of inputs should be used in the aggregation to reflect heterogeneity in stakeholders' preferences. Results may vary with the order in which data are averaged and aggregated: averaging scores and weights before aggregating them may give a different result from aggregating scores and weights for each stakeholder and then averaging the results. It is recommended that the latter method is adopted, as it provides data on overall value from the perspective of each stakeholder; the sketch below illustrates the difference.
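In this sketch, the scores and weights (one option, two stakeholders, two criteria) are invented for illustration; the two orderings give different answers.

```python
import numpy as np

# scores[k][i]: stakeholder k's score for the option on criterion i (0-100);
# weights[k][i]: stakeholder k's normalized weights.
scores = np.array([[80.0, 20.0],
                   [30.0, 90.0]])
weights = np.array([[0.8, 0.2],
                    [0.2, 0.8]])

# Recommended: aggregate per stakeholder, then average the overall values.
per_stakeholder = (weights * scores).sum(axis=1)  # [68.0, 78.0]
print(per_stakeholder.mean())                     # 73.0

# Alternative: average weights and scores first, then aggregate.
print((weights.mean(axis=0) * scores.mean(axis=0)).sum())  # 55.0
```

The two procedures disagree because averaging first removes the alignment between each stakeholder's weights and scores; the recommended procedure also preserves the per-stakeholder values for reporting.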
Dealing with uncertainty

All MCDA studies should include an assessment of uncertainty, both parameter and structural, as it pertains to the decision problem being addressed. Several techniques are available to explore the impact of uncertainty, including deterministic sensitivity analysis, probabilistic sensitivity analysis, Bayesian frameworks, fuzzy set theory, and grey theory (Broekhuizen et al, 2015). The appropriate approach should be justified with reference to the key differences between these techniques, including: a) whether uncertainty in multiple parameters needs to be taken into account simultaneously; b) whether dependence relations exist between parameters; c) the availability of the necessary inputs (including parameter ranges or distributions); d) the ability of stakeholders to provide these inputs where necessary (including the time available to educate stakeholders to allow them to provide them); and e) the extra information yielded, including the visual representations generated.
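As one example of the approaches listed above, the sketch below implements a simple probabilistic sensitivity analysis over the weights: weight sets are drawn from a Dirichlet distribution centred on the base case, and the share of draws in which each option ranks first is recorded. All inputs, and the Dirichlet concentration, are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Illustrative base case: two options scored 0-100 on three criteria.
scores = {"Option A": np.array([70.0, 40.0, 90.0]),
          "Option B": np.array([60.0, 80.0, 50.0])}
base_weights = np.array([0.5, 0.3, 0.2])

# Draw weight sets around the base case; Dirichlet draws are non-negative
# and sum to one, so each draw is a valid weight vector.
draws = rng.dirichlet(base_weights * 50.0, size=5000)

values = {name: draws @ s for name, s in scores.items()}
share_a_first = np.mean(values["Option A"] >= values["Option B"])
print(f"Option A ranks first in {share_a_first:.0%} of weight draws")
```

Summaries like this "first-rank acceptability" are one of the visual and numerical devices that can help decision makers judge whether weight uncertainty matters for the decision.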

At a minimum, deterministic sensitivity analysis should be undertaken. However, other approaches should be considered where this does not provide sufficient reassurance that uncertainty does not impact the results of the MCDA. Where uncertainties in structural assumptions were identified in the process of conceptualizing and building the MCDA, such as in the criteria list, those assumptions should be documented and tested in scenario analyses. Uncertainty should not be included as a criterion in an MCDA, but rather treated as a source of variation in the structure of the MCDA and its parameter inputs.

Interpretation and reporting

It is important that the face validity of the results is assessed, to understand whether the MCDA outputs reflect the intuition of the decision makers. A lack of face validity is an opportunity to revisit the MCDA steps to ensure they have been undertaken appropriately. Is the list of criteria complete? Did stakeholders interpret the meaning of scores and weights correctly? Updates or corrections can be made, and the process of revisiting earlier steps can help stakeholders understand why the results differ from their expectations. This process can facilitate discussion to support decision making; it should not be seen as an opportunity to update the results in favor of stakeholders' preferred alternatives.

The MCDA results should be interpreted in light of the meaning of the scores and weights. This varies with the techniques employed and will need careful explanation to the stakeholders. The interpretation should also be undertaken in light of the analysis of the impact of uncertainty and heterogeneity on model outcomes, and of the validation tasks undertaken.

Reporting of the MCDA should include: a) the criteria; b) justification of their inclusion and reasons for excluding other criteria; c) the performance matrix; d) the scoring and weighting methods, including reasons for selecting these methods; e) the results of the scoring and weighting exercises; f) base-case results; g) results of the sensitivity and scenario analyses; and h) results of validation exercises.

3. MCDA CHECKLIST

This section presents a checklist designed to support the design, reporting and critical assessment of an MCDA study. Given the problem-contingent nature of the selection of appropriate MCDA methods, the checklist is not intended to prescribe the choice of specific methods.

Table 2: MCDA checklist (MCDA step; recommendations)

The decision problem
- Develop a clear description of the decision problem, including objectives, stakeholders, and options, and whether the objective is to value or rank alternatives.
- Validate the decision problem with decision makers and clinical experts.

Selecting and defining criteria
- Report the sources and justify the methods used to identify criteria, the long list of criteria identified, and the rationale for excluding criteria.
- Report and justify the criteria definitions, including the scales used to measure criteria.
- Summarise the structure of the criteria using a value tree.
- Validate the criteria and the value tree with stakeholders and against MCDA requirements (complete, non-redundant, non-overlapping, preferentially independent).

Measuring performance
- Report and justify the sources used to measure performance.
- Report the performance matrix, showing performance of alternatives against criteria, including relevant measures of variance.

Scoring and weighting
- Justify the population whose values should be elicited in the scoring and weighting exercise.
- Justify the methods used for scoring and weighting with reference to the decision problem and the stakeholders involved.
- Report the values of scores and weights, including relevant measures of variance.
- Validate the meaning of scores and weights with stakeholders.

Aggregation
- Report the aggregation function used.
- Validate with stakeholders that the analysis reflects how they expected their scores and weights to be used.

Dealing with uncertainty
- Report and justify variation (ranges, distributions) in parameter inputs, including sources, and list all assumptions made (including structural assumptions).
- Consider the effects on the results of parameter uncertainty and assumptions; at a minimum, deterministic sensitivity analysis should be performed. Justify the uncertainty analysis method(s) adopted.
- Consider the effects on results of variation in scores, weights and performance measures between sub-groups.

Interpretation
- Interpret the findings in light of the results of the validation undertaken, the meaning of the scores and weights employed, and the analysis of the impacts of uncertainty and heterogeneity.
- Report the results of the validation, including where results diverged from expectations and how this was resolved.

4. GOOD PRACTICE GUIDELINES: RESOURCES AND SKILLS

The successful implementation of MCDA requires investment in research team, expert, stakeholder, and management time. Different dimensions of the decision context will influence the resources available:

1. The time available to make a decision will vary between problems. For instance, HTA decisions have more time and resources available to them than shared decisions between a clinician and an individual patient.
2. The resources available to support a decision will vary between decisions and locations. More resources are likely to be made available by higher-income countries than by lower- and middle-income countries; by national-level decision makers than by regional or local-level decision makers; and to support repeat decisions rather than one-off decisions.

Methods should be selected based on their appropriateness to the decision problem, rather than to fit a resource envelope, although where more than one method is appropriate, resource availability may inform the selection. Furthermore, while it is good practice to implement a full MCDA, a partial MCDA may still improve decision making and require fewer resources.

The research team

The research team needs the time, technical expertise, and appropriate software to successfully implement the chosen method. Invariably, MCDA will require a multi-disciplinary team. The types of competencies required may include: a) identifying, reviewing and synthesizing evidence; b) workshop facilitation; c) survey design; d) management of the biases in preference elicitation; and e) statistical analysis, for instance the use of regression models to analyze the results of discrete choice experiments.

Many steps in the implementation of an MCDA may be supported by specialized decision-making software, of which at least 20 programs (mostly web-based) are widely available (Weistroffer et al, 2006). These are especially useful for problems involving relatively large numbers of alternatives and criteria and when employing weighting and scoring. Some of the software packages also support survey development and the collection of criteria weights.

Stakeholders' time and availability

The success of the MCDA will rely on the commitment of stakeholders, who will have other calls on their time. Their involvement may include determining and/or validating the criteria list; providing scores and weights; and reviewing, validating and interpreting results. A workshop-based method may require stakeholders to be available at the same time for multiple meetings. A survey-based method may be less demanding on stakeholders' time, but often involves larger samples of stakeholders. Many of the tasks involved in the MCDA will be unfamiliar to stakeholders and may require cognitive effort. It is important to allow sufficient time for piloting elicitation techniques, training participants on those techniques, discussing tasks and responses to facilitate learning, and validation.

Expert input

Expertise is required in the therapy area of interest and in the methods being employed. The multi-disciplinary nature of MCDA means that the team may need supplementing with experts in specific techniques, such as statistical analysis or workshop facilitation. It is often difficult to identify a team that has the necessary breadth of methodological and therapy-area expertise, particularly when evaluating treatments for multiple indications. In these circumstances, clinical expertise will often be a valuable addition to the team. Such expertise is also valuable within workshops run to elicit inputs into the MCDA, particularly when the participants, such as decision makers or members of the general population, are not themselves familiar with an indication.

CONCLUSION

The first task force report introduced MCDA, including: a) the motivation for its use; b) the steps commonly involved in undertaking MCDA; and c) the diversity of approaches employed to implement these steps.

This second report summarizes good practice when implementing MCDA. It provides guidance to help select and implement the appropriate MCDA approach for a particular decision problem. This includes considerations of theoretical relevance, but also guidance on the ability of stakeholders to contribute and on the time and resources required. This guidance is summarized in a checklist for those designing and reviewing MCDA applications in health care.

We emphasize the importance of, and illustrate how to implement, several key good practices when undertaking MCDA, including:

- Being clear about the objective of the analysis, and reflecting this objective in criteria selection.
- Being aware of the theoretical assumptions underlying methods, and carefully selecting criteria and scoring and weighting methods in light of these.
- Complying with the practices of good evidence-based medicine.
- Ensuring that stakeholders can effectively participate in the MCDA by providing appropriate training, information and support.
- Validating at each step in the MCDA.
- Transparent reporting and justification of the methods employed.
- Identification and testing of uncertainty in the structure and inputs of the MCDA.

We highlight the challenges and biases that researchers will face throughout the MCDA steps, and provide guidance on how to manage these. While this guidance is illustrated in the context of different decision problems, it is not the objective of this paper to address the many decision-specific issues facing the use of MCDA. For instance, an important debate concerning the use of MCDA for HTA is how to incorporate opportunity cost. The task force would like to explore issues such as these in subsequent report(s).

REFERENCES

1. Al-Janabi H, Flynn TN, Coast J (2011). Estimation of a preference-based carer experience scale. Med Decis Mak 31(3).
2. Baltussen R, Ten Asbroek AHA, Koolman X, Shrestha N, Bhattarai P, Niessen LW (2007). Priority setting using multiple criteria: should a lung health programme be implemented in Nepal? Health Policy Plan 22(3).
3. Belton V, Stewart TJ (2002). Multiple Criteria Decision Analysis: An Integrated Approach. Kluwer Academic Publishers.
4. Berkeley D, Humphreys P (1982). Structuring decision problems and the bias heuristic. Acta Psychologica.
5. Bots PWG, Hulshof JAM (2000). Designing multi-criteria decision analysis processes for priority setting in health policy. Journal of Multi-Criteria Decision Analysis 9(1-3).
6. Broekhuizen H, Groothuis-Oudshoorn CG, van Til JA, Hummel JM, IJzerman MJ (2015). A review and classification of approaches for dealing with uncertainty in multi-criteria decision analysis for healthcare decisions. Pharmacoeconomics 33(5).
7. Bridges JFP, Hauber AB, Marshall D, et al (2011). Conjoint analysis applications in health, a checklist: a report of the ISPOR Good Research Practices for Conjoint Analysis Task Force. Value Health 14.
8. Dakin H, Devlin N, Feng Y, Rice N, O'Neill P, Parkin D (2014). The influence of cost-effectiveness and other factors on NICE decisions. Health Economics. DOI: /hec
9. De Montis A, De Toro P, Droste-Franke B, Omann I, Stagl S (2000). Criteria for quality assessment of MCDA methods. In: 3rd Biennial Conference of the European Society for Ecological Economics, Vienna.
10. De Montis A, De Toro P, Droste-Franke B, Omann I, Stagl S (2005). Assessing the quality of different MCDA methods. In: Getzner M, Spash C, Stagl S (eds), Alternatives for Environmental Evaluation. Routledge, Abingdon, Oxon.
11. Defechereux T, Paolucci F, Mirelman A, Youngkong S, Botten G, Hagen TP, et al (2012). Health care priority setting in Norway: a multicriteria decision analysis. BMC Health Serv Res 12.
12. Devlin N, TBD (2015). Introduction to MCDA in Health Care Decision Making: Report 1 of the ISPOR MCDA Task Force. XXX
13. Dodgson J, Spackman M, Pearman A, Phillips L (2009). Multi-criteria Analysis: A Manual. Department for Communities and Local Government.
14. Dolan JG (2005). Patient priorities in colorectal cancer screening decisions. Health Expect 8(4).
15. Dyer JS, Sarin R (1979). Measurable multiattribute value functions. Operations Research 22.
16. EMA (2011). Benefit-risk methodology project. Work package 3 report: field tests.
17. EMA (2012). Benefit-risk methodology project. Work package 4 report: benefit-risk tools and processes.
18. FDA (2015). Patient preference information submission, review in PMAs, HDE applications, and de novo requests, and inclusion in device labelling. Draft guidance for industry, FDA staff, and other stakeholders. FDA.
19. Felli JC, Noel RA, Cavazzoni PA (2009). A multiattribute model for evaluating the benefit-risk profiles of treatment alternatives. Med Decis Mak 29(1).
20. Fischhoff B, et al (2011). Communicating Risks and Benefits: An Evidence-Based User's Guide. US FDA.
21. Getzner M, Spash CL, Stagl S (2005). Alternatives for Environmental Valuation. Routledge, New York.
22. Goetghebeur MM, Wagner M, Khoury H, Levitt RJ, Erickson LJ, Rindress D (2012). Bridging health technology assessment (HTA) and efficient health care decision making with multicriteria decision analysis (MCDA): applying the EVIDEM framework to medicines appraisal. Med Decis Mak 32(2).
23. Guitouni A, Martel J-M (1998). Tentative guidelines to help choosing an appropriate MCDA method. Eur J Oper Res 109.
24. Hansen P, Ombler F (2008). A new method for scoring additive multi-attribute value models using pairwise rankings of alternatives. Journal of Multi-Criteria Decision Analysis 15(3-4).
25. Hansen P, Hendry A, Naden R, Ombler F, Stewart R (2012). A new process for creating points systems for prioritising patients for elective health services. Clinical Governance: An International Journal 17.
26. Helm R, Scholl A, Manthey L, Steiner M (2004). Measuring customer preferences in new product development: comparing compositional and decompositional methods. IJPD 1(1).
27. Hughes et al (2013). Recommendations for the methodology and visualization techniques to be used in the assessment of benefit and risk of medicines. IMI-PROTECT Benefit-Risk Group.
28. Ivlev I, Kneppo P, Bartak M (2014). Multicriteria decision analysis: a multifaceted approach to medical equipment management. Technological and Economic Development of Economy 20(3).
29. Johnson FR, Lancsar E, Marshall D, et al (2013). Constructing experimental designs for discrete-choice experiments: report of the ISPOR Conjoint Analysis Experimental Design Good Research Practices Task Force. Value Health 16.
30. Keeney RL (2002). Common mistakes in making value trade-offs. Operations Research 50(6).
31. Keeney RL, Raiffa H (1976). Decisions with Multiple Objectives: Preferences and Value Tradeoffs. New York: Wiley. Reprinted in 1993 by Cambridge University Press.
32. Keeney RL, von Winterfeldt D (2009). Practical value models. Published Articles and Papers.
33. Köksalan M, Wallenius J, Zionts S (2011). Multiple Criteria Decision Making: From Early History to the 21st Century. Singapore: World Scientific.
34. Krantz DH, Luce RD, Suppes P, Tversky A (1971). Foundations of Measurement, Volume I. New York: Academic Press.
35. Kroese M, Burton H, Whittaker J, Lakshman R, Alberg C (2010). A framework for the prioritization of investment in the provision of genetic tests. Public Health Genomics 13(7-8).
36. Marsh K, Dolan P, Kempster J, Lugon M (2012). Prioritizing investments in public health: a multicriteria decision analysis. J Public Health (Oxf).
37. Marsh K, Lanitis T, Neasham D, Orfanos P, Caro J (2014). Assessing the value of healthcare interventions using multi-criteria decision analysis: a review of the literature. PharmacoEconomics 32(4).
38. Medical Device Innovation Consortium (MDIC) (2015). Patient centered benefit-risk project report: a framework for incorporating information on patient preferences regarding benefit and risk into regulatory assessments of new medical technology.
39. Montgomery AA, Harding J, Fahey T (2001). Shared decision making in hypertension: the impact of patient preferences on treatment choice. Fam Pract 18(3).
40. Montibeller G, von Winterfeldt D (2014). Cognitive and motivational biases in decision and risk analysis. LSE working paper.
41. Mühlbacher A, Bridges J, Bethge S, Nübling M, Gerber-Grote A, Markos Dintsios C, Scheibler F, Schwalm A, Wiegard B (2013). Choice-based conjoint analysis: pilot project to identify, weight, and prioritize multiple attributes in the indication hepatitis C. IQWiG Report.
42. NHS England (2015). Standard Operating Procedures: The Cancer Drugs Fund (CDF). Guidance to support operation of the CDF.
43. Olson DL, Mechitov AI, Moshkovich H (1999). Comparison of MCDA paradigms. Advances in Decision Analysis.


More information

Figure 1: Overview of approach for Aim 1

Figure 1: Overview of approach for Aim 1 Significance Despite allocating more than $11 billion each year to support clinical research, the system of federal funding for medical research lacks a coordinated approach for allocating resources based

More information

UK STEWARDSHIP CODE RESPONSE BY GENERATION INVESTMENT MANAGEMENT LLP OCTOBER 2016

UK STEWARDSHIP CODE RESPONSE BY GENERATION INVESTMENT MANAGEMENT LLP OCTOBER 2016 UK STEWARDSHIP CODE RESPONSE BY GENERATION INVESTMENT MANAGEMENT LLP OCTOBER 2016 THE UK STEWARDSHIP CODE The UK Stewardship Code (the Code ) was published by the (UK) Financial Reporting Council in 2010

More information

Module 11 SCENARIO PLANNING

Module 11 SCENARIO PLANNING Stage 2 - LEADERSHIP FOR STRATEGY Module 11 SCENARIO PLANNING The Leadership Academy is a learning and action programme for mayors, senior officials and elected representatives of local government. January

More information

Implementation of multiple criteria decision analysis approaches in the supplier selection process: a case study

Implementation of multiple criteria decision analysis approaches in the supplier selection process: a case study Implementation of multiple criteria decision analysis approaches in the supplier selection process: a case study Anabela Tereso, Department of Production and Systems Engineering / ALGORITMI Research Centre

More information

HTA Principles Survey Questionnaire

HTA Principles Survey Questionnaire I. Introduction This survey aims to inform health care decision makers in their rational and efficacious applications of health technology assessment (HTA) in Asia, while adhering to key established benchmarks.

More information

Manažment v teórii a praxi 2/2006

Manažment v teórii a praxi 2/2006 SELECTION OF THE INFORMATION SYSTEMS' DEVELOPMENT IN ENTERPRISES BY MULTI-CRITERIA DECISION-MAKING Vesna Čančer ABSTRACT This article presents the methodology for the selection of information systems development

More information

NICE methods of technology appraisal

NICE methods of technology appraisal NICE methods of technology appraisal Zoe Garrett Senior Technical Adviser, Centre for Health Technology Evaluation National Institute for Health and Care Excellence (NICE) Contents Function of NICE Work

More information

Evaluation & Decision Guides

Evaluation & Decision Guides SURGERY STRATEGIC CLINICAL NETWORK EVIDENCE DECISION SUPPORT PROGRAM Evaluation & Decision Guides 2014 Revision (v3) New ideas & Improvements Department of Surgery Evidence Decision Support Program Resource

More information

Making priority setting and resource allocation decisions from principles to practices

Making priority setting and resource allocation decisions from principles to practices Making priority setting and resource allocation decisions from principles to practices Craig Mitton and Howard Waldner University of British Columbia New Caledonia Solutions Prioritize Consulting craig.mitton@ubc.ca

More information

ADMINISTRATION OF QUALITY ASSURANCE PROCESSES

ADMINISTRATION OF QUALITY ASSURANCE PROCESSES ADMINISTRATION OF QUALITY ASSURANCE PROCESSES The organizational arrangements procedures outlined in this chapter have been found to be effective in higher education institutions in many parts of the world.

More information

NICE Guidelines: A Methodological Basis for Decision Making. Rod Taylor MSc, PhD Dept of Public Health & Epidemiology University of Birmingham

NICE Guidelines: A Methodological Basis for Decision Making. Rod Taylor MSc, PhD Dept of Public Health & Epidemiology University of Birmingham NICE Guidelines: A Methodological Basis for Decision Making Rod Taylor MSc, PhD Dept of Public Health & Epidemiology University of Birmingham Pre Meeting Symposium - ISPOR Annual Conference Washington

More information

Decision Support System (DSS) Advanced Remote Sensing. Advantages of DSS. Advantages/Disadvantages

Decision Support System (DSS) Advanced Remote Sensing. Advantages of DSS. Advantages/Disadvantages Advanced Remote Sensing Lecture 4 Multi Criteria Decision Making Decision Support System (DSS) Broadly speaking, a decision-support systems (DSS) is simply a computer system that helps us to make decision.

More information

Interview Tools for Common Job Clusters for the Behavioural Competencies

Interview Tools for Common Job Clusters for the Behavioural Competencies IV 72 This tool provides a selection of sample behavioural questions for the Behavioural Competencies and proficiency levels relevant to the Supervision competency profile. It also includes the procedures

More information

Developing practices for supporting EIA with Multi-Criteria Decision Analysis

Developing practices for supporting EIA with Multi-Criteria Decision Analysis Developing practices for supporting EIA with Multi-Criteria Decision Analysis Jyri Mustajoki 1*, Mika Marttunen 1, Timo P. Karjalainen 2, Joonas Hokkanen 3 and Anne Vehmas 4 1 Finnish Environment Institute,

More information

Good Practices for Synthesizing and Using Evidence in Health Care Decision Making?

Good Practices for Synthesizing and Using Evidence in Health Care Decision Making? Good Practices for Synthesizing and Using Evidence in Health Care Decision Making? An ISPOR Workshop presented by the ISPOR HTA Council Working Group ISPOR 22 nd Annual International Meeting Boston, MA,

More information

Introduction to Business Research 3

Introduction to Business Research 3 Synopsis Introduction to Business Research 3 1. Orientation By the time the candidate has completed this module, he or she should understand: what has to be submitted for the viva voce examination; what

More information

Introduction. Key Words: Analytic Hierarchy Process (AHP), Fuzzy AHP, Criteria weightings, Decisionmaking

Introduction. Key Words: Analytic Hierarchy Process (AHP), Fuzzy AHP, Criteria weightings, Decisionmaking Application of AHP and Fuzzy AHP to Decision-Making Problems in Construction Sangwook Lee, Ph.D. University of Wisconsin-Platteville Platteville, Wisconsin The approach to selecting key factors has been

More information

Value assessment methods and application of Multiple Criteria Decision Analysis for HTA

Value assessment methods and application of Multiple Criteria Decision Analysis for HTA Value assessment methods and application of Multiple Criteria Decision Analysis for HTA Aris Angelis Medical Technology Research Group, LSE Health Advance HTA Capacity Building, Mexico City November 2014

More information

Forum 1: BETTER, CHEAPER, FASTER: A REPORT ON THE SCIENCE OF OPTIMIZATION FROM THE ISPOR OPTIMIZATION TASK FORCE 6 November, 2017

Forum 1: BETTER, CHEAPER, FASTER: A REPORT ON THE SCIENCE OF OPTIMIZATION FROM THE ISPOR OPTIMIZATION TASK FORCE 6 November, 2017 Forum 1: BETTER, CHEAPER, FASTER: A REPORT ON THE SCIENCE OF OPTIMIZATION FROM THE ISPOR OPTIMIZATION TASK FORCE 6 November, 2017 ISPOR OPTIMIZATION METHODS EMERGING GOOD PRACTICES TASK FORCE Optimization

More information

TOOL #57. ANALYTICAL METHODS TO COMPARE OPTIONS OR ASSESS

TOOL #57. ANALYTICAL METHODS TO COMPARE OPTIONS OR ASSESS TOOL #57. ANALYTICAL METHODS TO COMPARE OPTIONS OR ASSESS PERFORMANCE 1. INTRODUCTION A crucial part of any retrospective evaluation is the assessment of the performance of the existing policy intervention.

More information

Issues in Strategic Decision Modelling

Issues in Strategic Decision Modelling Issues in Strategic Decision Modelling Paula Jennings BDO Stoy Hayward 8 Baker Street LONDON W1U 3LL ABSTRACT Models are invaluable tools for strategic planning. Models help key decision makers develop

More information

How Much Do We Know About Savings Attributable to a Program?

How Much Do We Know About Savings Attributable to a Program? ABSTRACT How Much Do We Know About Savings Attributable to a Program? Stefanie Wayland, Opinion Dynamics, Oakland, CA Olivia Patterson, Opinion Dynamics, Oakland, CA Dr. Katherine Randazzo, Opinion Dynamics,

More information

NATIONAL INSTITUTE FOR HEALTH AND CARE EXCELLENCE. Health and Social Care Directorate. Indicator Process Guide. Published December 2017

NATIONAL INSTITUTE FOR HEALTH AND CARE EXCELLENCE. Health and Social Care Directorate. Indicator Process Guide. Published December 2017 NATIONAL INSTITUTE FOR HEALTH AND CARE EXCELLENCE Health and Social Care Directorate Indicator Process Guide Published December 2017 Please note that this is an interim factual update to the NICE Indicator

More information

RESEARCH SUPPORT SERVICES FRAMEWORK. Streamlining the management and governance of R&D studies in the NHS

RESEARCH SUPPORT SERVICES FRAMEWORK. Streamlining the management and governance of R&D studies in the NHS RESEARCH SUPPORT SERVICES FRAMEWORK Streamlining the management and governance of R&D studies in the NHS Page 1 of 22 Contents 1. INTRODUCTION... 3 How to use this document... 3 Background... 4 Purpose

More information

Opportunities in the Health Care Practice at Analysis Group

Opportunities in the Health Care Practice at Analysis Group Opportunities in the Health Care Practice at Analysis Group Johns Hopkins Blomberg School of Public Health October 3, 2016 You Will Learn Today: Who the Analysis Group is What our Health Care Practice

More information

Cost-effectiveness and cost-utility analysis accompanying Cancer Clinical trials. NCIC CTG New Investigators Workshop

Cost-effectiveness and cost-utility analysis accompanying Cancer Clinical trials. NCIC CTG New Investigators Workshop Cost-effectiveness and cost-utility analysis accompanying Cancer Clinical trials NCIC CTG New Investigators Workshop Keyue Ding, PhD. NCIC Clinical Trials Group Dept. of Public Health Sciences Queen s

More information

Multi Criteria Evaluation

Multi Criteria Evaluation Working Group 3, task 3.2 'Options for integrating EST indicators' Multi Criteria Evaluation Working paper for the COST 356 Meeting in Torino, October 10-12, 2007 prepared by Patrick Wäger Technology and

More information

DECISION SUPPORT FOR SOFTWARE PACKAGE SELECTION: A MULTICRITERIA METHODOLOGY

DECISION SUPPORT FOR SOFTWARE PACKAGE SELECTION: A MULTICRITERIA METHODOLOGY 5-03-30 INFORMATION MANAGEMENT: STRATEGY, SYSTEMS, AND TECHNOLOGIES DECISION SUPPORT FOR SOFTWARE PACKAGE SELECTION: A MULTICRITERIA METHODOLOGY Farrokh Mamaghani INTRODUCTION Software selection is a critical

More information

Scientific Advisory Groups (SAG)

Scientific Advisory Groups (SAG) Scientific Advisory Groups (SAG) Experience and impact of patient involvement Presented by: Francesco Pignatti Human Medicines Evaluation Division, EMA Training session for patients and consumers involved

More information

Revolution, Evolution, or Status Quo? Guidelines for Efficiency Measurement in Health Care

Revolution, Evolution, or Status Quo? Guidelines for Efficiency Measurement in Health Care Revolution, Evolution, or Status Quo? Guidelines for Efficiency Measurement in Health Care Professor Bruce Hollingsworth Director, Monash University E-mail: bruce.hollingsworth@monash.edu Efficiency Measurement

More information

Future directions of benefit-risk assessment in Europe

Future directions of benefit-risk assessment in Europe Future directions of benefit-risk assessment in Europe ISPE Mid-year meeting, Munich, Germany 12 th April 2013 Presented by: Deborah Ashby Imperial College London Outline Challenges in medical decision-making

More information

Chapter 4 Fuzzy Analytic Hierarchy Process of Green Supply Chain Management in the Pharmaceutical Industry

Chapter 4 Fuzzy Analytic Hierarchy Process of Green Supply Chain Management in the Pharmaceutical Industry Chapter 4 Fuzzy Analytic Hierarchy Process of Green Supply Chain Management in the Pharmaceutical Industry 4.1 Introduction During the past decade with increasing environmental concerns, a consensus, the

More information

Identify Risks. 3. Emergent Identification: There should be provision to identify risks at any time during the project.

Identify Risks. 3. Emergent Identification: There should be provision to identify risks at any time during the project. Purpose and Objectives of the Identify Risks Process The purpose of the Identify Risks process is to identify all the knowable risks to project objectives to the maximum extent possible. This is an iterative

More information

MAKING OUTCOME-BASED PAYMENTS A REALITY IN THE NHS RESEARCH STUDY: PHASE INTRODUCTION. Research Brief

MAKING OUTCOME-BASED PAYMENTS A REALITY IN THE NHS RESEARCH STUDY: PHASE INTRODUCTION. Research Brief MAKING OUTCOME-BASED PAYMENTS A REALITY IN THE NHS RESEARCH STUDY: PHASE 1 1.0 INTRODUCTION Cancer Research UK (CRUK) believes cancer patients should have access to the best, evidence-based innovative

More information

Development of a Decision Support Tool for Assessing Vessel Traffic Management Requirements for U.S. Ports

Development of a Decision Support Tool for Assessing Vessel Traffic Management Requirements for U.S. Ports Development of a Decision Support Tool for Assessing Vessel Traffic Management Requirements for U.S. Ports John R. Harrald The George Washington University Washington, D.C. 20052 harrald@seas.gwu.edu Jason

More information

Frequently Asked Questions about Integrating Health Impact Assessment into Environmental Impact Assessment

Frequently Asked Questions about Integrating Health Impact Assessment into Environmental Impact Assessment Updated 11/20/13 Frequently Asked Questions about Integrating Health Impact Assessment into Environmental Impact Assessment 1. What is Health Impact Assessment (HIA)? Many land use and transportation decisions

More information

TIPS PREPARING AN EVALUATION STATEMENT OF WORK ABOUT TIPS

TIPS PREPARING AN EVALUATION STATEMENT OF WORK ABOUT TIPS NUMBER 3 2 ND EDITION, 2010 PERFORMANCE MONITORING & EVALUATION TIPS PREPARING AN EVALUATION STATEMENT OF WORK ABOUT TIPS These TIPS provide practical advice and suggestions to USAID managers on issues

More information

Including Real World Evidence (RWE) in network meta-analysis

Including Real World Evidence (RWE) in network meta-analysis Including Real World Evidence (RWE) in network meta-analysis David Jenkins, Reynaldo Martina, Sylwia Bujkiewicz, Pascale Dequen & Keith Abrams Department of Health Sciences, Background GetReal is a three-year

More information

Product Evaluation of Rijal Tashi Industry Pvt. Ltd. using Analytic Hierarchy Process

Product Evaluation of Rijal Tashi Industry Pvt. Ltd. using Analytic Hierarchy Process Proceedings of IOE Graduate Conference, 2016 pp. 239 244 Product Evaluation of Rijal Tashi Industry Pvt. Ltd. using Analytic Hierarchy Process Rojan Shrestha 1, Shree Raj Sakya 2 1,2 Department of Mechanical

More information

Citation for the original published paper (version of record):

Citation for the original published paper (version of record): http://www.diva-portal.org Postprint This is the accepted version of a paper published in Patient. This paper has been peer-reviewed but does not include the final publisher proof-corrections or journal

More information

House of Commons Science and Technology Committee: Inquiry on clinical trials and disclosure of data

House of Commons Science and Technology Committee: Inquiry on clinical trials and disclosure of data House of Commons Science and Technology Committee: Inquiry on clinical trials and disclosure of data Evidence submitted by the Medical Research Council, 26 February 2013 Introduction 1. The Medical Research

More information

DATA-INFORMED DECISION MAKING (DIDM)

DATA-INFORMED DECISION MAKING (DIDM) DATA-INFORMED DECISION MAKING (DIDM) Leadership and decision-making can benefit from data-informed decision making, also called evidencebased practice. DIDM supports decisions informed by measurable outcomes;

More information

Report on Guidelines for Health economic analyses of medicinal products

Report on Guidelines for Health economic analyses of medicinal products Anita Alban Hans Keiding Jes Søgaard March 1998 Report on Guidelines for Health economic analyses of medicinal products In 1997, the Danish Ministry of Health set up a working group consisting of: Research

More information

FDA from a Former FDAer: Secrets and insights into regulatory review and drug development

FDA from a Former FDAer: Secrets and insights into regulatory review and drug development FDA from a Former FDAer: Secrets and insights into regulatory review and drug development Andrew E. Mulberg, MD, FAAP Vice-President, Global Regulatory Affairs; Former Division Deputy, DGIEP, U.S. FDA

More information

White Paper January 2017 META-ANALYSIS FOR HEALTH TECHNOLOGY SUBMISSIONS WORLDWIDE: A REPORT CHECKLIST FOR BEST PRACTICE. Sarah Batson, Neil Webb

White Paper January 2017 META-ANALYSIS FOR HEALTH TECHNOLOGY SUBMISSIONS WORLDWIDE: A REPORT CHECKLIST FOR BEST PRACTICE. Sarah Batson, Neil Webb 05 NETWORK White Paper January 2017 META-ANALYSIS FOR HEALTH TECHNOLOGY SUBMISSIONS WORLDWIDE: A REPORT CHECKLIST FOR BEST PRACTICE Sarah Batson, Neil Webb Network meta-analysis (NMA) is an accepted statistical

More information

Minimum Elements and Practice Standards for Health Impact Assessment. North American HIA Practice Standards Working Group

Minimum Elements and Practice Standards for Health Impact Assessment. North American HIA Practice Standards Working Group Minimum Elements and Practice Standards for Health Impact Assessment Version 2 November 2010 Authorship and Acknowledgements This document represents a revision of version one of Practice Standards for

More information

Early Engagement: One Stop Shop

Early Engagement: One Stop Shop Early Engagement: One Stop Shop Indranil Bagchi, Ph.D. Vice President, Payer Insights & Access April 7, 2014 Insert presentation title, GH&V tagline, you and your groups name and date What is Early Scientific

More information

Life Cycle Assessment A product-oriented method for sustainability analysis. UNEP LCA Training Kit Module f Interpretation 1

Life Cycle Assessment A product-oriented method for sustainability analysis. UNEP LCA Training Kit Module f Interpretation 1 Life Cycle Assessment A product-oriented method for sustainability analysis UNEP LCA Training Kit Module f Interpretation 1 ISO 14040 framework Life cycle assessment framework Goal and scope definition

More information

An evaluation of preference weighting methods in multicriteria analysis: testing on sediment remediation alternatives in the Bergen Harbour

An evaluation of preference weighting methods in multicriteria analysis: testing on sediment remediation alternatives in the Bergen Harbour Sediment and Society - Technical Brief An evaluation of preference weighting methods in multicriteria analysis: testing on sediment remediation alternatives in the Bergen Harbour David N. Barton, Norwegian

More information

STAFFORD & SURROUNDS DECOMMISSIONING & DISINVESTMENT OF SERVICES

STAFFORD & SURROUNDS DECOMMISSIONING & DISINVESTMENT OF SERVICES Stafford & Surrounds Clinical Commissioning Group STAFFORD & SURROUNDS DECOMMISSIONING & DISINVESTMENT OF SERVICES Agreed at Governing Body 09 December 2013 Date:.. Signature:. Chair Stafford & Surrounds

More information

Convergence and difference in HTA approaches in UK, Germany and France: reflections on recent and proposed changes

Convergence and difference in HTA approaches in UK, Germany and France: reflections on recent and proposed changes Convergence and difference in HTA approaches in UK, Germany and France: reflections on recent and proposed changes Professor Ron Akehurst School of Health and Related Research Content of Talk During this

More information

Civil & Environmental Engineering

Civil & Environmental Engineering Journal of Civil & Environmental Engineering Civil & Environmental Engineering Peris-Mora and Velasco, 2015, 5:6 http://dx.doi.org/10.4172/2165-784x.1000193 ISSN: 2165-784X Research Article Open Access

More information

Composite Performance Measure Evaluation Guidance. April 8, 2013

Composite Performance Measure Evaluation Guidance. April 8, 2013 Composite Performance Measure Evaluation Guidance April 8, 2013 Contents Introduction... 1 Purpose... 1 Background... 2 Prior Guidance on Evaluating Composite Measures... 2 NQF Experience with Composite

More information

Supervisory Statement SS3/18 Model risk management principles for stress testing. April 2018

Supervisory Statement SS3/18 Model risk management principles for stress testing. April 2018 Supervisory Statement SS3/18 Model risk management principles for stress testing April 2018 Prudential Regulation Authority 20 Moorgate London EC2R 6DA Supervisory Statement SS3/18 Model risk management

More information

ACRE Consultation. Managing the Footprint of Agriculture: Towards a Comparative Assessment of Risks and Benefits for Novel Agricultural Systems

ACRE Consultation. Managing the Footprint of Agriculture: Towards a Comparative Assessment of Risks and Benefits for Novel Agricultural Systems ACRE Consultation Managing the Footprint of Agriculture: Towards a Comparative Assessment of Risks and Benefits for Novel Agricultural Systems Response from GeneWatch UK 8th June 2006 GeneWatch UK is a

More information

October 25, Division of Dockets Management (HFA-305) Food and Drug Administration 5630 Fishers Lane, Room 1061 Rockville, MD 20852

October 25, Division of Dockets Management (HFA-305) Food and Drug Administration 5630 Fishers Lane, Room 1061 Rockville, MD 20852 701 Pennsylvania Avenue, NW Suite 800 Washington, D.C. 20004 2654 Tel: 202 783 8700 Fax: 202 783 8750 www.advamed.org Division of Dockets Management (HFA-305) Food and Drug Administration 5630 Fishers

More information

LOSS DISTRIBUTION ESTIMATION, EXTERNAL DATA

LOSS DISTRIBUTION ESTIMATION, EXTERNAL DATA LOSS DISTRIBUTION ESTIMATION, EXTERNAL DATA AND MODEL AVERAGING Ethan Cohen-Cole Federal Reserve Bank of Boston Working Paper No. QAU07-8 Todd Prono Federal Reserve Bank of Boston This paper can be downloaded

More information

Decision Analysis Frameworks for Life-Cycle Impact Assessment

Decision Analysis Frameworks for Life-Cycle Impact Assessment Decision Analysis Frameworks for Life-Cycle Impact Assessment Jyri Seppälä, Lauren Basson, and Gregory A. Norris Keywords decision analysis life-cycle impact assessment (LCIA) methods multiple attribute

More information

Technology appraisal guidance Published: 28 September 2016 nice.org.uk/guidance/ta407

Technology appraisal guidance Published: 28 September 2016 nice.org.uk/guidance/ta407 Secukinumab for active ankylosing spondylitis after treatment with non- steroidal anti-inflammatory drugs or TNF-alpha inhibitors Technology appraisal guidance Published: 28 September 16 nice.org.uk/guidance/ta407

More information

Uncertainty, Expert Judgment, and the Regulatory Process: Challenges and Issues

Uncertainty, Expert Judgment, and the Regulatory Process: Challenges and Issues Uncertainty, Expert Judgment, and the Regulatory Process: Challenges and Issues Robert Hetes USEPA, National Health and Environmental Effects Research Laboratory DIMACS Workshop on the Science of Expert

More information

Transactions on Information and Communications Technologies vol 11, 1995 WIT Press, ISSN

Transactions on Information and Communications Technologies vol 11, 1995 WIT Press,   ISSN A quality assessment method for application management R.M. Hather, E.L. Burd, C. Boldyreff Centre for Software Maintenance, University of Durham, Durham, DEI 3EL, UK, Email: ames@durham.ac.uk Abstract

More information

ADAPTIVE PATHWAYS WORKSHOP

ADAPTIVE PATHWAYS WORKSHOP ADAPTIVE PATHWAYS WORKSHOP Stockholm Friday, November 10, 2017 9:00 13:00 Adaptive Pathways Applications for Scientific Insights in Europe PRESENTED BY: An ICON plc Company Agenda Overview 9:00 9:10 9:10

More information

TÁMOP /2/A/KMR

TÁMOP /2/A/KMR HEALTH ECONOMICS HEALTH ECONOMICS Sponsored by a Grant TÁMOP-4.1.2-08/2/A/KMR-2009-0041 Course Material Developed by Department of Economics, Faculty of Social Sciences, Eötvös Loránd University Budapest

More information

Comments on Key Performance Indicators ( KPI ) Matrix and Statistical Validity

Comments on Key Performance Indicators ( KPI ) Matrix and Statistical Validity D. 3826/11 SF 3.860 Appendix 2 Comments on Key Performance Indicators ( KPI ) Matrix and Statistical Validity We would note a number of concerns in relation to the approach that has been adopted in relation

More information

How to Separate Risk from Uncertainty in Strategic Forecasting Christian Schäfer

How to Separate Risk from Uncertainty in Strategic Forecasting Christian Schäfer How to Separate Risk from Uncertainty in Strategic Forecasting Christian Schäfer Preview Christian Schäfer takes us on a tour through pharmacological forecasting of the market potential for a new drug.

More information

Up to 68,400 (includes 14% contribution to pension) TBC - likely to be up to c 500,000 per annum

Up to 68,400 (includes 14% contribution to pension) TBC - likely to be up to c 500,000 per annum Job Description Job title: Salary: Base: Reports to: Senior Programme Manager Up to 68,400 (includes 14% contribution to pension) Wentworth House, Crawley Programme Director(s) Direct reports: Project

More information

Session 3 Uncertainty Assessment

Session 3 Uncertainty Assessment Session 3 Uncertainty Assessment Hand-on Training Workshop on National GHG Inventories - IPCC Cross-Cutting Issues 4-5-6 November 2015, Ankara Turkey Introduction Perfect accuracy and certainty impossible

More information

DIAGNOSTICS ASSESSMENT PROGRAMME

DIAGNOSTICS ASSESSMENT PROGRAMME Diagnostics Consultation Document s THEME: LEVEL OF EVIDENCE 1. Section Response 1. Provisional Recommendations (page 2) Biomedical, Inc., the manufacturer of the My5-FU assay, is disappointed in the draft

More information