PERFORMANCE ASSESSMENT METRICS

Project title:
Project number:
Document number: xxx
Revision number: 0
Framework Leader:

Document contributors:
Name, Title
Name, Title
Name, Title
Name, Title

Probate Party: Name, Title
Date approved:

Page 1 Performance Assessment Metrics xxxx Revision 0

History

Release   Reason for release
A         Initial draft for review
B         Second draft for review
C         Final draft for review
0         Initial release
1         Changes to sections 3 and 4

Detail of revision 1:
3.2.1: Corrected the spelling of "baef"
3.3.4: Added reference to section
Changed the title of the header

References
1. Investment-Centric Project Management, Chapters 4 and
2. Web-Added-Value (WAV) download: Unit Transformation Process (UTP)
3. Web-Added-Value (WAV) download: Baseline Asset Execution Framework
4. Web-Added-Value (WAV) free download: Investment-Centric Project Management
5. Web-Added-Value (WAV) free download: Direct Accountability

Executive summary

This document prescribes the methodology for quantitatively measuring the performance of the execution work of a project. The metrics developed herein reveal the underlying proficiencies and deficiencies of the work done by all parties, in order to learn from them, adjust the delivery strategy, and plan the follow-on work.

Table of Contents

Section 1 Introduction
  Summary: Quantifying what is actionable
  Theoretical foundation
Section 2 Method
  General approach: Modelling the scope of work
  Activities and tasks
  Activity metrics
  Activity calculations
  Metric calculation for the activity - the R factor
  Rolled-up productivity metrics
  Functional calculations: Execution metrics across a set of activities
  Activity roll-up - the F factor
  Lifecycle Phase assessment
  Functional Groups
  Execution metrics for a project phase - K Factor
  Project assessment - the SK factor
  The effects of multiple project phases
  Asset performance assessment
  Assessment of project delivery performance

List of Figures
  Figure 1 - Basis of metrics is the UTP
  Figure 2 - A sequence of UTP activities

List of Tables
  Table 1: Typical deliverables and metrics
  Table 2: Three-drawing example for R calculations
  Table 3: Productivity metrics for the three-drawing example
  Table 4: Interface metrics
  Table 5: Productivity metrics for the interfaces
  Table 6: F calculations
  Table 7: Weight factors by functional

Section 1 Introduction

1.1 Summary: Quantifying what is actionable

The Performance Assessment Metrics (PAMs) model utilizes quantifiable metrics gathered from real-time performance data derived from the execution of the development work (conceptualization and realization). The metrics provide a powerful project management process for real-time risk identification, execution management and continuous improvement. The model is predicated upon the measurement of execution outcomes inherent to a project's evolution throughout its lifecycle. The model is prescriptive only in the tabulation of the success factors that impact the assessment of the project; the types and number of those factors are left to the discretion of the project's participants. The model applies to single- and multi-phase projects (continuously executed or not) and quantifies the impact of successive phases upon each other, and their influence upon the overall performance of the resulting outcome.

1.2 Theoretical foundation

The Unit Transformation Process (UTP), shown in Figure 1, is the underlying building block of the PAM methodology. The work is schematically represented by the process of transforming the inputs into outputs. The attributes, targets and enablers provide the data and resources required to execute the work. The constraints define the acceptance criteria by which an output is gauged and approved. The characteristics yield the derivative data conveyed by the outputs. The metrics, finally, quantify the performance of the execution underlying the work: the PAMs, effectively. The diagonal line divides the activity into what is required to do the work (below the line) and what is yielded by the work (above the line). The PAMs measure the efficacy of the work to produce the yield.
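As a concrete sketch, the UTP of Figure 1 can be represented as a simple data container. The field names below mirror the UTP elements named above; they are illustrative assumptions, not a schema prescribed by this document.

```python
from dataclasses import dataclass, field

@dataclass
class UnitTransformationProcess:
    """A single unit of work that transforms inputs into outputs (Figure 1)."""
    name: str
    inputs: list = field(default_factory=list)       # what the work consumes
    outputs: list = field(default_factory=list)      # deliverables produced
    attributes: dict = field(default_factory=dict)   # data required to do the work
    targets: dict = field(default_factory=dict)      # performance targets per metric
    enablers: list = field(default_factory=list)     # resources (people, tools)
    constraints: list = field(default_factory=list)  # acceptance criteria for outputs
    metrics: dict = field(default_factory=dict)      # measured PAM values

    def is_measurable(self) -> bool:
        # An activity with no inputs or no outputs is not subject to measurement
        return bool(self.inputs) and bool(self.outputs)
```

A drawing activity, for instance, would carry its error and revision targets in `targets` and its measured outcomes in `metrics`.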

Figure 1 - Basis of metrics is the UTP

Section 2 Method

2.1 General approach: Modelling the scope of work

The execution of a project is broken down into a series of discrete activities, similar to Figure 1, that transform a set of inputs into another set of outputs. An example of a sequence involving two activities is shown in Figure 2. Note that an activity that does not produce an output, or one that does not require an input, is not subject to measurement.

Figure 2 - A sequence of UTP activities

Activities and tasks

To be measurable, an activity must produce an output that is a project deliverable, defined as an outcome that is acquired by the project's owner. Intermediate steps that lead to the deliverable are called tasks. The outcome of a task is considered a content element of the deliverable, and is thus judged either as a constraint or a characteristic. For example, an engineering drawing may require several discrete calculations, decisions on material selections, datasheets for instruments, CAD drafting, internal quality checks, and simulation modelling. The number of errors (compliance with a code specification, proper font size on the drawing, incorrect sizing, and wrong data specifications) would be a typical assessment metric for them.

The distinction between activity and task is formulated to limit the number of controlled variables for the execution work. The Framework Leader or the PMO project manager can, however, elect to assign a task as an activity, at their discretion. For example, engineering calculations could be treated as activities rather than tasks. The decision should always weigh the justification for greater granularity against the added administrative complexity of the PAM method.

Activity metrics

Each activity's performance is individually assessed against the activity performance metrics chosen for the activity. The types and number of metrics are selected by the Framework Leader or PMO project manager at the outset.
Metrics are divided into three types: execution, interface and equipment. Execution metrics assess the correctness of a deliverable resulting from the UTP process directly. Interface metrics pertain to the efficacy of the transition

of a UTP output into an input to a subsequent UTP. The equipment metrics pertain to the physical, algorithmic or operational performance of the plant's configuration. A summary of typical metrics is shown in Table 1.

Table 1: Typical deliverables and metrics

Deliverables:
3D models; agreements and letters; algorithms, codes, software; analyses; audits; calculations; codes, standards, prescriptions; contracts, purchase orders, material requisitions; datasheets; drawings and sketches; evaluations and assessments; inspections; letters, memoranda and directives; lists and tables; permits and licenses; plans and descriptions; procedures and processes; publications; registers; regulatory applications; reports; simulations; specifications; tender documentation.

Execution metrics:
Number of errors; budget; contract value; duration; duration of review; hours; negotiation time; number of acceptable bids; number of bidders; number of changes; number of deviations from SOE; number of formatting errors; number of incorrect inputs; number of incorrect outputs; number of omitted regulatory requirements; number of review cycles; number of reviews; number of standard noncompliance instances; preparation time; price; profitability; schedule delays; translation errors.

Interface metrics:
Number of noncompliance instances (constraints); number of incorrect inputs; number of incorrect output-input interface specifications; number of incorrect outputs; number of junction transfers processed simultaneously; number of missing inputs; number of missing outputs; number of task errors; remedial hours to fix interface; remedial labor costs to fix interface; remedial material costs to fix interface.

Equipment metrics:
Throughput; power/fuel consumption; capacity range; cumulative yearly downtime for planned maintenance; emissions; Mean Time Between Failures (MTBF); Mean Time To Repair (MTTR); schedule maintenance cycle; spare program cost; Total Installed Cost (TIC); Total Cost of Ownership (TCO); turndown ratio; uptime rating; utility consumption; weight.

2.2 Activity calculations

Metric calculation for the activity - the R factor

The activity metric is calculated as a ratio of the actual outcome to its predefined target using Equation 1:

R = ( (actual + min) / (target + min) )^M        (Equation 1)

Where M = 1 if the intent is to maximize R, or M = -1 if R is to be minimized. Maximizing implies better cost performance in the grand scheme of things (higher valunomy); minimizing implies fewer deficiencies (less re-work desired). For example, given a fixed budget and schedule, one wants to maximize the number of deliverables produced (M = 1) but minimize the number of errors found in those deliverables (M = -1).

The term min is the minimum measurable increment for the target value. For example, if the target error rate is 2 per drawing, the minimum measurable increment is 1. If the target is a fuel emission rate per kW, say 12 grams of CO2 per kW, the minimum measurable increment could be set at 0.5 grams. The role of the variable min is to allow null values for actual and target to be recorded without causing a division error.

An example for three engineering drawings is given in Table 2 and Table 3.

Table 2: Three-drawing example for R calculations

Deliverable     Execution metric  Target  Min  Objective  M   Actual  R
Drawing abcd-1  Errors            0       1    Minimize   -1
Drawing abcd-1  Revisions         3       1    Minimize   -1
Drawing abcd-1  Reviews           2       1    Minimize   -1
Drawing abcd-1  Duration (days)                Minimize   -1
Drawing abcd-1  Hours                          Minimize   -1
Drawing abcd-1  Cost              $960    $25  Minimize   -1
Drawing abcd-2  Errors            0       1    Minimize   -1
Drawing abcd-2  Revisions         1       1    Minimize   -1
Drawing abcd-2  Reviews           4       1    Minimize   -1
Drawing abcd-2  Duration (days)                Minimize   -1
Drawing abcd-2  Hours                          Minimize   -1
Drawing abcd-2  Cost              $700    $25  Minimize   -1
Drawing abcd-3  Errors            0       1    Minimize   -1
Drawing abcd-3  Revisions         3       1    Minimize   -1
Drawing abcd-3  Reviews           2       1    Minimize   -1
Drawing abcd-3  Duration (days)                Minimize   -1
Drawing abcd-3  Hours                          Minimize   -1

Drawing abcd-3  Cost              $550    $25  Minimize   -1

Rolled-up productivity metrics

Once the R value for each metric is calculated for a given activity (drawing abcd-1 in the example above), the values are summed across all activities (drawings abcd-1, -2 and -3). The results are tabulated in the bottom half of Table 3. These rolled-up R values will be used in the functional calculations below. Note in Table 3 that the target number of total drawings is 2, while the actual was 3; this illustrates maximization (M = 1).

Table 3: Productivity metrics for the three-drawing example

Productivity metric  Number  Min  Objective  Value  Total R
Quantity             2            Maximize
Errors               0       1    Minimize
Revisions            7       1    Minimize
Reviews              8       1    Minimize
Duration (days)                   Minimize
Hours                             Minimize
Cost                 $2,210  $25  Minimize

2.3 Functional calculations

Execution metrics across a set of activities

The example above includes three drawings, each quantified with several individual R values. The group of drawings forms a functional, through which the efficiency of the execution work can be calculated. The efficiency is a function of the seamless transfer of a UTP's outputs over to a subsequent UTP, as inputs, and is tallied via the interface metrics. The metrics are applied at the input-output junctions in Figure 2 and are tallied similarly to the execution metrics, as shown in Table 4 and Table 5.

Table 4: Interface metrics

Deliverable     Interface metric                                       Target  Min  Objective  M   Actual  R
Drawing abcd-1  Number of incorrect inputs                             0       1    Minimize   -1
Drawing abcd-1  Number of standard noncompliance                       1       1    Minimize   -1
Drawing abcd-1  Number of junction transfers processed simultaneously  4       1    Maximize   1
Drawing abcd-1  Approval duration for junction                                      Minimize   -1
Drawing abcd-1  Hours to fix interface                                              Minimize   -1
Drawing abcd-1  Costs to fix interface                                 $-      $25  Minimize   -1
Drawing abcd-2  Number of incorrect inputs                             0       1    Minimize   -1

Drawing abcd-2  Number of standard noncompliance                       1       1    Minimize   -1
Drawing abcd-2  Number of junction transfers processed simultaneously  6       1    Maximize   1
Drawing abcd-2  Approval duration for junction                                      Minimize   -1
Drawing abcd-2  Hours to fix interface                                              Minimize   -1
Drawing abcd-2  Costs to fix interface                                 $-      $25  Minimize   -1
Drawing abcd-3  Number of incorrect inputs                             0       1    Minimize   -1
Drawing abcd-3  Number of standard noncompliance                       1       1    Minimize   -1
Drawing abcd-3  Number of junction transfers processed simultaneously          1    Maximize   1
Drawing abcd-3  Approval duration for junction                                      Minimize   -1
Drawing abcd-3  Hours to fix interface                                              Minimize   -1
Drawing abcd-3  Costs to fix interface                                 $-      $25  Minimize   -1

Table 5: Productivity metrics for the interfaces

Productivity metric                                    Number  Min  Objective  Value  Total R
Quantity                                               3            Maximize
Number of incorrect inputs                             0       1    Minimize
Number of standard noncompliance                       3       1    Minimize
Number of junction transfers processed simultaneously  14      1    Maximize
Approval duration for junction                                      Minimize
Hours to fix interface                                              Minimize
Costs to fix interface                                 $-      $25  Minimize

Activity roll-up - the F factor

The productivity results for the activities are rolled up into a single metric named the F factor, which is calculated using Equation 2:

F = (1/n) * sum(R_i, i = 1..n)        (Equation 2)

Where n is the total number of productivity metrics and i a summation index. Using Table 3 and Table 5 to illustrate, each table includes seven metrics (coincidentally), yielding a value for n of 14. The corresponding F value is shown in Table 6.

Table 6: F calculations

Productivity metric                                    R

Quantity                                               1.50
Errors                                                 0.14
Revisions                                              0.73
Reviews                                                1.29
Duration (days)                                        1.00
Hours                                                  0.98
Cost                                                   0.90
Quantity                                               1.00
Number of incorrect inputs                             0.14
Number of standard non-compliance                      0.80
Number of junction transfers processed simultaneously
Approval duration for junction                         1.29
Hours to fix interface                                 0.01
Costs to fix interface                                 0.01
Total number of metrics                                14
F

2.4 Lifecycle Phase assessment

2.4.1 Functional Groups

Functionals associated with labor are grouped by labor type (engineering, procurement, management, construction, etc.). Functionals associated with equipment are grouped by installation. In both instances, the impact of one functional relative to the others will not, in general, be equal. In other words, whereas all labor activities must be carried out, the performance of one, say engineering, bears a greater impact on the overall performance of the project than, say, health and safety. This relative impact is quantified using a weight factor, W, assigned to each functional. The value ranges from 0 to 1, with 0 having no influence and 1 maximum impact. An example is shown in Table 7:

Table 7: Weight factors by functional

Engineering      W    Supply Chain      W    Administration        W
Architectural    0.5  Contracts         1    Clerical              0.1
Chemical         1    Expediting        0.6  Document control      1
Electrical       1    Logistics         0.8  IT systems            0.6
Instrumentation  1    Procurement       1    Public relations      0.4
Mechanical       1    Materiel control  1    Regulatory relations  1
Project          0.8  Transportation    0.6
Structural       1    Warehousing       0.4
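The R and F calculations of Equations 1 and 2 can be sketched as below. The sample values are assumptions patterned on the tables above (the actual outcomes are not all legible in the source), so the numbers are illustrative only.

```python
def r_factor(actual: float, target: float, minimum: float, maximize: bool) -> float:
    """Equation 1: R = ((actual + min) / (target + min)) ** M,
    with M = +1 to maximize and M = -1 to minimize. The minimum
    measurable increment keeps a zero actual or target from
    causing a division error."""
    m = 1 if maximize else -1
    return ((actual + minimum) / (target + minimum)) ** m

def f_factor(r_values: list) -> float:
    """Equation 2: F = (1/n) * sum(R_i), the plain average of the n R values."""
    return sum(r_values) / len(r_values)

# Error count: target 0 errors, min 1, 6 errors found (assumed)
r_errors = r_factor(actual=6, target=0, minimum=1, maximize=False)    # (0+1)/(6+1)

# Drawing count: target 2, 3 produced, min 1, maximize
r_quantity = r_factor(actual=3, target=2, minimum=1, maximize=True)   # (3+1)/(2+1)

# F over an assumed set of rolled-up R values
f = f_factor([1.50, 0.14, 0.73, 1.29, 1.00, 0.98, 0.90])
```

Note that a minimized metric rewards low actuals (R above 1 when the actual beats the target), while a maximized metric rewards high actuals, so both kinds average together consistently in F.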

2.4.2 Execution metrics for a project phase - K Factor

The overall execution performance of a project phase is calculated with Equation 3, where m is the number of functionals and j a summation index:

K = (1/m) * sum(W_j * F_j, j = 1..m)        (Equation 3)

The value K generates a single number that can be less than, equal to, or greater than one. The unitary value corresponds to a project that meets 100% of the performance targets established at the outset. K values exceeding one indicate that targets were exceeded. Conversely, values lower than one highlight missed targets (or, perhaps, unrealistic targets).

2.5 Project assessment - the SK factor

The effects of multiple project phases

The assessment of the execution across two or more lifecycle phases is done with Equation 4, where P is the number of project phases and l is a summation index:

SK = sum(Y_l * K_l * C_l, l = 1..P)        (Equation 4)

This equation takes into account the execution performance of each phase and its impact upon the performance of inter-dependent phases. The impact is quantified through three additional parameters: the phase's allocation, A_p; the completeness factor of the phase, C_p; and the phase's relative weight upon the entire execution, Y_p. These parameters are defined as follows:

A_p = (Cost x Time)_p        (Equation 5)

C_p = A_p / (A_extra + A_p)        (Equation 6)

Y_p = A_p / A_S        (Equation 7)

In these formulae, the subscript p refers to a given phase; l is a summation index; A_S is the allocation for the entire project; and P is the number of phases in the project.
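Equations 3 through 7 can be sketched as below. Equation 3 is read here as the W-weighted sum of the functional F values divided by the number of functionals, as reconstructed above, and all numeric inputs would come from the project's own tabulations.

```python
def k_factor(weights: list, f_values: list) -> float:
    """Equation 3: K = (1/m) * sum(W_j * F_j) over the m functionals."""
    assert len(weights) == len(f_values)
    return sum(w * f for w, f in zip(weights, f_values)) / len(weights)

def allocation(cost: float, time: float) -> float:
    """Equation 5: A_p = (Cost x Time)_p for one phase."""
    return cost * time

def completeness(a_p: float, a_extra: float) -> float:
    """Equation 6: C_p = A_p / (A_extra + A_p), where A_extra is the extra
    allocation spent fixing the preceding phase's deficiencies."""
    return a_p / (a_extra + a_p)

def phase_weight(a_p: float, a_s: float) -> float:
    """Equation 7: Y_p = A_p / A_S, the phase's share of the total allocation."""
    return a_p / a_s

def sk_factor(y: list, k: list, c: list) -> float:
    """Equation 4: SK = sum(Y_l * K_l * C_l) over the P project phases."""
    return sum(yl * kl * cl for yl, kl, cl in zip(y, k, c))
```

For a single phase with no rework (A_extra = 0), C_p = 1 and Y_p = 1, so SK reduces to that phase's K.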

The variable A_extra represents the additional time and cost incurred to address the deficiencies of the deliverables of the preceding phase. Note that it is possible for A_extra to be negative, which would signify that the outcome of the preceding phases was more efficient than expected and helped accelerate the execution of the next phase.

2.6 Asset performance assessment

The K and SK metrics pertain to the execution efficiency of a project, not the actual performance of the asset. The latter requires quantifying the asset's operational success based on parameters that include productivity, plant throughput, energy consumption, material rejection rates, plant reliability, daily number of visitors, orders from online marketing campaigns, market share and revenue growth. These metrics are captured by the equipment metrics of Table 1. The metrics calculations for the asset follow the same method explained above, using the operational parameters R_o, F_o and K_o.

2.7 Assessment of project delivery performance

The overall assessment of the delivery of a project is derived from an economic analysis of the resulting asset. Financial metrics are an intuitive choice for assessing the valunomy of the capital deployed to complete the project. However, the effects of the execution may not emerge explicitly from those tabulations. One may wish to understand how efficiently the project allocations were deployed to produce the asset performance. Such an understanding is critical to continuously improve the project execution process and increase the probability of success in delivering similar future assets. To that end, a new coefficient Q is utilized to measure the efficiency of the allocation deployment, calculated using Equation 8:

Q = K_o / SK        (Equation 8)

A null value signifies a complete financial failure of the project, indicative of an asset that is inoperative. A value of 1.0 for Q validates the initial investment decision and the development strategy. Higher values mean greater bang for the investment buck, and better financial returns for the owner.
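Equation 8, read here as the ratio of the asset's operational K to the project's SK, reduces to a one-liner; the sample figures below are assumptions.

```python
def q_factor(k_o: float, sk: float) -> float:
    """Equation 8: Q = K_o / SK, the asset performance obtained per unit
    of project execution performance (as reconstructed here)."""
    if sk == 0:
        raise ValueError("SK must be non-zero")
    return k_o / sk

# An inoperative asset (K_o = 0) yields Q = 0: complete financial failure.
# Q = 1.0 validates the investment decision; Q > 1 means better returns.
q = q_factor(k_o=1.1, sk=1.0)
```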