Cost Model Comparison Report


Cost Model Comparison Report
October 31, 2006 Update Version

Prepared for: NASA Ames
Prepared by: University of Southern California, Center for Software Engineering
941 West 37th Place, Los Angeles, CA
Cooperative Agreement No. NNA06CB29A

Table of Contents

1. Introduction
   1.1 Cost Models Used
       COCOMO II
       True S (formerly PRICE S)
       SEER-SEM
2. Model Comparison
   2.1 Algorithms
   2.2 Size
   2.3 Cost Factor Rosetta Stones
       2.3.1 COCOMO II to SEER-SEM
       2.3.2 COCOMO II to PRICE S and True S
       2.3.3 Integrated Detailed Rosetta Stone
   2.4 Phases and Activities
       True S
       SEER
   2.5 Critical Domain Factors
3. Model Analysis Against NASA 94 Project Data
   3.1 Analysis Method
   3.2 Outlier Analysis
   3.3 COCOMO
   3.4 SEER
   3.5 True S
4. Effects of Calibration and Knowledge Bases
   4.1 COCOMO II
   4.2 SEER
   4.3 True S
   4.4 Model Performance Summaries
   4.5 Additional Analysis Runs
5. Conclusions and Future Work
   5.1 References
   Acknowledgements
Appendix A: Model Analysis Results with Outlier Project
   COCOMO II
   SEER-SEM
   True S
Appendix B: NASA 94 Original and Transformed Data

1. Introduction

This research is assessing the strengths, limitations, and improvement needs of existing cost, schedule, quality and risk models for critical flight software for the NASA Ames project Software Risk Advisory Tools. This particular report focuses only on the cost model aspect and supersedes the cost model sections in a previously delivered draft report [USC-CSE 2006].

A comparative survey and analysis of cost models used by NASA flight projects is described. The models include COCOMO II, SEER-SEM and True S. We look at evidence of accuracy, the need for calibration, and the use of knowledge bases to reflect specific domain factors. The models are assessed against a common database of relevant NASA projects. The primary focus is on flight projects, but part of the work also looks at related sub-domains for critical NASA software, which are assessed as applicable in some of the following analyses. This report also addresses the critical NASA domain factors of high reliability and high complexity, and how the cost models handle them. Transformations between the models are also developed, so that projects can be represented in all models in a consistent manner and to help understand why estimates may vary. There is a more thorough treatment of the USC public-domain COCOMO II related models because they are planned for usage in the current research, and the current datasets are in the COCOMO format. Conclusions for existing model usage and new model development are provided. The SEER-SEM and True S model vendors provided minor support, but do not yet certify or sanction the data or information contained in this report.

1.1 Cost Models Used

The most frequently used cost and schedule models for critical flight software being evaluated are the COCOMO II, True S (previously PRICE S) and SEER-SEM parametric models. COCOMO II is a public domain model that USC continually updates and is implemented in several commercial tools. True S and SEER-SEM are both proprietary commercial tools with unique features, but they also share some aspects with COCOMO. All three have been extensively used and tailored for flight project domains. Other industry cost models such as SLIM, Checkpoint and Estimacs have not been nearly as frequently used for flight software and are more oriented towards business applications. A previous comparative survey of software cost models can be found in [Boehm et al. 2000b].

A previous study at JPL analyzed the same three models, COCOMO II, SEER-SEM and PRICE S, with respect to some of their flight and ground projects [Lum et al. 2001]. In that case each model estimate was a separate data point. The current approach differs because the data used only came in the COCOMO model format and required translation to the other models.

COCOMO II

The COCOMO (COnstructive COst MOdel) cost and schedule estimation model was originally published in [Boehm 1981]. The COCOMO II research effort was started in 1994, and the model continues to be updated at USC, the home institution of research for the COCOMO model family. COCOMO II, defined in [Boehm et al. 2000], has three submodels: Applications Composition, Early Design and Post-Architecture. They can be combined in various ways to deal with different software environments. The Application Composition model is used to estimate effort and schedule on projects typically done as rapid application development. The Early Design model involves the exploration of alternative system architectures and concepts of operation. Typically, not enough is known at that point to make a detailed fine-grained estimate. This model is based on function points (or lines of code when available), a set of five scale factors, and seven effort multipliers.

The Post-Architecture model is used when the top-level design is complete, detailed information about the project is available, and the software architecture is well defined. It uses Source Lines of Code and/or Function Points for the sizing parameter, adjusted for reuse and breakage; a set of 17 effort multipliers; and a set of five scale factors that determine the economies/diseconomies of scale of the software under development.

USC provides a public domain tool for COCOMO II. The primary vendor tools that offer the COCOMO II model family include the following:

- Costar, offered by Softstar Systems, has a complete COCOMO II implementation with tools for calibration and the Constructive Systems Engineering Model (COSYSMO). See the Softstar Systems website. Softstar Systems provided a COCOMO II calibration spreadsheet used in support of this research (see Acknowledgements).
- The Cost Xpert tool, offered by the Cost Xpert Group, has a superset of the COCOMO II Post-Architecture submodel. It has additional linear cost drivers and additional constraint factors on effort and schedule. See the Cost Xpert Group website.
- The True Planner tool from PRICE Systems has a COCOMO II model that can be invoked in lieu of the True S model.

The remainder of this report only considers the COCOMO II Post-Architecture submodel.

True S (formerly PRICE S)

True S is the updated product to the PRICE S model offered by PRICE Systems. PRICE S was originally developed at RCA for use internally on software projects such as the Apollo moon program, and was then released in 1977 as a proprietary model. Many of the model's central algorithms were published in [Park 1988]. See the PRICE Systems website.

The PRICE S model consists of three submodels that enable estimating costs and schedules for the development and support of computer systems. The model covers business systems, communications, command and control, avionics, and space systems. PRICE S includes features for reengineering, code generation, spiral development, rapid development, rapid prototyping, object-oriented development, and software productivity measurement. Size inputs include SLOC, function points and/or Predictive Object Points (POPs).

The switch to True S is taking place during this work. Hence some of the descriptions retain the old PRICE S terminology (such as the Rosetta Stone) while we move towards a complete True S implementation. All numeric estimate results are for the latest True S model.

SEER-SEM

SEER-SEM is a product offered by Galorath, Inc. This model is based on the original Jensen model [Jensen 1983], and has been on the market for over 15 years. Its parametric modeling equations are proprietary. Descriptive material about the model can be found in [Galorath-Evans 2006].

The scope of the model covers all phases of the project life-cycle, from early specification through design, development, delivery and maintenance. It handles a variety of environmental and application configurations, and models different development methods and languages. Development modes covered include object oriented, reuse, COTS, spiral, waterfall, prototype and incremental development. Languages covered are 3rd and 4th generation languages (C++, FORTRAN, COBOL, Ada, etc.), as well as application generators. The SEER-SEM cost model allows probability levels of estimates and constraints on staffing, effort or schedule, and it builds estimates upon a knowledge base of existing projects. Estimate outputs include effort, cost, schedule, staffing, and defects. Sensitivity analysis is also provided. Many sizing methods are available, including lines of code and function points. See the Galorath Inc. website.

2. Model Comparison

This section describes the major similarities and differences between the models. Analyses of their performance against project data, calibration and knowledge bases are addressed in Sections 3 and 4.

2.1 Algorithms

As described in [Lum et al. 2001], all three models essentially boil down to the common effort formula shown in Figure 1. Size of the software is provided in a number of available units, cost factors describe the overall environment, and calibrations may take the form of coefficients adjusted for actual data or other types of factors that account for domain-specific attributes. The total effort is calculated and then decomposed by phases or activities according to different schemes in the model.

Effort = A * Size^B * EM

where Effort is in person-months, A is a calibrated constant, B is a scale factor, and EM is the effort multiplier composed from the cost factors. Size and the cost factors are the inputs, calibrations adjust the constants, and the resulting effort is decomposed by phase and activity.

Figure 1: Common Core Effort Formula

All models allow size to be expressed as lines of code, function points, object-oriented metrics and others. Each model has its own respective cost factors for the linear effort multiplier term, and each model specifies the B scale factor in slightly different ways (either directly or through other factors). The True S and SEER-SEM models have factors for the project type or domain, which COCOMO II currently does not. The model WBS phases and activities are addressed in Section 2.4.
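The common core formula of Figure 1 can be expressed directly in code. The following Python sketch is illustrative only: the exponent and the multiplier values shown are hypothetical placeholders rather than calibrated values from any of the three models (the constant 2.96 is the COCOMO II default coefficient cited later in this report).

```python
def core_effort(size_ksloc, A, B, effort_multipliers=()):
    """Common core formula: Effort = A * Size^B * EM, in person-months.

    A  - calibrated constant
    B  - scale factor (economies/diseconomies of scale)
    EM - product of the effort multipliers derived from the cost factors
    """
    em = 1.0
    for multiplier in effort_multipliers:
        em *= multiplier
    return A * (size_ksloc ** B) * em

# Hypothetical example: 50 KSLOC with three cost-factor multipliers applied
print(core_effort(50, A=2.96, B=1.10, effort_multipliers=(1.26, 1.17, 0.87)))
```

Each model differs in how A, B and the multipliers are set, but all three reduce to this multiplicative core before the phase and activity decomposition.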

2.2 Size

All models support size inputs for new and adapted software, where adapted software can be modified or reused without change. Automatically translated or generated code is also supported in some of the models. The models differ with respect to their detailed parameters for the categories of software, as shown in Table 1. Commercial Off-The-Shelf (COTS) software is not addressed here, but is a future research activity. COCOMO II can treat COTS as reused software or be used in conjunction with the COCOTS model [Boehm et al. 2000]. SEER-SEM and True S have more extensive COTS models.

Table 1: Model Size Inputs

New Software
- COCOMO II: New Size
- SEER-SEM: New Size
- True S: New Size; New Size Non-executable

Adapted Software
- COCOMO II: Adapted Size; % Design Modified (DM); % Code Modified (CM); % Integration Required (IM); Assessment and Assimilation (AA); Software Understanding (SU) 1; Programmer Unfamiliarity (UNFM) 1
- SEER-SEM: Pre-exists Size 2; Deleted Size; Redesign Required %; Reimplementation Required %; Retest Required %
- True S: Adapted Size; Adapted Size Non-executable; % of Design Adapted; % of Code Adapted; % of Test Adapted; Reused Size; Reused Size Non-executable; Deleted Size; Code Removal Complexity

Automatically Translated and Generated Code
- COCOMO II: Adapted SLOC; Automatic Translation Productivity; % of Code Reengineered
- True S: Auto Generated Code Size; Auto Generated Size Non-executable; Auto Translated Code Size; Auto Translated Size Non-executable

1 - Not applicable for reused software
2 - Specified separately for Designed for Reuse and Not Designed for Reuse

COCOMO II allows for sizing in SLOC or function points. SEER-SEM and True S provide both of those along with additional size units. User-defined proxy sizes can be developed for any of the models and converted back to SLOC or function points. Future work can also be undertaken to develop model translations between the size input parameters. These would consist of rules or guidelines to convert size inputs between models, and can be supplemented with knowledge base settings.
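As an illustration of how the COCOMO II adaptation inputs in Table 1 combine, the sketch below computes an equivalent new size per the reuse model published in [Boehm et al. 2000]. It is a simplified illustration with made-up example values, not a substitute for any tool's implementation.

```python
def cocomo2_equivalent_size(adapted_size, dm, cm, im, aa=0.0, su=0.0, unfm=0.0):
    """Equivalent new size for adapted code per the COCOMO II reuse model.

    dm, cm, im - % design modified, % code modified, % integration required
    aa         - assessment and assimilation increment (0-8)
    su, unfm   - software understanding (10-50) and programmer unfamiliarity (0-1);
                 both left at 0 for code reused without modification
    """
    aaf = 0.4 * dm + 0.3 * cm + 0.3 * im            # adaptation adjustment factor
    if aaf <= 50:
        aam = (aa + aaf * (1 + 0.02 * su * unfm)) / 100.0
    else:
        aam = (aa + aaf + su * unfm) / 100.0
    return adapted_size * aam

# Example: 20 KSLOC of modified code (10% design, 20% code, 30% integration changed)
print(cocomo2_equivalent_size(20, dm=10, cm=20, im=30, aa=4, su=30, unfm=0.4))
```

SEER-SEM and True S apply analogous adjustments through their own adaptation parameters listed in Table 1, so a size-level Rosetta Stone would map these inputs rather than the resulting equivalent size.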

2.3 Cost Factor Rosetta Stones

This section describes the mappings, or transformations, between cost factors in the different models. With this information COCOMO II estimate inputs can be converted into corresponding SEER-SEM or True S (or PRICE S) inputs, or vice-versa. It also illustrates differences in the models to help understand why estimates may vary. Top-level Rosetta Stones map the factors between the models, and the detailed ones map the individual ratings between the corresponding factors.

An integrated top-level Rosetta Stone for all of the COCOMO II factors is shown in Table 2. Most of the mappings between factors are one to one, but some are one to n (e.g. SEER-SEM has platform factor ratings split into target and host). In the case of True S, many of the COCOMO II factors have direct corollaries to sub-factors in aggregate True S factors. For example, the COCOMO personnel factors are represented as sub-factors under the aggregate True S factor for Development Team Complexity. Table 3 and Table 4 show the additional factors in SEER-SEM and True S for which there are no analogs in COCOMO II.

Table 2: Integrated Top-Level Rosetta Stone for COCOMO II Factors
(Each entry lists COCOMO II Factor: SEER-SEM Factor(s) / True S Factor(s))

SCALE DRIVERS
- Precedentedness: none / none
- Development Flexibility: none / Operating Specification
- Architecture/Risk Resolution: none / none
- Team Cohesion: none / Development Team Complexity
- Process Maturity: none 1 / Organization Productivity - CMM Level

PRODUCT ATTRIBUTES
- Required Software Reliability: Specification Level - Reliability / Operating Specification
- Data Base Size: none / Code Size non Executable
- Product Complexity: Complexity (Staffing), Application Class Complexity / Functional Complexity
- Required Reusability: Reusability Level Required, Software Impacted by Reuse / Design for Reuse
- Documentation Match to Lifecycle Needs: none / Operating Specification

PLATFORM ATTRIBUTES
- Execution Time Constraint: Time Constraints / Project Constraints - Communications and Timing
- Main Storage Constraint: Memory Constraints / Project Constraints - Memory & Performance

- Platform Volatility: Target System Volatility, Host System Volatility / Hardware Platform Availability 3

PERSONNEL ATTRIBUTES
- Analyst Capability: Analyst Capability / Development Team Complexity - Capability of Analysts and Designers
- Programmer Capability: Programmer Capability / Development Team Complexity - Capability of Programmers
- Personnel Continuity: none / Development Team Complexity - Team Continuity
- Application Experience: Application Experience / Development Team Complexity - Familiarity with Platform
- Platform Experience: Development System Experience, Target System Experience / Development Team Complexity - Familiarity with Product
- Language and Toolset Experience: Programmer's Language Experience / Development Team Complexity - Experience with Language

PROJECT ATTRIBUTES
- Use of Software Tools: Software Tool Use / Design Code and Test Tools
- Multi-site Development: Multiple Site Development / Multi Site Development
- Required Development Schedule: none 2 / Start and End Date

1 - SEER-SEM Process Improvement factor rates the impact of improvement, not the CMM level
2 - Schedule constraints handled differently
3 - A software assembly input factor

Table 3: SEER-SEM Cost Factors with no COCOMO II Mapping

PERSONNEL CAPABILITIES AND EXPERIENCE
- Practices and Methods Experience

DEVELOPMENT SUPPORT ENVIRONMENT
- Modern Development Practices
- Logon thru Hardcopy Turnaround
- Terminal Response Time
- Resource Dedication
- Resource and Support Location
- Process Volatility

PRODUCT DEVELOPMENT REQUIREMENTS
- Requirements Volatility (Change) 1
- Test Level 2
- Quality Assurance Level 2
- Rehost from Development to Target

PRODUCT REUSABILITY
- Software Impacted by Reuse

DEVELOPMENT ENVIRONMENT COMPLEXITY
- Language Type (Complexity)
- Host Development System Complexity
- Application Class Complexity 3
- Process Improvement

TARGET ENVIRONMENT
- Special Display Requirements
- Real Time Code
- Security Requirements

1 - COCOMO II uses the Requirements Evolution and Volatility size adjustment factor
2 - Captured in the COCOMO II Required Software Reliability factor
3 - Captured in the COCOMO II Complexity factor

Table 4: True S Cost Factors with no COCOMO II Mapping

To be provided.

2.3.1 COCOMO II to SEER-SEM

Table 5 shows the detailed correspondence between COCOMO II and SEER-SEM factors, with guidelines to convert ratings between the two models for applicable factors. In some cases the SEER-SEM factors cover different ranges than COCOMO, and some of the conversions in Table 5 are best approximations. Not all factors have direct corollaries. The settings of the SEER-SEM factors may be defaulted according to project type and domain choices in the knowledge bases.

Table 5: COCOMO II to SEER-SEM Factors
(For each factor, the COCOMO II ratings and the corresponding SEER-SEM ratings are listed in order.)

SCALE DRIVERS
- Precedentedness → none
- Development Flexibility → none
- Architecture/Risk Resolution → none
- Team Cohesion → none
- Process Maturity → none 1

PRODUCT ATTRIBUTES
- Required Software Reliability (Very Low, Low, Nominal, High, Very High) → Specification Level - Reliability 2 (Very Low-, Low, Nominal, High, High+)
- Data Base Size → none
- Product Complexity (Very Low, Low, Nominal, High, Very High, Extra High) → Complexity (Staffing) 3 (Very Low, Low, Nominal, High, Very High, Extra High)
- Required Reusability (Nominal, High, Very High, Extra High) → Reusability Level Required (Nominal, High, Very High, Extra High)
- Documentation Match to Lifecycle Needs → none

PLATFORM ATTRIBUTES
- Execution Time Constraint (Nominal, High, Very High, Extra High) → Time Constraints (Nominal, Nominal, High, Very High)
- Main Storage Constraint (Nominal, High, Very High, Extra High) → Memory Constraints (Nominal, High, Very High, Extra High)
- Platform Volatility (Low, Nominal, High, Very High) → Target System Volatility, Host System Volatility (Low, High, Very High, Extra High)

PERSONNEL ATTRIBUTES
- Analyst Capability (Very Low, Low, Nominal, High, Very High) → Analyst Capabilities (Very Low, Low, Nominal, High, Very High)

- Programmer Capability (Very Low, Low, Nominal, High, Very High) → Programmer Capabilities (Very Low, Low, Nominal, High, Very High)
- Personnel Continuity → none
- Application Experience (Very Low, Low, Nominal, High, Very High) → Analyst's Application Experience (Very Low, Low+, Low, Nominal, High)
- Platform Experience (Very Low, Low, Nominal, High, Very High) → Development System Experience, Target System Experience (Very Low, Low, Nominal, Very High, Extra High)
- Language and Toolset Experience (Very Low, Low, Nominal, High, Very High) → Programmer's Language Experience (Very Low, Low, Nominal, Very High, Extra High)

PROJECT ATTRIBUTES
- Use of Software Tools (Very Low, Low, Nominal, High, Very High) → Automated Tools Use (Very Low, Low, Nominal, High, Very High)
- Multi-site Development (Very Low, Low, Nominal, High, Very High, Extra High) → Multiple Site Development (Extra High, Very High, High or Nominal, High or Nominal, Nominal, Nominal)
- Required Development Schedule → none 4

1 - The SEER-SEM Process Improvement factor rates the impact of improvement instead of the CMM level
2 - Related SEER-SEM factors include Test Level and Quality Assurance Level, which are also usually driven by reliability requirements

3 - SEER-SEM also has Application Class Complexity to rate at the program level, and other complexity factors for the development environment
4 - Schedule constraints are handled differently in the models
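A Rosetta Stone of this kind is straightforward to mechanize as a lookup table. The sketch below is partial and illustrative: only the Required Software Reliability row is drawn from Table 5, and the remaining factors would be filled in from the full detailed Rosetta Stone.

```python
# Partial, illustrative rating map. Only Required Software Reliability is taken
# from Table 5; factors with no SEER-SEM analog map to None so that the
# knowledge-base defaults are left in place.
COCOMO_TO_SEER = {
    "Required Software Reliability": ("Specification Level - Reliability", {
        "Very Low": "Very Low-",
        "Low": "Low",
        "Nominal": "Nominal",
        "High": "High",
        "Very High": "High+",
    }),
    "Precedentedness": None,   # no SEER-SEM analog
}

def to_seer(cocomo_factor, cocomo_rating):
    """Translate one COCOMO II factor/rating pair into its SEER-SEM equivalent."""
    entry = COCOMO_TO_SEER.get(cocomo_factor)
    if entry is None:
        return None                      # leave the SEER-SEM factor at its default
    seer_factor, ratings = entry
    return seer_factor, ratings[cocomo_rating]

print(to_seer("Required Software Reliability", "Very High"))
# -> ('Specification Level - Reliability', 'High+')
```

The same table-driven approach works in the reverse direction, which is one way the two-way Rosetta Stones discussed in the conclusions could be implemented.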

2.3.2 COCOMO II to PRICE S and True S

Table 2 showed the top-level view of the COCOMO II to True S Rosetta Stone. Because the new product is currently being phased in, the Rosetta Stone will be refined for the next level of True S subfactors and will be provided at a later date. The more complete Rosetta Stone in Table 6 and Table 7 shows the correspondence to the PRICE S model. The factor names shown are being replaced with the modernized terms in True S in order to elaborate the detailed Rosetta Stone between COCOMO II and True S.

Table 6: COCOMO II to PRICE S Rosetta Stone

SCALE DRIVERS
- Precedentedness → none
- Development Flexibility → none
- Architecture/Risk Resolution → none
- Team Cohesion → none
- Process Maturity → none

PRODUCT ATTRIBUTES
- Required Software Reliability → PLTFM (Very Low 0.65, Low 0.8, Nominal 1, High 1.2, Very High 1.4)
- Data Base Size → PROFAC
- Product Complexity → APPL (Very Low 0.86, Low 2.3, Nominal 5.5, High 6.5, Very High 8.5, Extra High)
- Required Reusability → CPLX1 (Nominal +0, High +0.1, Very High +0.3, Extra High +0.5)
- Documentation Match to Lifecycle Needs → none

PLATFORM ATTRIBUTES
- Execution Time Constraint → UTIL - time (Nominal 0.5, High 0.7, Very High 0.85, Extra High 0.95)
- Main Storage Constraint → UTIL - memory (Nominal 0.5, High 0.7, Very High 0.85, Extra High 0.95)
- Platform Volatility → CPLX2 (Low, Nominal +0, High, Very High)

Table 7: COCOMO II to PRICE S Rosetta Stone (Continued)

PERSONNEL ATTRIBUTES
- Analyst Capability → CPLX1 (Very Low +0.1, Low, Nominal +0, High, Very High -0.1)
- Programmer Capability → CPLX1 (Very Low +0.1, Low, Nominal +0, High, Very High -0.1)
- Personnel Continuity → none
- Application Experience → CPLX1 (Very Low +0.1, Low, Nominal +0, High, Very High -0.1)
- Platform Experience → CPLX1 (Very Low +0.1, Low, Nominal +0, High, Very High -0.1)
- Language and Toolset Experience → PROFAC (Very Low +0.1, Low, Nominal +0, High, Very High -0.1)

PROJECT ATTRIBUTES
- Use of Software Tools → CPLX1 (Very Low, Low, Nominal, High, Very High)
- Multi-site Development → None
- Required Development Schedule → Development Start date - mandatory

Another aspect is "normalizing" True S against COCOMO II nominal conditions and matching their diseconomies of scale. A baseline normalization is needed against which factors can be changed to represent the projects already modeled with COCOMO II. Figure 2 shows the normalization between True S and COCOMO II, plotting effort in person-months against size in KSLOC for COCOMO II at nominal settings and for True S at complexity values of 5, 5.5 and 6 (with Operating Specification = 1.0, Organizational Productivity = 1.33 and the corresponding Development Team Complexity setting).

Figure 2: Example of Normalizing True S and COCOMO II

The final determined values that most closely match True S to all-nominal conditions in COCOMO II are listed below:

- Functional Complexity is in the range of 5-6, and a value of 5.5 is suggested
- Organization Productivity = 1.33
- Development Team Complexity = 2.5
- Operational Specification = 1.0

2.3.3 Integrated Detailed Rosetta Stone

An integrated detailed Rosetta Stone for the three models is shown in Table 8 and Table 9. It is being updated for the latest True S model.

17 Table 8: Integrated Rosetta Stone COCOMO II Factor SEER Factor(s) PRICE S Factor(s) SCALE DRIVERS Precedentedness none none Development Flexibility none none Architecture/Risk Resolution none none Team Cohesion none none Process Maturity none 1 none PRODUCT ATTRIBUTES Required Software Reliability Specification Level - Reliability PLTFM Very Low Very Low Low Low 0.8 Nominal Nominal 1 High High 1.2 Very High High+ 1.4 Data Base Size none PROFAC Product Complexity none APPL Very Low 0.86 Low 2.3 Nominal 5.5 High 6.5 Very High 8.5 Extra High Required Reusability Reusability Level Required CPLX1 Nominal Nominal +0 High High +0.1 Very High Very High +0.3 Extra High Extra High +0.5 Documentation Match to Lifecycle Needs none none PLATFORM ATTRIBUTES Execution Time Constraint Time Constraints UTIL - time Nominal Nominal 0.5 High Nominal 0.7 Very High High 0.85 Extra High Very High 0.95 Main Storage Constraint Memory Constraints UTIL - memory Platform Volatility Nominal Nominal 0.5 High High 0.7 Very High Very High 0.85 Extra High Extra High 0.95 Target System Volatility, Host System Volatility CPLX2 Low Low Nominal High +0 High Very High Very High Extra High

18 Table 9: Integrated Rosetta Stone (Continued) PERSONNEL ATTRIBUTES Analyst Capability Analyst Capability CPLX1 Very Low Very Low +0.1 Low Low Nominal Nominal +0 High High Very High Very High -0.1 Programmer Capability Programmer Capability CPLX1 Very Low Very Low +0.1 Low Low Nominal Nominal +0 High High Very High Very High -0.1 Personnel Continuity none none Application Experience Application Experience CPLX1 Very Low Very Low +0.1 Low Low Nominal Low +0 High Nominal Very High High -0.1 Platform Experience Development System Experience, Target System Experience CPLX1 Very Low Very Low +0.1 Low Low Nominal Nominal +0 High Very High Very High Extra High -0.1 Language and Toolset Experience Programmer s Language Experience PROFAC Very Low Very Low +0.1 Low Low Nominal Nominal +0 High Very High Very High Extra High -0.1 PROJECT ATTRIBUTES Use of Software Tools Software Tool Use CPLX1 Very Low Very Low Very Low Low Low Low Nominal Nominal Nominal High High High Very High Very High Very High Multi-site Development Multiple Site Development None Very Low Extra High Low Very High Nominal High or Nominal High High or Nominal Very High Nominal Extra High Nominal Required Development Schedule none 2 Development Start date - mandatory 18

2.4 Phases and Activities

Reconciliation of the effort work breakdown structures (WBS) is necessary for valid comparison between models. If estimates are to be compared, they need to cover the same activities. The common estimate baseline consists of the elaboration and construction phases for software activities (per the COCOMO II default), as shown in Figure 3. Additionally, the NASA 94 data came in the COCOMO format and is assumed to cover those activities; hence a model that estimates more must have some activities subtracted out for a valid comparison. The correspondence of the common baseline and the core effort coverage of the different models is also shown in Figure 3.

Figure 3: Model Phases and Activities Coverage. The figure maps each model's phases and activities against the common estimate baseline: COCOMO II phases (Inception, Elaboration, Construction, Transition, Maintenance) and activities (Management, Environment/CM, Requirements, Design, Implementation, Assessment, Deployment); True S activities (Design, Programming, Data, SEPGM, Q/A, CFM) across phases from Concept and System Requirements through Code/Unit Test, Integration & Test, Hardware/Software Integration, Field Test, System Integration & Test and Maintenance; and SEER activities (Management, SW Requirements, Design, Code, Data Prep, Test, CM, QA) across phases from System Requirements Design through System Integrate Thru OT&E. The legend distinguishes core effort coverage per model, the common estimate baseline, effort add-ons as a percentage of core coverage, and effort add-ons with a revised model.

Due to these differences, the SEER-SEM and True S estimates were refined by subtracting the activities described below.

True S

True S provides a two-tier tree of estimates. The lower tier contains core engineering effort only, as shown in Figure 4. The figure shows which elements should be subtracted for the common estimate baseline. The upper tier is a superset line item that includes systems engineering and program management (SEPM) activities. A table of its outputs is shown in Table 10, with the corresponding items to subtract for the common baseline set of activities.

Figure 4: Sample True S Engineering Estimate with Effort Items to Subtract
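The reconciliation itself is simple arithmetic: line items outside the common elaboration-and-construction baseline are subtracted from the tool's total. The sketch below illustrates the idea; the item names and person-month values are hypothetical, and the actual items to remove are those marked in Figure 4 and Table 10.

```python
# Hypothetical line items in person-months; illustrative only, not actual True S output.
estimate_items = {
    "Software Requirements Analysis": 14.0,
    "Software Design": 22.0,
    "Code and Unit Test": 30.0,
    "Software Integration and Test": 18.0,
    "Perform HW/SW Integration and Test": 26.1,
    "Software Maintenance": 40.0,
    "Perform System Qualification Test": 9.0,
}

# Activities assumed to lie outside the common elaboration/construction baseline.
OUTSIDE_COMMON_BASELINE = {
    "Perform HW/SW Integration and Test",
    "Software Maintenance",
    "Perform System Qualification Test",
}

total = sum(estimate_items.values())
subtracted = sum(v for k, v in estimate_items.items() if k in OUTSIDE_COMMON_BASELINE)
print(f"total = {total:.1f} PM, subtracted = {subtracted:.1f} PM, "
      f"revised total = {total - subtracted:.1f} PM")
```

The same subtraction is applied to the SEER-SEM activity breakdown shown later in Figure 5, so that all three models are compared over the same WBS coverage.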

Table 10: Sample True S SEPM-Level Estimate with Effort Items to Subtract

Labor Requirement Table: Engine Control - [Software Assembly] \ Labor Requirements in Months

- Software Maintenance
- Manage Project
- Perform Configuration Management
- Perform Joint Technical Reviews
- Perform Quality Assurance
- Plan and Oversee
- Plan Software Development
- Write Documentation
- Analyze System Requirements
- Design System (6.4)
- Perform Assembly Integration and Test (11)
- Software Requirements Analysis
- Software Design
- Code and Unit Test
- Software Integration and Test
- Software Qualification Test
- Perform HW/SW Integration and Test (26.1)
- Perform Software Product Evaluations
- Perform System Qualification Test

Total subtracted effort =
Revised total =

SEER

Figure 5 shows a summarized SEER-SEM estimate and the items subtracted out for this analysis to make equivalent estimates.

Figure 5: Sample SEER-SEM Estimate with Effort Items to Subtract. The estimate breaks out effort by activity (Management, SW Requirements, Design, Code, Data Prep, Test, CM, QA) across the phases System Requirements Design, S/W Requirements Analysis, Preliminary Design, Detailed Design, Code and Unit Test, Component Integrate and Test, Program Test, and System Integrate Thru OT&E, with Development Total, Maintenance, and Life Cycle Total rows.

2.5 Critical Domain Factors

The vendor models provide elaborations of reliability and complexity factors beyond what COCOMO II provides. These are critical domain factors of relevance to NASA flight projects. Table 11 shows how the models address them.

Table 11: Vendor Elaborations of Critical Domain Factors

Required Software Reliability (COCOMO II)
- SEER-SEM 1: Specification Level - Reliability; Test Level; Quality Assurance Level
- True S: Operating Specification Level (platform and environment settings for required reliability, portability, structuring and documentation)

Product Complexity (COCOMO II)
- SEER-SEM 1: Complexity (Staffing); Language Type (Complexity); Host Development System Complexity; Application Class Complexity
- True S: Functional Complexity; Application Type; Language; Object-Oriented

1 - SEER-SEM factors are supplemented with, and may be impacted via, knowledge base settings for Platform, Application, Acquisition method, Development method, Development standard, Class, and Component type (COTS only).

SEER-SEM has an extensive set of knowledge base choices. Table 12 shows the subset of knowledge bases that may be applicable to (portions of) NASA flight projects.

Table 12: SEER-SEM Knowledge Bases Possibly Relevant to NASA Projects

Platform Knowledge Bases
- Avionics
- Ground System Non Critical
- Ground-Based Mission Critical
- Manned Space
- Unmanned Space

Application Knowledge Bases
- Command/Control
- Communications
- Device Driver
- Diagnostics
- Embedded Electronics/Appliance
- Flight Systems
- Graphical User Interface (GUI)
- Mathematical & Complex Algorithm
- OS/Executive
- Process Control
- Radar
- Robotics
- Signal Processing
- Simulation

Acquisition Method Knowledge Bases
- Code Generator
- Concept Reuse
- Full Design Reuse
- Integrate As-Is
- Integrate with Configuration
- Language Conversion, Automated
- Language Conversion, Manual
- Maintenance, Complete
- Maintenance, Sustaining
- Modification, Major
- Modification, Minor
- New Development
- Redocumentation
- Reengineering, Major
- Reengineering, Minor
- Rehost, Major
- Rehost, Minor
- Salvage Code
- Subsequent Incremental Build

Development Method Knowledge Bases
- Ada Development
- Ada Development with Incremental

- Ada Full Use
- Ada Object Oriented
- CASE Full
- Code Generation
- Evolutionary
- Incremental
- Off-the-Shelf Integration
- OO-All
- OOD-OOP
- Prototype
- Purchase
- RUP Full
- RUP Lite
- Spiral
- Trusted System Level 3
- Waterfall

Development Standard Knowledge Bases
- with IV&V [Independent Verification & Validation]
- 2167A
- 2167A Full Set
- 2167A Minimal Set
- ANSI J-STD-016 Full
- ANSI J-STD-016 Min
- ANSI J-STD-016 Nom
- FAA
- IEEE
- IEEE Full
- IEEE/EIA
- ISO 9001

In True S the Operating Specification factor describes the intended operating environment and defines the degree of required reliability, portability, structuring and documentation. Table 13 lists the primary categories that may apply to aspects of NASA flight projects (not shown is the category for Commercial Proprietary Software).

Table 13: True S Operating Specification Choices for NASA Projects

Commercial Production Software
- Nominal Reliability
- High Reliability
- Very High Reliability (Airborne)

Military Software
- Ground
- Mobile
- Airborne

Space Software
- Unmanned
- Manned

The last two choices, for Unmanned and Manned Space Software, would normally be the recommended selections for NASA projects.

3. Model Analysis Against NASA 94 Project Data

The research team was provided the NASA 94 set of projects. Of the 95 projects, only 13 are listed as flight projects. All the remaining analysis is predicated on those 13 projects alone, except where noted otherwise (COCOMO II was also applied to five ground embedded projects). The data came in the COCOMO 81 format. They were converted to COCOMO II per the guidelines in [Reifer et al. 1999] and further converted to SEER-SEM or True S factors per the Rosetta Stones in this report. See Appendix B for examples of the original and transformed data.

The database covers flight and ground projects, and some ground projects are embedded. An analysis of the critical factor distributions for reliability and complexity indicates that flight projects, as expected, exhibit patterns of both higher reliability and complexity. Figure 6 and Figure 7 show the distributions of these factors in the database. These spreads are as expected, which also supports the contention that the projects provided in the COCOMO 81 format are well conditioned: they are internally consistent and standardized in their reporting of these factors.

Figure 6: Reliability Distribution. Figure 7: Complexity Distribution. Both figures plot the percent of projects at each rating (Very Low through Extra High) for the Flight, Ground Embedded and Ground Other project subgroups.

3.1 Analysis Method

The process flow in Figure 8 shows the sequence of steps used in this analysis. Only the first pass through is described in this overall section. In the next iterations with refined and/or additional data, not all of the steps will be performed. For example, the need for calibration was amply demonstrated in the first round of analysis, and the uncalibrated model runs are not necessary in subsequent data iterations. The sequences of the vendor tool runs will also vary slightly to reflect their recommended best practices.

Figure 8: Model Analysis Flow. Starting from the NASA 94 database, the COCOMO 81 to COCOMO II Rosetta Stone is applied and relevant domain projects are selected. The COCOMO II branch runs an uncalibrated analysis, outlier analysis, calibration (via Costar), and a calibrated analysis. The SEER-SEM branch applies the COCOMO II to SEER Rosetta Stone, runs an uncalibrated analysis with additional factors defaulted, applies SEER knowledge base settings, and then runs analyses with knowledge base settings and with calibration and refined settings. The True S branch applies the COCOMO II to True S Rosetta Stone, sets additional True S factors for the application domain, and runs the analysis with application domain settings. Results feed a consolidated analysis, which is updated as additional data is received; not all steps are performed on iterations 2-n.

The model performances were evaluated with standard figures of merit per the equations below, based on comparisons between actual and estimated effort for n projects in a dataset:

Relative Error (RE) = (Estimated Effort - Actual Effort) / Actual Effort
Magnitude of Relative Error (MRE) = |Estimated Effort - Actual Effort| / Actual Effort
Mean Magnitude of Relative Error (MMRE) = (Σ MRE) / n
Root Mean Square (RMS) = ((1/n) Σ (Estimated Effort - Actual Effort)^2)^(1/2)
Prediction Level PRED(L) = k/n, where k = the number of projects in a set of n projects whose MRE <= L

Each run consists of a logical group of projects for which estimates are compared to actuals using the above measures. The progressive effects of calibrations and other adjustments are evaluated this way. For each run, detailed outputs such as those shown in Figure 9 are calculated.
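These figures of merit are easy to compute for any model run. The following sketch implements them exactly as defined above; the inputs are paired lists of estimated and actual effort values, and the example values are made up.

```python
def accuracy_metrics(estimated, actual, pred_levels=(0.10, 0.20, 0.30, 0.40)):
    """Figures of merit defined above, for paired estimated and actual efforts."""
    n = len(actual)
    mre = [abs(e - a) / a for e, a in zip(estimated, actual)]   # magnitude of relative error
    mmre = sum(mre) / n
    rms = (sum((e - a) ** 2 for e, a in zip(estimated, actual)) / n) ** 0.5
    pred = {"PRED(%d)" % int(level * 100): sum(1 for m in mre if m <= level) / n
            for level in pred_levels}
    return {"MMRE": mmre, "RMS": rms, **pred}

# Made-up example with three projects (person-months)
print(accuracy_metrics(estimated=[120, 300, 55], actual=[100, 320, 80]))
```

Each run summary in the figures that follow reports exactly these quantities for the relevant project group.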

To help interpret some of the results, we also looked at the error distributions, such as the one shown in Figure 10, for any biases. All the numeric estimated values for each model run are contained in the calibration analysis spreadsheets for further investigation. Examples of the estimated and actual values are also shown in Appendix A.

Dataset Name: NASA 94
Category: Avionics + Science
Mode: Embedded
Number of Projects = 12
A = 5.80

Calibrated Effort Estimates vs. Actuals (estimated vs. actual effort in person-months)

Effort Prediction Summary:
             Uncalibrated   Calibrated
  MMRE           43%           29%
  RMS
  PRED(10)        9%           36%
  PRED(20)        9%           45%
  PRED(30)       18%           64%
  PRED(40)       36%           73%

Figure 9: Sample Summary Results from Model Runs

Figure 10: Sample Error Distribution (histogram of the number of projects by relative estimation error, in percent)

3.2 Outlier Analysis

Figure 11 shows the productivities of the flight projects and Figure 12 shows their sizes. In the figures, projects 81 through 100 are the Avionics Embedded projects, and the last two projects, shown dashed in each figure, are the Science Embedded projects. Investigation of the project data and the calibration results indicates a single particular outlier, project #100. As shown in Figure 11, its productivity is out of bounds by orders of magnitude with respect to the rest of the productivity spread. Figure 12 shows it is the second smallest project (though the other small projects on its scale are in line with the rest of the productivity distribution). The data reporting is potentially suspect, or it may be indicative of a single individual or extremely small team. At very small project sizes the effects of individuals tend to predominate, and this could be an extremely productive and unencumbered individual or small team. On modern NASA projects with increased complexity and size, it is highly unlikely that a single individual will create an entire CSCI.

Figure 11: Flight Project Productivities, in SLOC/Person-Month, by Project # (Avionics and Science Embedded projects)

Figure 12: Flight Project Sizes, in KSLOC, by Project # (Avionics and Science Embedded projects)

Because of this large disparity in productivity, and for the other reasons above, subsequent analyses are performed without the outlier in the dataset, except where noted otherwise. It does not seem to be representative of projects we wish to estimate.

For additional reference, Appendix A lists all results with the outlier included.

The dataset lists size as PSLOC. If this is physical SLOC, then the size is probably overstated with respect to the COCOMO II standard of logical source statements. USC has not been able to determine the exact unit of measurement or obtain further context information, such as the language. If physical lines were counted, then a conversion factor can be used to estimate logical statements. Without that information, the current calibrations are predicated on the reported size units (i.e. the A constant is relevant when size is estimated using the same measure as the reported results).

3.3 COCOMO

The results of the COCOMO II calibrations are shown in Figure 13, Figure 14, Figure 15 and Figure 16. We calibrated for each embedded domain and for the combined embedded flight domains. See Appendix A for the corresponding results when the outlier project is included. In order to expand the analysis space, we also assessed COCOMO against embedded ground projects (the closest domain neighbors), as seen in Figure 16. In that case the calibrated coefficient turned out to be less than the default COCOMO value. This result can be further explored with additional data or clarifications against the current dataset.
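One common way to calibrate the linear coefficient A, holding the exponent and effort multipliers fixed, is to solve for A in log space so that the mean log residual is zero. The sketch below shows that approach with hypothetical project values; it does not reproduce the Costar calibration procedure actually used in this study.

```python
import math

def calibrate_A(actual_efforts, size_b_em_terms):
    """Fit A in Effort = A * Size^B * EM so the mean log residual is zero.

    size_b_em_terms[i] holds Size_i^B * EM_i for project i, with the exponent and
    effort multipliers already evaluated and held fixed.
    """
    logs = [math.log(act / term) for act, term in zip(actual_efforts, size_b_em_terms)]
    return math.exp(sum(logs) / len(logs))

# Hypothetical projects: actual person-months and their Size^B * EM terms
print(calibrate_A([620.0, 150.0, 890.0], [102.0, 26.0, 148.0]))
```

The calibrated coefficients reported in the figures below (e.g. A = 6.13 for embedded avionics) are specific to the PSLOC size units of this dataset, per the caveat above.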

Dataset Name: NASA 94
Category: Avionics
Mode: Embedded
Number of Projects = 10
A = 6.13

Calibrated Effort Estimates vs. Actuals (estimated vs. actual effort in person-months)

Effort Prediction Summary:
             Uncalibrated   Calibrated
  MMRE           47%           36%
  RMS
  PRED(10)       10%           40%
  PRED(20)       10%           40%
  PRED(30)       20%           50%
  PRED(40)       40%           70%

Figure 13: Summary of COCOMO Predictability for Embedded Avionics Projects

Dataset Name: NASA 94
Category: Avionics + Science
Mode: Embedded
Number of Projects = 12
A = 5.80

Calibrated Effort Estimates vs. Actuals (estimated vs. actual effort in person-months)

Effort Prediction Summary:
             Uncalibrated   Calibrated
  MMRE           43%           29%
  RMS
  PRED(10)        9%           36%
  PRED(20)        9%           45%
  PRED(30)       18%           64%
  PRED(40)       36%           73%

Figure 14: Summary of COCOMO Predictability for Embedded Flight Projects (Avionics + Science)

Dataset Name: NASA 94
Category: Science
Mode: Embedded
Number of Projects = 2
A = 4.38

Calibrated Effort Estimates vs. Actuals (estimated vs. actual effort in person-months)

Effort Prediction Summary:
             Uncalibrated   Calibrated
  MMRE           31%           25%
  RMS
  PRED(10)        0%            0%
  PRED(20)       50%            0%
  PRED(30)       50%          100%
  PRED(40)       50%          100%

Figure 15: Summary of COCOMO Predictability for Embedded Science Projects

Dataset Name: NASA 94
Category: Ground (Avionics Monitoring + Operating System)
Mode: Embedded
Number of Projects = 5
A = 1.82

Calibrated Effort Estimates vs. Actuals (estimated vs. actual effort in person-months)

Effort Prediction Summary:
             Uncalibrated   Calibrated
  MMRE           96%           57%
  RMS
  PRED(10)       40%           20%
  PRED(20)       40%           20%
  PRED(30)       60%           40%
  PRED(40)       60%           80%

Figure 16: Summary of COCOMO Predictability for Embedded Ground Projects

3.4 SEER

The first SEER-SEM runs, for the uncalibrated and initial knowledge base settings, were a blind study in which actuals were not used. The Rosetta Stone was used to map to SEER-SEM inputs for the uncalibrated run; no further factors were touched. In the second round, USC chose three knowledge base settings without extensive knowledge of SEER-SEM. The previous COCOMO-derived factors overrode any conflicts in the settings. In the most recent iteration, SEER-SEM experts made further settings in the knowledge bases, with use of actuals for calibration, and including the outlier project.

Figure 17 summarizes the successive SEER-SEM predictability (with the outlier) and shows the graph for the last iteration by SEER-SEM experts. A summary of the first two runs for avionics only, without the outlier, is shown in Table 14.

SEER Effort Estimates vs. Actuals, Calibrated and Adjusted (50%) - All Flight Projects (estimated vs. actual effort in person-months)

             Uncalibrated      w/ Initial KB Settings   Calibrated and Adjusted
             (Avionics Only)   (Avionics Only)          (All Flight)
  MMRE           99%               122%                     54.0%
  RMS
  PRED(10)        0%                18%                      0%
  PRED(20)        0%                45%                     31%
  PRED(30)        0%                45%                     38%
  PRED(40)       18%                55%                     54%

Figure 17: Summary of SEER-SEM Predictability (with Outlier)

Table 14: Summary of SEER-SEM Predictability for Avionics Only (without Outlier)

             Uncalibrated      w/ Initial KB Settings
             (Avionics Only)   (Avionics Only)
  MMRE           53%               39%
  RMS
  PRED(10)        0%               20%
  PRED(20)        0%               50%
  PRED(30)        0%               50%
  PRED(40)       20%               60%

3.5 True S

The predictability summary for True S is shown in Figure 18. These results are for the first round of applying an expert-determined mapping to True S without calibration. Table 15 shows the True S project settings. Additional calibrations are still being done with True S, and those results will be provided later.

Dataset Name: NASA 94
Category: Avionics + Science
Mode: Embedded
Number of Projects = 12

True S Effort Estimates vs. Actuals, Avionics + Science Embedded - w/o Outlier (estimated vs. actual effort in person-months)

Effort Prediction Summary:
  MMRE        49%
  RMS
  PRED(10)    17%
  PRED(20)    42%
  PRED(30)    50%
  PRED(40)    58%

Figure 18: Summary of True S Predictability for Flight Projects

39 Table 15: True S Project Settings Organizational Productivity Project Constraints Development Team Complexity Hardware Platform Availability (Assembly Input) Analysts Programmers Team Con Functional Design for Product Platform Exp. Multiple Site Start and Project Operational Specification IPT Use CMM Complexity Reuse Fam Fam Lang. Development End Date SLOC 81 Unmanned Spac NA IPT Casual and Team Some Success CMM nominal na 0.95 Unstable Capable Capable 5-10% 2-5 yrs <2 yrs <2 yrs nominal all Not 32, Unmanned Spac NA IPT Casual and Team Some Success CMM nominal na 0.88 Unstable Highly Cap Highly Cap 5-10% 5-10 yrs 5-10 yrs 5-10 yrs nominal set Enough 53, Unmanned Spac NA IPT Casual and Team Some Success CMM nominal na 0.88 Very Stable Expert Expert 5-10% > 10 yrs novice novice high at Time 41, Unmanned Spac NA IPT Casual and Team Some Success CMM nominal na 0.88 Very Stable Expert Expert 5-10% > 10 yrs novice novice high Entire for 24, Unmanned Spac NA IPT Casual and Team Some Success CMM nominal na 0.95 Mod. Stable Highly Cap Highly Cap 5-10% 5-10 yrs 5-10 yrs 5-10 yrs nominal Team Analysis 165, Unmanned Spac NA IPT Casual and Team Some Success CMM nominal na 0.95 Mod. Stable Highly Cap Highly Cap 5-10% 5-10 yrs 5-10 yrs 5-10 yrs nominal in of 70, Unmanned Spac NA IPT Casual and Team Some Success CMM nominal na 0.95 Mod. Stable Highly Cap Highly Cap 5-10% 5-10 yrs 5-10 yrs 5-10 yrs nominal Same Schedule 50, Unmanned Spac NA IPT Casual and Team Some Success CMM nominal na 0.88 Very Stable Highly Cap Capable 5-10% <2 yrs novice <2 yrs high Place 7, Unmanned Spac NA IPT Casual and Team Some Success CMM nominal na 0.95 Mod. Stable Highly Cap Highly Cap 5-10% 5-10 yrs 5-10 yrs 5-10 yrs nominal 233, Unmanned Spac NA IPT Casual and Team Some Success CMM nominal na 0.82 Unstable Capable Capable 5-10% 2-5 yrs <2 yrs <2 yrs nominal 16, Unmanned Spac NA IPT Casual and Team Some Success CMM nominal na 0.82 Unstable Capable Capable 5-10% 2-5 yrs <2 yrs <2 yrs nominal 6, Military Airpborne NA IPT Casual and Team Some Success CMM nominal na 0.95 Mod. Stable Highly Cap Highly Cap 5-10% 5-10 yrs 5-10 yrs 5-10 yrs nominal 65, Military Airpborne NA IPT Casual and Team Some Success CMM nominal na 0.82 Mod. Stable Capable Capable 5-10% 2-5 yrs 2-5 yrs 2-5 yrs nominal 3,000 Design, Code and Test Tools 39

Some data uncertainties may impact True S like the other models. The True S results may need adjustment for the PSLOC size definition. There are also unknowns about the Operational Specification used. Some of the projects may be manned space, but without that knowledge subsystems cannot be assigned to manned or unmanned. The level of specification may also not be consistent and homogeneous across a spacecraft.

4. Effects of Calibration and Knowledge Bases

The effects of calibration, knowledge base settings and other adjustments are evaluated with the respective models.

4.1 COCOMO II

The improvement effects of calibration for the different project subgroups are clearly seen in the performance measures. It is evident that MMRE improves in Figure 19 for all cases and that PRED(40) also improves in each case, as shown in Figure 20.

Figure 19: COCOMO II MMRE Calibration Effect (uncalibrated vs. calibrated MMRE for the Flight Avionics Embedded, Flight Science Embedded, Flight (All) and Ground Embedded project types)

Figure 20: COCOMO II PRED(40) Calibration Effect (uncalibrated vs. calibrated PRED(40) for the Flight Avionics Embedded, Flight Science Embedded, Flight (All) and Ground Embedded project types)

A summary of the COCOMO II calibrations for the different sub-groups is shown in Table 16. The same information when using the outlier project is provided in Appendix A.

Table 16: Summary of COCOMO II Linear Effort Calibrations

  Project Group (# of Projects)      Calibrated Coefficient
  Flight Avionics Embedded (10)      A = 6.13
  Flight Science Embedded (2)        A = 4.38
  Flight (All) (12)                  A = 5.80
  Ground Embedded (5)                A = 1.82

4.2 SEER

Similar progressive improvement trends were exhibited with the SEER-SEM model runs, as shown in Figure 21 and Figure 22. The first run used the COCOMO parameter settings translated into SEER-SEM with no further adjustments, the uncalibrated case. The next progressive improvement came from knowledge base settings chosen by USC personnel without extensive knowledge of SEER-SEM. This action considerably improved the model performance. The last set was calibrated by SEER-SEM personnel with further adjustments to the knowledge base settings and project-specific adjustments.

Figure 21: SEER-SEM MMRE Progressive Adjustment Effects (uncalibrated, initial knowledge base settings, and calibrated and project-adjusted with outlier, for the Flight Avionics Embedded and Flight (All) project types)

Figure 22: SEER-SEM PRED(40) Progressive Adjustment Effects (uncalibrated, initial knowledge base settings, and calibrated and project-adjusted with outlier, for the Flight Avionics Embedded and Flight (All) project types)

One conclusion from the SEER-SEM analysis is that although statistical calibration helps, it is very important to properly characterize the technical and programmatic characteristics of the software being estimated. The SEER-SEM results, both uncalibrated and calibrated, improved significantly with more accurate information about platform, application, effective size and other parameters. It is suspected that performance could be even better, without calibration, with a complete technical description of the software modules and for a "full-up" estimate.

4.3 True S

In the True S model the factor Application Type serves as a domain setting, similar to the SEER-SEM knowledge base settings. By setting Application Type, other factors are affected, including:

- Functional Complexity
- Operating Specification
- Development Team Productivity
- Sizing-related parameters

True S was handled somewhat differently than SEER-SEM. Instead of choosing a particular application type to preset these factors, they were independently adjusted to represent the detailed project characteristics. The factor adjustments used for input in the True S runs are shown in Table 15.

4.4 Model Performance Summaries

A summary of the model performances in their last runs (COCOMO calibrated, SEER-SEM with refined knowledge base settings, True S with application domain settings) is shown in Table 17. A scatterplot of the effort estimates vs. actuals is shown in Figure 23.

Table 17: Model Performance Summaries

             COCOMO II   SEER-SEM 1   True S
  MMRE          29%         39%         49%
  RMS
  PRED(10)      36%         20%         17%
  PRED(20)      45%         50%         42%
  PRED(30)      64%         50%         50%
  PRED(40)      73%         60%         58%

1 - Data for two science projects not available

Figure 23: Effort Estimates vs. Actuals for All Models (scatterplot of estimated vs. actual effort in person-months for COCOMO II, SEER-SEM and True S)

4.5 Additional Analysis Runs

This analysis will be repeated as more data is received on the research project. As described, only a subset of the steps will be performed on the next iterations of analysis. The process may also vary if project data is received in multiple model formats, e.g. using the full set of SEER-SEM or True S parameters and bypassing the Rosetta Stone translations from COCOMO II.

A minor revision of the first round of results will be done first. This is necessary since Galorath has provided refined data from the NASA 94 dataset. In particular, the equivalent size of three of the flight projects is lower than the values used in the initial runs. This largely explains why Galorath was able to include the apparent outlier project in their final SEER-SEM run. The other model analyses will be re-performed, but a quick assessment of the differences indicates the overall results will vary negligibly. More importantly, the root cause of the data discrepancy will be investigated. Further impacts on the overall results and conclusions, if any, will be reported then.

A data collection initiative is also underway at NASA to collect more modern data for this project. On the NASA MOCA research grant, USC has received actuals on the recent Shuttle Abort and Flight Management (SAFM) project, and it will be incorporated into the analysis [Madachy-Boehm 2006].


5. Conclusions and Future Work

This paper has presented an overview of software cost models with a focus on critical flight software. The primary cost models were assessed against a relevant database of NASA projects and performed well, particularly given the absence of contextual data and potential flaws in the factor transformations. When using the NASA 94 dataset, calibration and knowledge base judgments for the domain improved all model performance versus using default parameter values.

This study was performed by persons highly familiar with COCOMO but not necessarily with SEER-SEM or True S. The vendors of these models provided minor support, but do not yet certify or sanction the data or information contained in this report. Specific vendor concerns include:

- the study was limited to a COCOMO viewpoint only
- the current Rosetta Stones need review and may be weak translators from the original data
- results are not indicative of model performance due to ignored parameters
- risk and uncertainty were ground-ruled out
- data sanity checking is needed.

NASA flight projects are typified by extremely high reliability and complexity. Characterizations of the database projects in terms of these important descriptors provided useful and interesting results. Distributions of factor ratings for complexity and reliability showed relevant patterns across the subgroups in the database. They also helped to confirm that the COCOMO factor ratings were done consistently across the projects and adhere to the COCOMO rating criteria. All models support effort variance due to these factors, but the vendor models provide additional elaborations of these factors and domain-specific defaults.

Within the COCOMO II scope and subset of analyses, it can be concluded that the overall embedded flight domain calibration with this data is a linear coefficient A of approximately 6. The value is slightly less for embedded science vs. avionics projects, but the sample size for science projects was extremely small, and more data should be incorporated for a more robust and credible calibration.

Successive experiments with the SEER-SEM model illustrated that the model performance measures markedly improved when incorporating knowledge base information for the domains. Simple educated guesses on the knowledge base choices, made without extensive SEER-SEM knowledge, produced far better estimates than strict uncalibrated estimates.

The initial uncalibrated runs from COCOMO II and SEER-SEM both underestimated the projects by approximately 50% overall. That is also reflected in the calibrated COCOMO II coefficient being about twice the default (A of approximately 6 vs. A = 2.96).

For all models (COCOMO, SEER, True S), calibration against the different subgroups exhibited nearly equivalent trends for embedded flight projects. The model performance measures for either individual flight groups (avionics or science) or combined (avionics plus science) were about the same, and the improvement trends between uncalibrated and calibrated were identical when the outlying project was excluded.

Major future thrusts include refining and expanding the project dataset, and updating the COCOMO model(s) for flight applications. The calibrations are all derived with respect to the reported size, termed PSLOC (likely to be physical lines). Further investigations to directly capture logical source statements, or through the use of conversion factors, may yield different calibrations for using COCOMO II with its current size definitions.

The vendor models provide more granular factors for the overall effects captured in the COCOMO II Complexity (CPLX) factor. One consideration is to elaborate the current COCOMO definition with more levels of detail specifically interpreted for critical flight project applications. The COCOMO II Required Software Reliability (RELY) factor is also being elaborated for high dependability and security applications, and that research will be brought to bear on the current effort.

The COCOMO II and COQUALMO models are being updated on this project for new technologies, IV&V techniques and new mission requirements (e.g. increased reliability for security and safety). Additional project data and Delphi studies are both being used. The revised models will undergo the same analysis and be re-calibrated for flight projects with the additional data.

One option to expand COCOMO II is to embed a knowledge-based capability into the model specifically for NASA projects. An example could be based on the observed reliability and complexity factor distributions: a sub-domain is selected by the user (flight science, flight avionics, ground embedded, etc.) and factor settings are defaulted in the model.

This study has been helpful in reducing sources of misinterpretation across the models, and the current set of Rosetta Stones and other model comparisons have provided a usable framework for analysis, but considerably more should be done, including:

- developing two-way and/or multiple-way Rosetta Stones
- explicit identification of residual sources of uncertainty across models and their estimates not fully addressable by Rosetta Stones
- factors unique to some models but not others
- developing translations between the size input parameters
- addressing COTS estimation and sizing
- many-to-many factor mappings
- partial factor-to-factor mappings
- similar factors that affect estimates in different ways: linear, multiplicative, exponential, other
- imperfections in data: subjective rating scales, code counting, counting of other size factors, effort/schedule counting, endpoint definitions and interpretations, WBS element definitions and interpretations.

The study participants welcome sponsorship of further joint efforts to pin down sources of uncertainty, and to more explicitly identify the limits to comparing estimates across models.

A more rigorous review of the detailed cost factor Rosetta Stones should be completed. Some of the remaining work rests with the vendors: they will be further reviewing their specific sections and clarifying the USC interpretation of their work breakdown structures, factor interpretations and other aspects. Additional approaches for calibration are also being evaluated. Remaining work on the Rosetta Stones includes elaborating the detailed Rosetta Stone for True S, and rigorous review of all the top-level and detailed Rosetta Stones.

The analysis process can also be improved on several fronts. The recommended sequence for vendor tool usage is to set knowledge bases first, before setting the COCOMO translation parameters. It is also desirable to capture estimate inputs in all three model formats, and to try different translation directionalities. This analysis has also identified additional information on the projects that could be useful. The vendors are involved in this aspect, and the model analyses are likely to be re-iterated for several reasons, including additional or refined data assumptions.

In practice no one model should be preferred over all others. The key to arriving at sound estimates is to use a variety of methods and tools and then investigate the reasons why the estimates provided by one might differ significantly from those provided by another. If the practitioner can explain such differences to a reasonable level of satisfaction, then it is likely that he or she has a good grasp of the factors driving the costs of the project at hand, and will be better equipped to support the project planning and control functions performed by management.

Future work involves repeating the analysis with updated calibrations, revised domain settings, improved models and new data. It is highly desirable to incorporate more recent NASA project data in the cost model analyses. The MOCA project collected actuals on the SAFM project; more data is being solicited, and all of it will be used to update the analysis and support research demands for current data. Other data concerns include the units of size measurement in the NASA 94 dataset, which should be investigated for the reasons previously stated in the analyses. With modern and more comprehensive data, COCOMO II and the other models can be further improved and tailored as necessary for NASA project usage.

5.1 References

[Boehm 1981] Boehm B., Software Engineering Economics, Englewood Cliffs, NJ, Prentice-Hall, 1981

[Boehm et al. 2000] Boehm B., Abts C., Brown W., Chulani S., Clark B., Horowitz E., Madachy R., Reifer D., Steece B., Software Cost Estimation with COCOMO II, Prentice-Hall, 2000

[Boehm et al. 2000b] Boehm B., Abts C., Chulani S., Software Development Cost Estimation Approaches - A Survey, USC-CSE, 2000

[Boehm et al. 2004] Boehm B., Bhuta J., Garlan D., Gradman E., Huang L., Lam A., Madachy R., Medvidovic N., Meyer K., Meyers S., Perez G., Reinholtz KL., Roshandel R., Rouquette N., Using Empirical Testbeds to Accelerate Technology Maturity and Transition: The SCRover Experience, Proceedings of the 2004 International Symposium on Empirical Software Engineering, IEEE Computer Society, 2004

[Galorath 2005] Galorath Inc., SEER-SEM User Manual, 2005

[Galorath-Evans 2006] Galorath D., Evans M., Software Sizing, Estimation, and Risk Management, Auerbach Publications, 2006

[Jensen 1983] Jensen R., An Improved Macrolevel Software Development Resource Estimation Model, Proceedings of the 5th ISPA Conference, 1983

[Lum et al. 2001] Lum K., Powell J., Hihn J., Validation of Spacecraft Software Cost Estimation Models for Flight and Ground Systems, JPL Report, 2001

[Madachy 1997] Madachy R., Heuristic Risk Assessment Using Cost Factors, IEEE Software, May 1997

[Madachy-Boehm 2006] Madachy R., Boehm B., A Model of Options and Costs for Reliable Autonomy (MOCA) Final Report, report submitted to NASA for USRA contract #4481, 2006

[Park 1988] Park R., The Central Equations of the PRICE Software Cost Model, COCOMO User's Group Meeting, 1988

[PRICE 2005] PRICE Systems, True S User Manual, 2005

[Reifer et al. 1999] Reifer D., Boehm B., Chulani S., The Rosetta Stone - Making COCOMO 81 Estimates Work with COCOMO II, Crosstalk, 1999

[USC-CSE 2006] USC Center for Software Engineering, Model Comparison Report, Report to NASA AMES, Draft Version, July 2006

6. Acknowledgements

This work would not have been possible without the contributions of other colleagues and generous organizations. We particularly thank Galorath Inc. and PRICE Systems for providing us with their tool information and people. Thanks are due to all the people mentioned below.

Tim Hohmann at Galorath Inc. was our primary technical contact for SEER-SEM support. Additional assistance and support from Galorath came from Dan Galorath, Karen McRitchie and Bob Hunt. David Seaver was our primary contact and provided technical support from PRICE Systems, and James Otte also provided early assistance. Jairus Hihn and Sherry Stukes from NASA JPL supported this analysis.

Dan Ligett from Softstar Systems graciously provided a calibration spreadsheet that was modified for this research.

7. Appendix A: Model Analysis Results with Outlier Project

7.1 COCOMO II

This section shows the detailed COCOMO II uncalibrated and calibrated model results, including the flight project groups with the outlier project. A summary of the COCOMO II calibrations when considering the outlier project is given in Table 18. The Ground Embedded calibration is not affected by the outlier.

Table 18: COCOMO II Calibration Summary with Outlier Project

Project Group (# of Projects)     Calibrated Coefficient
Flight Avionics Embedded (11)     A = 4.86
Flight Science Embedded (2)       A = 4.38
Flight (All) (13)                 A = 4.78
Ground Embedded (5)               A = 1.82

Figure 24, Figure 25 and Figure 26 show the detailed results of the COCOMO II runs.
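The effort prediction summaries in Figures 24 through 27 report MMRE and PRED(N). For reference, the short Python sketch below shows the standard definitions of these accuracy statistics; the sample actuals and estimates in it are illustrative, not values from the figures.

# Standard accuracy statistics used in the effort prediction summaries.
# MMRE = mean magnitude of relative error; PRED(N) = fraction of projects
# whose estimate falls within N percent of the actual effort.
def mmre(actuals, estimates):
    return sum(abs(e - a) / a for a, e in zip(actuals, estimates)) / len(actuals)

def pred(actuals, estimates, n_percent):
    within = sum(1 for a, e in zip(actuals, estimates)
                 if abs(e - a) / a <= n_percent / 100.0)
    return within / len(actuals)

# Illustrative values only (person-months); not taken from the NASA 94 runs
act = [100.0, 250.0, 80.0, 400.0]
est = [130.0, 200.0, 85.0, 520.0]
print("MMRE     =", round(mmre(act, est), 2))
print("PRED(30) =", round(pred(act, est, 30), 2))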

Dataset Name: NASA 94
Category: Avionics
Mode: Embedded
Number of Projects = 11
A = 4.85

[Chart: Calibrated Effort Estimates vs. Actuals - Estimated Effort (Person-months) plotted against Actual Effort (Person-months)]

Effort Prediction Summary
          Uncalibrated   Calibrated
MMRE      91%            114%
RMS
PRED(10)  9%             18%
PRED(20)  9%             27%
PRED(30)  18%            64%
PRED(40)  36%            64%

[Table of Actual Effort (Person-Months), Uncalibrated Estimated Effort, and Calibrated Estimated Effort by Project Record #; numeric values not recovered in this transcription]

Figure 24: COCOMO II Summary Results for Flight Avionics Embedded Projects

Dataset Name: NASA 94
Category: Science
Mode: Embedded
Number of Projects = 2
A = 4.38

[Chart: Calibrated Effort Estimates vs. Actuals - Estimated Effort (Person-months) plotted against Actual Effort (Person-months)]

Effort Prediction Summary
          Uncalibrated   Calibrated
MMRE      31%            25%
RMS
PRED(10)  0%             0%
PRED(20)  50%            0%
PRED(30)  50%            100%
PRED(40)  50%            100%

[Table of Actual Effort (Person-Months), Uncalibrated Estimated Effort, and Calibrated Estimated Effort by Project Record #; numeric values not recovered in this transcription]

Figure 25: COCOMO II Summary Results for Flight Science Embedded Projects

Dataset Name: NASA 94
Category: Ground (Avionics Monitoring + Operating System)
Mode: Embedded
Number of Projects = 5
A = 1.82

[Chart: Calibrated Effort Estimates vs. Actuals - Estimated Effort (Person-months) plotted against Actual Effort (Person-months)]

Effort Prediction Summary
          Uncalibrated   Calibrated
MMRE      96%            57%
RMS
PRED(10)  40%            20%
PRED(20)  40%            20%
PRED(30)  60%            40%
PRED(40)  60%            80%

[Table of Actual Effort (Person-Months), Uncalibrated Estimated Effort, and Calibrated Estimated Effort by Project Record #; numeric values not recovered in this transcription]

Figure 26: COCOMO II Summary Results for Ground Embedded Projects

7.2 SEER-SEM

Figure 27 shows the detailed results of the SEER-SEM runs with the outlier.

Dataset Name: NASA 94
Category: Avionics
Mode: Embedded
Number of Projects = 11

[Chart: SEER Effort Estimates vs. Actuals (with Knowledge Base) - Estimated Effort (Person-months) plotted against Actual Effort (Person-months)]

Effort Prediction Summary
          Uncalibrated   With Knowledge Base
MMRE      99%            122%
RMS
PRED(10)  0%             18%
PRED(20)  0%             45%
PRED(30)  0%             45%
PRED(40)  18%            55%

Figure 27: Summary of SEER-SEM Predictability for Flight Avionics Embedded Projects

7.3 True S

8. Appendix B: NASA 94 Original and Transformed Data

Table 19: Original COCOMO 1981 Data for NASA 94 Avionics, Embedded Projects

recordnum projectname cat2 forg center year mode rely data cplx time stor virt turn acap aexp pcap vexp lexp modp tool sced equivphyskloc act_effort
81 hst Avionics f embedded h vh vh xh xh h h n n n l l n n h
hst Avionics f embedded h h h vh xh h h h h h h h h n n
spl Avionics f embedded h l vh vh xh l n vh vh vh vl vl h h n
spl Avionics f embedded h l vh vh xh l n vh vh vh vl vl h h n
sts Avionics f embedded vh h vh xh xh n n h h h h h h n h
sts Avionics f embedded vh h vh xh xh n l h h h h h h n h
sts Avionics f embedded vh h xh xh xh n n h h h h h h n h
gal Avionics f embedded vh l vh vh xh l l h l n vl l l h h
sts Avionics f embedded vh h vh xh xh n n h h h h h h n h
gro Avionics f embedded h n vh vh vh h h n n n l l n n h
gro Avionics f embedded h n vh vh vh h h n n n l l n n h

(Record numbers after the first row and the numeric center, year, equivphyskloc, and act_effort columns were not recovered in this transcription.)

Table 20: COCOMO II Transformed Data for NASA 94 All Embedded Projects
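Table 20 holds the COCOMO II transformation of the Table 19 records. As a rough illustration of what such a record-level transformation involves, the Python sketch below maps COCOMO 81 driver ratings onto COCOMO II driver names in the general spirit of the Rosetta Stone [Reifer et al. 1999]; it is a simplified placeholder, not the exact mapping used to produce Table 20.

# Illustrative sketch: transforming one COCOMO 81 record (Table 19 rating columns)
# into COCOMO II driver ratings. A simplification in the spirit of the Rosetta
# Stone [Reifer et al. 1999]; not the exact mapping used in this study.
RENAMED = {             # COCOMO 81 driver -> COCOMO II driver
    "rely": "RELY", "data": "DATA", "cplx": "CPLX", "time": "TIME",
    "stor": "STOR", "virt": "PVOL", "acap": "ACAP", "pcap": "PCAP",
    "aexp": "APEX", "vexp": "PLEX", "lexp": "LTEX", "tool": "TOOL",
    "sced": "SCED",
}
DROPPED = {"turn", "modp"}   # handled differently in COCOMO II

def to_cocomo_ii(record81):
    # record81: dict of COCOMO 81 driver names to ratings (vl..xh).
    # New COCOMO II drivers start at Nominal in this simplified sketch.
    drivers = {"RUSE": "n", "DOCU": "n", "PCON": "n", "SITE": "n"}
    for old, rating in record81.items():
        if old in RENAMED:
            drivers[RENAMED[old]] = rating
        elif old in DROPPED:
            pass  # turn is not carried over; modp feeds the PMAT scale factor instead
    return drivers

# First Table 19 record (ratings only; size/effort columns omitted)
hst = {"rely": "h", "data": "vh", "cplx": "vh", "time": "xh", "stor": "xh",
       "virt": "h", "turn": "h", "acap": "n", "aexp": "n", "pcap": "n",
       "vexp": "l", "lexp": "l", "modp": "n", "tool": "n", "sced": "h"}
print(to_cocomo_ii(hst))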
