A Software Metric Set for Program Maintenance Management

George E. Stark and Louise C. Kern, The MITRE Corporation, Houston, TX
C. W. Vowell, NASA Johnson Space Center, Houston, TX

Abstract - Managers at the National Aeronautics and Space Administration's (NASA) Mission Operations Directorate (MOD) at the Johnson Space Center wanted to increase their insight into the cost, schedule, and quality of the software-intensive systems maintained for the Space Shuttle program. We defined and implemented a software metric set that contains thirteen metrics related to corrective and adaptive maintenance actions. Management support and tools were necessary for effective implementation. The start-up cost was low because much of the data is already collected by the projects. Early results are encouraging: management has been able to make decisions and take actions that improve software quality and reduce software maintenance costs.

Introduction

The Assistant Director for Program Support of NASA's MOD is responsible for planning and controlling the development and maintenance of all ground-based systems used to support mission operations of the Space Shuttle program. These systems involve large software efforts: the smallest is approximately 1.1 million lines of code and the largest is over 7 million lines. The systems are a mixture of assembly, high-level, and fourth-generation languages. Both the Shuttle mission control center and the Shuttle mission training facility execute in real time. The Assistant Director and his supporting MOD management team needed an approach that would give them clear, consistent insight into the status of each system's software maintenance activities at various levels (i.e., project, subsystem, module). MOD responded to this need by using the goal/question/metric paradigm popularized by Basili [1] to identify a set of thirteen metrics for use on current and future systems.
This paper is intended to assist software maintenance managers and quality assurance personnel who are considering a software maintenance metrics program. The paper provides background on the MOD metrics effort and explains the approach taken in defining the maintenance metric set. Next, it describes each metric in the set and explains the implementation process with its associated roadblocks. Finally, it discusses future plans for the MOD metric program and presents our conclusions.

Background

MOD initiated its software measurement program in May of 1990 to help project managers make decisions about the status of their projects [2]. This effort studied development test metrics and showed important trends in testing progress and product quality. Because of this success, MOD expanded the measurement program's focus to cover the entire development cycle and documented it in a measurement handbook [3]. The development metric handbook is currently being used on five software development projects and will be included in all future MOD Requests for Proposal.

1 Draft

As the next step in the overall program, MOD developed and implemented a set of measures designed to aid management decisions on software maintenance; for example: How large a staff do I need? How long should it take to close a software problem report?

Metric Set Definition

We developed the MOD software maintenance metric set in three steps.

1. We reviewed the software maintenance literature. This review revealed a large amount of work in government and industry on software maintenance, but only two articles [4, 5] outlined a metric set for software maintenance management. While the other references discussed potential data to be kept [6-8] or the causes of maintenance [9-11], they did not outline a metric set (as is commonly the case for software development [12-14]).

2. Based on this review, MOD formed a working group of NASA and contractor personnel responsible for software maintenance to develop a metric set applicable to the MOD environment. The working group decided to use the goal/question/metric paradigm popularized by Basili [1]. Table 1 summarizes the results of the working group.

3. The software sustaining engineering metrics handbook [15] documents the standardized set (shown in the third column of Table 1). Following a detailed review item disposition (RID) process, which included formal comments from other NASA organizations and contractors involved in software maintenance, MOD formally accepted the handbook.

At thirteen metrics, the MOD software sustaining engineering metric baseline is a moderately sized set. More than one metric is reported for each goal, and some metrics provide across-goal coverage (e.g., software reliability). This multiple coverage not only provides better insight into the project but also allows consistency checks of the reported data.

Table 1: MOD Software Sustaining Goals/Questions/Metrics

Goal: Maximize customer satisfaction
  How many problems are affecting the customer?
    -> Discrepancy Report (DR) & Service Request (SR) Open Duration; Software Reliability; Break/Fix Ratio
  How long does it take to fix a problem?
    -> DR/SR Closure; DR/SR Open Duration

Goal: Minimize effort and schedule
  Where are the bottlenecks?
    -> Staff Utilization
  Where are the resources going?
    -> Staff Utilization; SR Scheduling; Ada Instantiations; Fault Type Distribution; Computer Resource Utilization
  How maintainable is the system?
    -> Software Size; Fault Density; Software Volatility; Software Complexity

Goal: Minimize defects
  Is software sustaining engineering effective?
    -> Fault Density; Break/Fix Ratio; Ada Instantiations; Software Reliability

The metrics are useful individually, but the greatest benefit is derived when they are used as a set for making trades among competing goals (e.g., quality vs. productivity, or quality vs. release date). Project managers can combine the metrics to investigate trends not visible with one metric alone.

The MOD Software Maintenance Metric Set

The following sections discuss each metric individually and, where available, present sample graphs that illustrate the results of applying the metric. The graphs contain MOD data where it was available.

Software Size

Software size is the primary input parameter to several software cost and quality estimation models (e.g., see [16-18]). The metric is the number of source lines of code (SLOC) maintained by the project. This metric may be used by project management to:

- Track changes in the effort required to maintain the code. An increase in software size can lead to staffing inadequacies; a decrease in software size can lead to a reduction in staff.

- Anticipate performance problems in hard real-time systems. An increasing trend in software size should initiate corrective actions that either counter the trend or accommodate it with increased effort or computer power.

Figure 1 depicts the software size of the MOD systems for the six years from 1986 through 1991. The chart indicates relatively steady growth in the amount of software: approximately 600,000 lines of code have been added to the MOD baseline each year, an average increase of 7.5% per year.

Figure 1. Executable and Total SLOC by Year

Software Staffing

Changes in software staffing have a direct impact on project cost and software maintenance progress. This metric is the number of staff-hours expended per month by the software engineering and management personnel directly involved with software maintenance. The metric provides management with the data to forecast future staffing requirements. It may also be used for the following actions:

- Estimating the efficiency of the sustaining engineering activities
- Identifying the most resource-intensive activities (e.g., change request assignment, evaluation, scheduling, implementation, test, release)
- Identifying the most resource-intensive systems

The staffing metric allows management to examine initial project staffing levels to ensure a good start. It also allows "sanity checks" on the contractor's project plan through comparisons with industry-recognized rules of thumb for staffing expenditures during the maintenance phase of the software life cycle. One goal is to analyze the efficiency of the software change request process; the objective is to reduce the effort used to close maintenance requests, independent of any actions taken to reduce the backlog of requests. Figure 2 is an example of such a graph. Ideally, the line should exhibit an upward trend. Any system that deviates significantly from the planned profile or shows a potentially troublesome trend gets attention.

Figure 2. Software Change Request Closure Efficiency

Maintenance Request Processing Time

The maintenance request processing metric monitors the software maintenance work flow and the level of customer satisfaction. To maximize customer satisfaction, it is important to handle the requests most visible to the customers as well as the other day-to-day nuisances. This measure allows an analyst to perform the following functions:

- Predict the amount of work that is necessary because of newly requested enhancements or reported problems
- Determine the level of customer satisfaction with the product
- Perform trade studies among available staff, computer resources, and customer satisfaction to reduce the backlog

Figure 3 gives an example of this metric. The line labeled "progress" is the difference between incoming requests and closed requests. This metric is a good way to see a cumulative trend of backlogged work or to estimate the total amount of work required. Ideally, it should be near zero. Setting upper and lower boundary limits on the progress line to trigger special actions is a good idea. For example, a progress line rising to +30 warrants an investigation into the reason for the increase in backlog. If the progress line decreases to -30, then the contractor provides a description of the methods used to close the large number of requests, and management then decides whether the practice will become institutionalized. Management sets the actual value for special actions (e.g., ±30).
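The progress-line bookkeeping just described can be sketched in a few lines of code. This is an illustrative sketch only: the monthly counts and the ±30 action threshold are hypothetical, and the sign convention assumed here is that positive progress values indicate a growing backlog.

```python
# Sketch of the maintenance-request progress metric. Positive values are
# assumed to mean the backlog is growing (incoming has outpaced closures).
# All counts and the +/-30 threshold are hypothetical examples.

def progress_line(incoming, closed):
    """Cumulative incoming minus cumulative closed, month by month."""
    progress, total_in, total_closed = [], 0, 0
    for new, done in zip(incoming, closed):
        total_in += new
        total_closed += done
        progress.append(total_in - total_closed)
    return progress

def months_needing_action(progress, threshold=30):
    """Indices of months where the progress line breaches the threshold."""
    return [i for i, p in enumerate(progress) if abs(p) > threshold]

incoming = [40, 55, 50, 60, 45]  # requests submitted per month (hypothetical)
closed = [35, 40, 45, 50, 60]    # requests closed per month (hypothetical)

line = progress_line(incoming, closed)
print(line)                       # [5, 20, 25, 35, 20]
print(months_needing_action(line))  # [3] -- the fourth month breached +30
```

A chart of `line` against the month labels gives the progress line of Figure 3; the flagged months are the ones that would trigger the special actions described above.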

Figure 3. Software Maintenance Request Closure Analysis

Software Enhancement Scheduling

The software enhancement scheduling metric tracks the length of time to close an enhancement request and the engineering effort spent on enhancements. Two data primitives are used to calculate this metric: (1) the planned and actual number of engineering-hours spent on the enhancement, and (2) the amount of calendar time from request submission until the enhancement is released to the facility for use. The MOD metric set includes the software enhancement scheduling metric because it allows project managers to identify high-risk schedule predictions, estimate the time required to make an enhancement request available to the user community, and predict the amount of effort required to close a request.

Computer Resource Utilization

The Computer Resource Utilization (CRU) metric tracks the actual use of the system's resources. Four resources are included in the MOD set: CPU, disk, memory, and input/output channel utilization. There is also provision for including a project-specific CRU metric (e.g., transport lag in the case of real-time flight simulators) if the standard MOD set is insufficient to identify project risks. The CRU metrics are reported as a percentage of resource capacity. This metric provides project managers with early warning if users are approaching resource capacity limits. It also provides a forum for presenting the results of contractor-performed system analysis [15].

Fault Density

The fault density metric is defined as the number of discrepancy reports closed with a software fix per 1000 source lines of code over time.
Fault density measures code quality and can be used to:

- predict the number of remaining faults in the code using a quality model [17, 18]
- establish standard fault densities for comparison and prediction for each failure's severity level or fault type
- identify processes or systems that require scrutiny by management.
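As a concrete illustration, the fault density computation and the Pareto-style ranking behind a chart like Figure 4 can be sketched as follows. The per-system DR counts and SLOC figures below are made up for the example; they are not MOD data.

```python
# Fault density: discrepancy reports closed with a software fix per 1000 SLOC.
# The per-system numbers below are hypothetical, not MOD data.

def fault_density(fix_drs, sloc):
    """Faults per 1000 source lines of code for one system."""
    return 1000.0 * fix_drs / sloc

systems = {
    "A": (120, 300_000),    # (DRs closed with a fix, SLOC)
    "B": (45, 150_000),
    "C": (200, 1_100_000),
}

# Rank systems worst-first: a Pareto-style filter for deciding where
# quality-improvement effort should go.
ranked = sorted(systems, key=lambda s: fault_density(*systems[s]), reverse=True)
for name in ranked:
    print(name, round(fault_density(*systems[name]), 2))
```

Note that system C has the most fix DRs but the lowest density; normalizing by size is what keeps large systems from looking artificially fault-prone.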

Figure 4 is a Pareto chart of fault density per system over five years on one MOD project. This graph is a useful tool for "filtering" proposed quality improvement projects (it would be a better graph if it were a stacked bar chart by severity). Because time and money are scarce resources, it makes sense to attack the most significant problems first.

Figure 4. Fault Density by System

Engineers track fault density per system and module to develop more thorough test plans for these components. Managers can also use this information to identify fault-prone code and make longer-term plans to re-engineer the software. Management may also use this information to exercise caution when considering enhancements to these modules or systems (i.e., considering design stability).

Software Volatility

The software volatility metric is the ratio of the number of modules changed due to a software maintenance request to the total number of modules in a release, tracked over time. The goal of this metric is to measure the amount of change in the structure of the software over time. Belady and Lehman were the first to define software volatility [19]. In the MOD metric set, it characterizes the overall maintainability of the software system. Used in concert with reliability, CRU, and design complexity, it helps managers decide whether a qualification test procedure should be re-executed and identify the need to re-engineer the software.

Discrepancy Report (DR) Open Duration

The DR open duration metric monitors the amount of time required to close software DRs once they are discovered. The open duration of a discrepancy is calculated as reporting date minus submitted date and provides insight into the efficiency of the debugging process. A significant number of old DRs may indicate that more resources (computer time, staff, etc.) are required to close them. Examining the number and age of high-priority critical DRs allows a project manager to estimate debugging response time.
Also, a small number of old DRs may indicate difficult problems that require extensive changes to correct, or problems that are not well understood and require more attention. Figure 5 is a sample DR open duration report. The chart illustrates that, for this particular quarterly report, 1000 DRs remained open for 0-30 days (600 minor, 300 major, 100 critical), 700 for the next interval (500 minor, 150 major, 50 critical), and 300 for the interval after that (550 minor, 50 major, 10 critical); finally, 200 remained open for greater than 720 days (or two calendar years).

Figure 5. DR Open Duration by Criticality

Break/Fix Ratio

The break/fix ratio is the count of faults inserted into the operational software baseline divided by the total number of changes made to the software. For example, if three DRs closed with a software fix result in one new defect because of the repairs, the break/fix ratio is 0.33. This means the sustaining organization was 67% effective in closing these DRs. This metric also indicates customer satisfaction, since the sustaining process introduces most defects found by customers [18]. The break/fix ratio metric helps perform the following functions:

- Estimate the number of problems affecting the customer
- Estimate the number of faults remaining in the code
- Identify the source of software problems as either development or sustaining

The break/fix ratio metric measures the effectiveness of the maintenance organization at making changes to the operational software baseline.

Software Reliability

Software reliability is the probability that software will not fail for a specified time period in a specified environment. Based on operational failure data, the software reliability metric provides an indication of the expected number of failures during a mission of given duration. It tracks the current software failure rate by date. The metric can be used to control the change traffic to a system so that an acceptable failure rate is maintained. If the software is performing well above a management-set threshold, large complex changes may be allowed. If the software is performing at or below the target failure rate, only bug fixes are allowed.
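The change-traffic control policy described above can be sketched as a simple decision rule. The target rate, margin, and failure counts below are illustrative assumptions; in practice, management sets these values per system.

```python
# Sketch of change-traffic control driven by the observed failure rate.
# target_rate and margin are hypothetical management-set values.

def change_policy(failures, operating_hours, target_rate, margin=0.5):
    """Decide what change traffic the current failure rate supports."""
    rate = failures / operating_hours  # observed failures per hour
    if rate <= margin * target_rate:
        # Well below the target: large, complex changes may be allowed.
        return "accept large enhancements"
    if rate < target_rate:
        # Approaching the target: routine changes only.
        return "accept routine changes"
    # At or above the target failure rate: only bug fixes are allowed.
    return "bug fixes only"

print(change_policy(failures=2, operating_hours=1000, target_rate=0.01))
# 0.002 failures/hour, well under the 0.01/hour target
```

The intermediate "routine changes" band is an assumption added for illustration; the source describes only the two extreme cases.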
No current MOD project has explicit software reliability requirements; however, all projects have an implicit software reliability requirement since there is a point at which the failure rate of the

software renders the system unsuitable for operation. Isolated software reliability studies have been performed on MOD systems [20-22]. Figure 6 shows the shape of the software failure rate in the Mission Control Center for a series of Shuttle missions. This data helps determine the acceptable threshold and establish future requirements.

Figure 6. Software Failure Rate for Twelve Missions

Design Complexity

Design complexity tracks the number of modules with a complexity measure greater than the guideline established for MOD. The metric is measured using the Extended McCabe complexity metric of each module (e.g., FORTRAN subroutines, C-language functions) [23]. The Extended McCabe complexity metric counts the number of independent execution paths through a given piece of software. This metric allows project management to track the contractor's ability to maintain an acceptable level of complexity at the module level. Complexity is highly correlated with programmer maintenance effort and with the number of faults found during testing and operation [24, 25]. Figure 7 shows the cumulative distribution functions of the Extended McCabe complexity measure for fifteen systems. Four of these were written in Ada (shown as dashed lines in the figure), eight in C (shown as light lines), and three in FORTRAN (shown as dark solid lines). Note that the figure uses a logarithmic scale on the x-axis; hence, 50% of the C functions in the furthest-right program had a McCabe complexity less than or equal to 25, and 90% were less than or equal to 120. From this analysis, the C system in the lower right of the figure was considered too complex to maintain. Using this information in combination with the number of discrepancy reports against the system and the number of its users, MOD decided to find another approach to implement this function and retire the system.
This graph was also used to establish thresholds of acceptability for the complexity of MOD programs. From the figure, three clusters of complexity exist: an "A" class cluster shown by the two Ada programs in the upper left, an "average" cluster shown by the twelve programs in the center of the figure, and an unacceptable program on the right side. It was decided that any program to the right of the "average" cluster would be reviewed as a candidate for re-engineering on maintainability grounds.
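For readers without access to a measurement tool, the core idea of the McCabe measure (one plus the number of decision points in a function) can be approximated with a crude token count. This sketch is only an approximation: a real tool such as the ones discussed later parses the source properly, while the regular expression below ignores comments, strings, and preprocessor tricks.

```python
import re

# Crude approximation of McCabe cyclomatic complexity for one C-like
# function body: 1 + number of decision points. A real measurement tool
# parses the code; this token scan is only illustrative.
DECISION_TOKENS = re.compile(r"\b(?:if|while|for|case)\b|&&|\|\|")

def approx_complexity(source):
    """Approximate cyclomatic complexity of a single function body."""
    return 1 + len(DECISION_TOKENS.findall(source))

sample = """
    if (x > 0 && y > 0) {
        for (i = 0; i < n; i++) {
            if (a[i] == x) break;
        }
    }
"""
print(approx_complexity(sample))  # 1 + (if, &&, for, if) = 5
```

Applying such a function per module and counting how many modules exceed the MOD guideline value reproduces the design complexity metric in spirit.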

Figure 7. Cumulative Distribution Function of the McCabe Complexity Value for Applications Software

Fault Type Distribution

This metric tracks discrepancy report closure in three ways: (1) by closure code (i.e., hardware, software, human, unable to duplicate, etc.); (2) by the type of problem that was found (i.e., logic, computational, interface, data input, etc.); and (3) by the process that introduced the problem (i.e., requirements, design, code, test). This serves to identify those aspects of the process that are error-prone as well as the most common types of faults introduced. It is the first step toward a root-cause analysis approach to software development and maintenance. The information from this metric goes back into the development management chain so that managers can take effective risk-reduction measures (e.g., better inspections) to reduce the type and cause of errors.

Ada Instantiations

The Ada instantiation metric is a count of the number of Ada generic units designed and coded during the course of maintenance, the size of those generic units (in SLOC), and a count of the number of times each generic unit is instantiated. The goal of this metric is to track the number and size of reusable components developed by a project for use within that project and other projects. This metric tracks the reuse of Ada generic units. It is included in the MOD metric set as a data-collection metric, with the purpose of providing MOD management with some insight into code reuse during software development.

Metric Program Implementation

After defining the metric set, we set about getting the contractors to collect, analyze, report, and use the metrics. The first step in implementing the MOD metric set was to issue a management directive. That is, MOD took the lead on the definition and reporting of software metrics, even though several of their contracts had already specified metrics deliverables.
This step turned out to be a positive action because it ensured consistency across the projects and showed that upper-level MOD management supported the effort. The importance of this support cannot be overstated;

without it, the metrics program would not have been taken seriously, particularly by contractors not already using some form of metrics.

The second step was to provide automated tools to support the metrics program. We used spreadsheets to summarize and report the metric data. Using spreadsheets for metric reporting allowed the specification of an easy-to-use, standard format, which reduced the impact on ongoing projects. While spreadsheets are limited in their data analysis capability, high-powered statistical techniques are not necessary for real-time monitoring and control. Two of the baselined metrics, design complexity and software reliability, are difficult to compute by hand and require automated tools. We surveyed tools for each of these metrics [26] and recommended two for use on current MOD projects. We recommended the Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) tool for software reliability measurement [27]. This is a public-domain tool that runs on a variety of platforms, including IBM PCs and compatibles, and it supports a large number of models for estimating software reliability. The recommended complexity measurement tool is UX-metric/PC-metric from SET Laboratories [28]. This is an inexpensive commercial-off-the-shelf tool that counts SLOC, comments, and blank lines and computes complexity metrics for a variety of third-generation languages, including FORTRAN, C, and Ada. It runs on workstations that support the UNIX operating system as well as on IBM PCs and compatibles.

Finally, the cost of developing and implementing the software maintenance set was less than we experienced with the development metric set. Implementing the software development metric set cost approximately 0.32% of the yearly MOD development budget; implementing the maintenance set cost approximately 0.1% of that budget.
We attribute this lower cost to the team's experience with metrics and to integration complexities that were less severe than those of the development effort. The cost should remain relatively stable, since very little data beyond what is already collected is required to implement the metric set.

Future Plans

The need for validation of the metric set is clear. The MOD set of metrics is our first iteration, and it lacks refinement and proven value in our environment. Since it will be used to make decisions and evaluate projects, it is important for MOD to ensure that the metrics measure what they were intended to measure. Some validation has been done empirically through analysis of the data and discussions with the contractors, but a more formal, statistical approach should be taken as additional data become available. MOD will continue to monitor (and increase its participation in) the efforts of various metric-related working groups sponsored by the Software Engineering Institute (SEI). These groups consist of nationally recognized experts and include many key researchers. MOD will also remain cognizant of metric standardization efforts, such as those by the IEEE [29] and the American Institute of Aeronautics and Astronautics (AIAA) [30]. As they near fruition, these efforts could help improve the MOD measurement efforts. Continuing in the effort to provide NASA and contractor management with visibility into their various software projects, MOD is defining a set of development project management metrics. The

development project management metrics will be earned-value oriented and will cover aspects unrelated to the two software metric sets already defined. This project should be completed in

Conclusion

MOD relies on large software systems to support Space Shuttle missions. This reliance has led to a need to improve management's ability to plan, monitor, and understand the status of each system's sustaining engineering. MOD implemented a maintenance metrics program as a first step in addressing this need. The primary goal of the metrics effort is to generate dialogue between MOD and its contractors concerning the effort, cost, and quality of the software maintenance process. MOD provides a standard maintenance metric set with procedures for collecting, reporting, and interpreting the data. This allows project management and contractors to identify current risks, compare current data with past predictions, and plan for future maintenance efforts. Establishing the software maintenance metrics program was easier than the development program put in place two years ago because the contractor team members and the NASA project managers had the benefit of that previous work. Furthermore, implementation was not expensive, nor were a large number of measurements necessary. Management support and easy-to-use tools are necessary for an effective implementation.

Acknowledgments

This effort was sponsored by contract number NAS. Thanks are due to the MOD working group members for their insightful comments and lively discussions during this project. Further thanks are due to R. C. Lacovara and C. J. Guyse of MITRE for their help in data collection and to our other MITRE colleagues who reviewed this paper.

References

1. Basili, V., and Rombach, H. D., "Tailoring the Software Process to Project Goals and Environments," Proceedings of the 9th International Conference on Software Engineering, IEEE, April 1987.
2. Stark, G. E., Durst, R. C., and Pelnik, T.
M., June 1992, "An Evaluation of Software Testing Metrics for NASA's Mission Control Center," Software Quality Journal, Vol. 1.
3. Durst, R. C., et al., April 1992, DA3 Software Development Metrics Handbook, Version 2.1, JSC-25519, NASA Johnson Space Center, Houston, TX.
4. Grady, R. B., September 1987, "Measuring and Managing Software Maintenance," IEEE Software.
5. Schaefer, H., April 1985, "Metrics for Optimal Maintenance Management," Proceedings IEEE Conference on Software Maintenance.
6. IEEE Standard Dictionary of Measures to Produce Reliable Software, 1988, IEEE Standard 982.1, New York: The Institute of Electrical and Electronic Engineers, Inc.

7. Lientz, B. P., and Swanson, E. B., 1980, Software Maintenance Management, Addison-Wesley, Reading, MA.
8. Pressman, R. S., 1987, Software Engineering: A Practitioner's Approach, McGraw-Hill, New York, NY.
9. Zuse, H., October 1992, "Measuring Factors Contributing to Software Maintenance Complexity," Proceedings Second International Conference on Software Quality, Raleigh, NC.
10. Lloyd, D. K., and Lipow, M., 1984, Reliability: Management, Methods, and Mathematics, Second Edition, Prentice-Hall, Englewood Cliffs, NJ.
11. Gill, G. K., and Kemerer, C. F., December 1991, "Cyclomatic Complexity Density and Software Maintenance Productivity," IEEE Transactions on Software Engineering, Vol. 17, No. 12.
12. Air Force Systems Command Software Quality Indicators, 1987, AFSC Pamphlet, Headquarters Air Force Systems Command, Andrews Air Force Base, Washington, D.C.
13. Schultz, H. P., May 1988, Software Reporting Metrics, Report No. M88-1, The MITRE Corporation, Bedford, MA.
14. Beavers, J. K., et al., March 1991, "U.S. Army Software Test and Evaluation Panel (STEP) Software Metrics," Proceedings Annual Oregon Workshop on Software Metrics, Silver Falls, OR.
15. Stark, G. E., and Kern, L. C., August 1992, DA3 Software Sustaining Engineering Metrics Handbook, Version 1.0, JSC-26010, NASA Johnson Space Center, Houston, TX.
16. Boehm, B. W., 1981, Software Engineering Economics, Prentice-Hall, Englewood Cliffs, NJ.
17. Rone, K. W., 1989, Quality Estimation and Planning, IBM Corporation, Houston, TX.
18. Levendel, Y., February 1990, "Reliability Analysis of Large Software Systems: Defect Data Modeling," IEEE Transactions on Software Engineering, Vol. 16, No. 2.
19. Shooman, M. L., and Richeson, G., 1983, "Reliability of Shuttle Mission Control Center Software," 1983 Proceedings Annual Reliability and Maintainability Symposium.
20. Misra, P. N., 1983, "Software Reliability Analysis," IBM Systems Journal, Vol. 22, No. 3.
21. Stark, G. E., and Durst, R.
C., March 1992, "Software Reliability Measurement: A Case Study Using the AIAA Estimation and Prediction Handbook," Proceedings Annual Oregon Workshop on Software Metrics, Silver Falls, OR.

22. McCabe, T., December 1976, "A Complexity Measure," IEEE Transactions on Software Engineering, Vol. SE-2, No. 4.
23. Potier, T., 1982, "Experiments with Computer Software Complexity and Reliability," Proceedings of the 6th International Conference on Software Engineering.
24. Gremillion, L. L., August 1984, "Determinants of Program Repair Maintenance Requirements," Communications of the ACM.
25. Belady, L. A., and Lehman, M. M., 1976, "A Model of Large Program Development," IBM Systems Journal, Vol. 15, No. 3.
26. Stark, G. E., May 1991, "A Survey of Software Reliability Measurement Tools," Proceedings 1991 International Symposium on Software Reliability Engineering, Austin, TX.
27. Farr, W. H., and Smith, O. D., December 1988, Statistical Modeling and Estimation of Reliability Functions for Software (SMERFS) User's Guide, NSWC TR Revision 1, Naval Surface Warfare Center, Silver Spring, MD.
28. SET Laboratories, 1990, UX-metric User's Guide, Mulino, OR.
29. IEEE Draft Standard for a Quality Metrics Methodology, September 1991, IEEE P1061/D22, New York: The Institute of Electrical and Electronics Engineers, Inc.
30. AIAA Draft Software Reliability Estimation and Prediction Handbook, February 1992, AIAA Space Based Observation Systems (SBOS) Committee on Standards (COS) Software Reliability Working Group.


Software Growth Analysis Naval Center for Cost Analysis Software Growth Analysis June 2015 Team: Corinne Wallshein, Nick Lanham, Wilson Rosa, Patrick Staley, and Heather Brown Software Growth Analysis Introduction to software

More information

PMBOK Guide Fifth Edition Pre Release Version October 10, 2012

PMBOK Guide Fifth Edition Pre Release Version October 10, 2012 5.3.1 Define Scope: Inputs PMBOK Guide Fifth Edition 5.3.1.1 Scope Management Plan Described in Section 5.1.3.1.The scope management plan is a component of the project management plan that establishes

More information

Chapter-3. Software Metrics and Reliability

Chapter-3. Software Metrics and Reliability Chapter-3 \ functions under given conditions for a specified period of time." The reliability of the delivered code is related to the quality of all of the processes and products of software development;

More information

Debra J. Perry Harris Corporation. How Do We Get On The Road To Maturity?

Debra J. Perry Harris Corporation. How Do We Get On The Road To Maturity? How Do We Get On The Road To Maturity? Debra J. Perry Harris Corporation NDIA Conference - 1 What do we want? From this To this But how? NDIA Conference - 2 Where Do We Start? NDIA Conference - 3 Government

More information

Using Measurement to Assure Operational Safety, Suitability, and Effectiveness (OSS&E) Compliance for the C 2 Product Line

Using Measurement to Assure Operational Safety, Suitability, and Effectiveness (OSS&E) Compliance for the C 2 Product Line Using Measurement to Assure Operational Safety, Suitability, and Effectiveness (OSS&E) Compliance for the C 2 Product Line David L. Smith The MITRE Corporation 115 Academy Park Loop, Suite 212 Colorado

More information

2004 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media,

2004 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, 2004 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising

More information

ANALYZING THE COMPUTER-AIDED MAINTENANCE DATA OF A COMMERCIAL BUILDING COMPLEX

ANALYZING THE COMPUTER-AIDED MAINTENANCE DATA OF A COMMERCIAL BUILDING COMPLEX ANALYZING THE COMPUTER-AIDED MAINTENANCE DATA OF A COMMERCIAL BUILDING COMPLEX Joseph H.K. Lai* and Francis W.H. Yik Department of Building Services Engineering, Hong Kong Polytechnic University, Hong

More information

Software Modeling of S-Metrics Visualizer: Synergetic Interactive Metrics Visualization Tool

Software Modeling of S-Metrics Visualizer: Synergetic Interactive Metrics Visualization Tool Software Modeling of S-Metrics Visualizer: Synergetic Interactive Metrics Visualization Tool Sergiu M. Dascalu 1, Norm Brown 2, Derek A. Eiler 1, Herman W. Leong 1, Nathan A. Penrod 1, Brian T. Westphal

More information

Measures and Risk Indicators for Early Insight Into Software Safety

Measures and Risk Indicators for Early Insight Into Software Safety Dr. Victor Basili University of Maryland and Fraunhofer Center - Maryland Measures and Risk Indicators for Early Insight Into Kathleen Dangle and Linda Esker Fraunhofer Center - Maryland Frank Marotta

More information

Measure What You Manage. Michael Cowley, CPMM President CE Maintenance Solutions, LLC

Measure What You Manage. Michael Cowley, CPMM President CE Maintenance Solutions, LLC Measure What You Manage Michael Cowley, CPMM President CE Maintenance Solutions, LLC Define Maintenance Scorecards Discuss Required Prerequisites Explain 10 Common Maintenance Scorecards Review Your Return-on-Investment

More information

Software Metrics: An Essential Tool for determining software success

Software Metrics: An Essential Tool for determining software success Software Metrics: An Essential Tool for determining software success Ruchi Yadav & Pramod Kumar Dept. of Information & technology, Dronacharya College of Engineering Farruhknagar, Gurgaon, India Email:ruchiyadav477@gmail.com

More information

Measurement Tailoring Workshops

Measurement Tailoring Workshops Measurement Tailoring Workshops Introduction The Director of Information Systems for Command, Control, Communications, and Computers (DISC4) policy memorandum of 19 September 1996, reference (a), eliminated

More information

Administration Division Public Works Department Anchorage: Performance. Value. Results.

Administration Division Public Works Department Anchorage: Performance. Value. Results. Administration Division Anchorage: Performance. Value. Results. Mission Provide administrative, budgetary, fiscal, and personnel support to ensure departmental compliance with Municipal policies and procedures,

More information

Software Inspections and Their Role in Software Quality Assurance

Software Inspections and Their Role in Software Quality Assurance American Journal of Software Engineering and Applications 2017; 6(4): 105-110 http://www.sciencepublishinggroup.com/j/ajsea doi: 10.11648/j.ajsea.20170604.11 ISSN: 2327-2473 (Print); ISSN: 2327-249X (Online)

More information

Number: DI-IPSC-81427B Approval Date:

Number: DI-IPSC-81427B Approval Date: DATA ITEM DESCRIPTION Title: Software Development Plan (SDP) Number: DI-IPSC-81427B Approval Date: 20170313 AMSC Number: N9775 Limitation: N/A DTIC Applicable: No GIDEP Applicable: No Preparing Activity:

More information

MEASUREMENT FRAMEWORKS

MEASUREMENT FRAMEWORKS MEASUREMENT FRAMEWORKS MEASUREMENT FRAMEWORKS Measurement is not just the collection of data/metrics calendar time number of open problems number of defects found in inspections cyclomatic complexity machine

More information

Energy Future Holdings (EFH)

Energy Future Holdings (EFH) Energy Future Holdings (EFH) Inclusion of Data Analytics into the Internal Audit Lifecycle June 3, 2015 Starting Place Baseline Questions Pertaining to the utilization of data analytics in the internal

More information

Software Reliability and Testing: Know When To Say When. SSTC June 2007 Dale Brenneman McCabe Software

Software Reliability and Testing: Know When To Say When. SSTC June 2007 Dale Brenneman McCabe Software Software Reliability and Testing: Know When To Say When SSTC June 2007 Dale Brenneman McCabe Software 1 SW Components with Higher Reliability Risk, in terms of: Change Status (new or modified in this build/release)

More information

Design of a Performance Measurement Framework for Cloud Computing

Design of a Performance Measurement Framework for Cloud Computing A Journal of Software Engineering and Applications, 2011, *, ** doi:10.4236/jsea.2011.***** Published Online ** 2011 (http://www.scirp.org/journal/jsea) Design of a Performance Measurement Framework for

More information

ROUTINE MAINTENANCE EMERGENCY SERVICES EQUIPMENT REPAIR

ROUTINE MAINTENANCE EMERGENCY SERVICES EQUIPMENT REPAIR critical facilities preventative maintenance ROUTINE MAINTENANCE EMERGENCY SERVICES EQUIPMENT REPAIR a l p h a t e c h n o l o g i e s s e r v i c e s, i n c. m e m b e r o f t h e a l p h a g r o u p

More information

United States Postal Service Executive Order 13,392 Report and Improvement Plan

United States Postal Service Executive Order 13,392 Report and Improvement Plan United States Postal Service Executive Order 13,392 Report and Improvement Plan A. Characterize overall nature of agency's FOIA operations The Records Office is responsible for the Postal Service s compliance

More information

Unleashing the Enormous Power of Call Center KPI s. Call Center Best Practices Series

Unleashing the Enormous Power of Call Center KPI s. Call Center Best Practices Series Unleashing the Enormous Power of Call Center KPI s Call Center Best Practices Series 27 Years of Call Center Benchmarking Data Global Database More than 3,700 Call Center Benchmarks 30 Key Performance

More information

Managing System Performance

Managing System Performance Managing System Performance System performance directly affects users. Centralized operations are easier to measure than complex networks and client/server systems. Various statistics can be used to assess

More information

GENERAL PRINCIPLES OF SOFTWARE VALIDATION

GENERAL PRINCIPLES OF SOFTWARE VALIDATION GUIDANCE FOR INDUSTRY GENERAL PRINCIPLES OF SOFTWARE VALIDATION DRAFT GUIDANCE Version 1.1 This guidance is being distributed for comment purposes only. Draft released for comment on: June 9, 1997 Comments

More information

Introduction and Key Concepts Study Group Session 1

Introduction and Key Concepts Study Group Session 1 Introduction and Key Concepts Study Group Session 1 PD hours/cdu: CH71563-01-2018 (3 hours each session) 2015, International Institute of Business Analysis (IIBA ). Permission is granted to IIBA Chapters

More information

Software metrics. Jaak Tepandi

Software metrics. Jaak Tepandi Software metrics, Jekaterina Tšukrejeva, Stanislav Vassiljev, Pille Haug Tallinn University of Technology Department of Software Science Moodle: Software Quality (Tarkvara kvaliteet) Alternate download:

More information

DMS VERSUS NMAS ANALYSIS

DMS VERSUS NMAS ANALYSIS DMS VERSUS NMAS ANALYSIS Abstract An analysis of the resource costs of the Nevada Affordable Housing Assistance Corporation s workflow processes under the current Document Management System versus the

More information

Software productivity measurement

Software productivity measurement Software productivity measurement by J. S. COLLOFELLO, S. N. WOODFIELD, and N.E. GIBBS Computer Science Department Arizona State University ABSTRACT Productivity is a crucial concern for most organizations.

More information

Volume 8, No. 1, Jan-Feb 2017 International Journal of Advanced Research in Computer Science RESEARCH PAPER Available Online at

Volume 8, No. 1, Jan-Feb 2017 International Journal of Advanced Research in Computer Science RESEARCH PAPER Available Online at Volume 8, No. 1, Jan-Feb 2017 International Journal of Advanced Research in Computer Science RESEARCH PAPER Available Online at www.ijarcs.info A Study of Software Development Life Cycle Process Models

More information

MEASURING PROCESS CAPABILITY VERSUS ORGANIZATIONAL PROCESS MATURITY

MEASURING PROCESS CAPABILITY VERSUS ORGANIZATIONAL PROCESS MATURITY MEASURING PROCESS CAPABILITY VERSUS ORGANIZATIONAL PROCESS MATURITY Mark C. Paulk and Michael D. Konrad Software Engineering Institute Carnegie Mellon University Pittsburgh, PA 15213-3890 Abstract The

More information

Extending Systems Engineering Leading Indicators for Human Systems Integration Effectiveness

Extending Systems Engineering Leading Indicators for Human Systems Integration Effectiveness Extending Systems Engineering Leading Indicators for Human Systems Integration Effectiveness Donna H. Rhodes 1, Adam M. Ross 2, Kacy J. Gerst 3, and Ricardo Valerdi 4 1 MIT, USA, rhodes@mit.edu 2 MIT,

More information

CMMI s Role in Reducing Total Cost of Ownership: Measuring and Managing. Software

CMMI s Role in Reducing Total Cost of Ownership: Measuring and Managing. Software CMMI s Role in Reducing Total Cost of Ownership: Measuring and Managing New and Legacy Software Total Ownership Cost: The Tradeoffs In Summary Pressure to ship can be costly: TOC & User COmmitment Development

More information

Information System of Scenario Strategic Planning

Information System of Scenario Strategic Planning Information System of Scenario Strategic Planning Denis R. Tenchurin dtenchurin@gmail.com Maxim P. Shatilov maxim.shatilov@gmail.com Scientific advisor: prof. Sergei M. Avdoshin savdoshin@hse.ru Abstract

More information

Role of Technical Complexity Factors in Test Effort Estimation Using Use Case Points

Role of Technical Complexity Factors in Test Effort Estimation Using Use Case Points Role of Technical ity s in Test Effort Estimation Using Use Case Points Dr. Pradeep Kumar Bhatia pkbhatia.gju@gmail.com Ganesh Kumar gkyaduvansi@gmail.com Abstarct-The increasing popularity of use-case

More information

Global and Regional Food Consumer Price Inflation Monitoring

Global and Regional Food Consumer Price Inflation Monitoring Global and Regional Food Consumer Price Inflation Monitoring October 2013 Issue 2 Global Overview Consumers at global level saw food price inflation up by 6.3 percent in the twelve months to February 2013

More information

Dave Honkanen Sr. Director, EPO Prime Therapeutics. The EPMO: Strategy to Execution

Dave Honkanen Sr. Director, EPO Prime Therapeutics. The EPMO: Strategy to Execution Dave Honkanen Sr. Director, EPO Prime Therapeutics The EPMO: Strategy to Execution September 30, 2010 Speaker Background Intro 2 Honkanen Consulting, Inc. Intro Prime Therapeutics is a thought leader in

More information

INTRODUCTION BACKGROUND. Paper

INTRODUCTION BACKGROUND. Paper Paper 354-2008 Small Improvements Causing Substantial Savings - Forecasting Intermittent Demand Data Using SAS Forecast Server Michael Leonard, Bruce Elsheimer, Meredith John, Udo Sglavo SAS Institute

More information

2IS55 Software Evolution. Software metrics (3) Alexander Serebrenik

2IS55 Software Evolution. Software metrics (3) Alexander Serebrenik 2IS55 Software Evolution Software metrics (3) Alexander Serebrenik Reminder Assignment 6: Software metrics Deadline: May 11 Questions? / SET / W&I 4-5-2011 PAGE 1 Sources / SET / W&I 4-5-2011 PAGE 2 Recap:

More information

Creating and Implementing a Balanced Measurement Program Dana T. Edberg

Creating and Implementing a Balanced Measurement Program Dana T. Edberg 1-04-25 Creating and Implementing a Balanced Measurement Program Dana T. Edberg Payoff Although IS measurement programs have long posed challenges in terms of focus and implementation, they can provide

More information

Abstract. Keywords. 1. Introduction. Rashmi N 1, Suma V 2. Where, i = 1 requirement phase, n = maintenance phase of software development process [9].

Abstract. Keywords. 1. Introduction. Rashmi N 1, Suma V 2. Where, i = 1 requirement phase, n = maintenance phase of software development process [9]. Defect Detection Efficiency: A Combined approach Rashmi N 1, Suma V 2 Abstract Survival of IT industries depends much upon the development of high quality and customer satisfied software products. Quality

More information

2IS55 Software Evolution. Software metrics (3) Alexander Serebrenik

2IS55 Software Evolution. Software metrics (3) Alexander Serebrenik 2IS55 Software Evolution Software metrics (3) Alexander Serebrenik Administration Assignment 5: Deadline: May 22 1-2 students / SET / W&I 28-5-2012 PAGE 1 Sources / SET / W&I 28-5-2012 PAGE 2 Recap: Software

More information

Project Planning. COSC345 Software Engineering 2016 Slides by Andrew Trotman given by O K

Project Planning. COSC345 Software Engineering 2016 Slides by Andrew Trotman given by O K Project Planning COSC345 Software Engineering 2016 Slides by Andrew Trotman given by O K Overview Assignment: The assignment sheet specifies a minimum Think about what else you should include (the cool

More information

Addressing UNIX and NT server performance

Addressing UNIX and NT server performance IBM Global Services Addressing UNIX and NT server performance Key Topics Evaluating server performance Determining responsibilities and skills Resolving existing performance problems Assessing data for

More information

COMOS Training Calendar 2017/2018

COMOS Training Calendar 2017/2018 COMOS Training Calendar 2017/2018 www.siemens.com/comos Scan this code for further informations. Calendar 2017/2018 Offer Whether you re looking for basic knowledge for first-time users or specialist know-how

More information

Enterprise Technology Projects Fiscal Year 2012/2013 Fourth Quarter Report

Enterprise Technology Projects Fiscal Year 2012/2013 Fourth Quarter Report Enterprise Technology Projects Fiscal Year 2012/2013 Fourth Quarter Report Enterprise Projects Fiscal Year 2012/2013 Fourth Quarter The Enterprise Program Investment Council (EPIC) is responsible for governance

More information

Software Engineering II - Exercise

Software Engineering II - Exercise Software Engineering II - Exercise April 29 th 2009 Software Project Management Plan Bernd Bruegge Helmut Naughton Applied Software Engineering Technische Universitaet Muenchen http://wwwbrugge.in.tum.de

More information

Software Cost Estimation Issues for Future Ground Systems

Software Cost Estimation Issues for Future Ground Systems Software Cost Estimation Issues for Future Ground Systems Nancy Kern Software Engineering Department ETG/RSD The Aerospace Corporation Outline ➊ Background ➋ Software Cost Estimation Research OO Software

More information

Forecasting for Short-Lived Products

Forecasting for Short-Lived Products HP Strategic Planning and Modeling Group Forecasting for Short-Lived Products Jim Burruss Dorothea Kuettner Hewlett-Packard, Inc. July, 22 Revision 2 About the Authors Jim Burruss is a Process Technology

More information

Estimating Software Maintenance

Estimating Software Maintenance Seminar on Software Cost Estimation WS 02/03 Presented by: Arun Mukhija Requirements Engineering Research Group Institut für Informatik Universität Zürich Prof. M. Glinz January 21, 2003 Contents 1. What

More information

Reliability Engineering - Business Implication, Concepts, and Tools

Reliability Engineering - Business Implication, Concepts, and Tools Reliability Engineering - Business Implication, Concepts, and Tools Dominique A. Heger, Fortuitous Technologies, Austin, TX, (dom@fortuitous.com) Introduction An emerging consensus in the systems performance

More information

Delivering End-to-End Supply Chain Excellence

Delivering End-to-End Supply Chain Excellence Delivering End-to-End Supply Chain Excellence -The DCMA Perspective- Ms. Marie A. Greening Director, Operations and Aeronautical Systems Divisions 19 March 2009 DCMA Mission We provide Contract Administration

More information

Operational Availability Modeling for Risk and Impact Analysis

Operational Availability Modeling for Risk and Impact Analysis David J. Hurst Manager Accreditation and Audits Aerospace Engineering and Project Management Division National Defence Headquarters Major General George R. Pearkes Building 400 Cumberland Street Ottawa,

More information

COURSE LISTING. Courses Listed. with Customer Relationship Management (CRM) SAP CRM. 15 December 2017 (12:23 GMT)

COURSE LISTING. Courses Listed. with Customer Relationship Management (CRM) SAP CRM. 15 December 2017 (12:23 GMT) with Customer Relationship Management (CRM) SAP CRM Courses Listed SAPCRM - Overview of the SAP CRM Solution CR100 - CRM Customizing Fundamentals CR500 - CRM Middleware CR580 - SAP CRM User Interface TCRM10

More information

Software Complexity Measurement: A Critical Review

Software Complexity Measurement: A Critical Review Software Complexity Measurement: A Critical Review Harmeet Kaur Ph.D. (Computer Applications) Research Scholar Punjab Technical University Jalandhar, Punjab, India Gurvinder N. Verma Professor & Hood-Applied

More information

Factors to Consider When Implementing Automated Software Testing

Factors to Consider When Implementing Automated Software Testing Factors to Consider When Implementing Automated Software Testing By Larry Yang, MBA, SSCP, Security+, Oracle DBA OCA, ASTQB CTFL, ITIL V3 ITM Testing is a major component of the Software Development Lifecycle

More information

DRAFT. Effort = A * Size B * EM. (1) Effort in person-months A - calibrated constant B - scale factor EM - effort multiplier from cost factors

DRAFT. Effort = A * Size B * EM. (1) Effort in person-months A - calibrated constant B - scale factor EM - effort multiplier from cost factors 1.1. Cost Estimation Models Parametric cost models used in avionics, space, ground, and shipboard platforms by the services are generally based on the common effort formula shown in Equation 1. Size of

More information

A Primer for the Project Management Process by David W. Larsen 1. Table of Contents

A Primer for the Project Management Process by David W. Larsen 1. Table of Contents A Primer for the Project Management Process by David W. Larsen 1 Table of Contents Description... 2 STAGE/STEP/TASK SUMMARY LIST... 3 Project Initiation 3 Project Control 4 Project Closure 6 Project Initiation...

More information

ASX CHESS Replacement Project Webinar

ASX CHESS Replacement Project Webinar ASX CHESS Replacement Project Webinar Q3 update 28 September 2017 Presenters and introductions Cliff Richards Executive General Manager, Equity Post Trade Services ASX Keith Purdie Stakeholder Engagement

More information

WORK PLAN AND IV&V METHODOLOGY Information Technology - Independent Verification and Validation RFP No IVV-B

WORK PLAN AND IV&V METHODOLOGY Information Technology - Independent Verification and Validation RFP No IVV-B 1. Work Plan & IV&V Methodology 1.1 Compass Solutions IV&V Approach The Compass Solutions Independent Verification and Validation approach is based on the Enterprise Performance Life Cycle (EPLC) framework

More information

Oracle Talent Management Cloud

Oracle Talent Management Cloud Oracle Talent Management Cloud Release 11 Release Content Document December 2015 Revised: April 2017 TABLE OF CONTENTS REVISION HISTORY... 4 OVERVIEW... 6 PERFORMANCE MANAGEMENT... 7 PERFORMANCE MANAGEMENT...

More information

Report of the Reliability Improvement Working Group (RIWG) Volume II - Appendices

Report of the Reliability Improvement Working Group (RIWG) Volume II - Appendices Report of the Reliability Improvement Working Group (RIWG) Volume II - Appendices Appendix 1 Formulate Programs with a RAM Growth Program II-1 1.1 Reliability Improvement Policy II-3 1.2 Sample Reliability

More information

Vector Software. Understanding Verification and Validation of software under IEC :2010 W H I T E P A P E R

Vector Software. Understanding Verification and Validation of software under IEC :2010 W H I T E P A P E R Vector Software W H I T E P A P E R Understanding Verification and Validation of software under IEC 61508-3:2010 Abstract This paper is intended to serve as a reference for developers of systems that will

More information

ABB Month DD, YYYY Slide 1

ABB Month DD, YYYY Slide 1 Aldo Dagnino, Will Snipes, Eric Harper, ABB Corporate Research RA Software/SAM WICSA, April 7 th of 2014 Metrics for Sustainable Software Architectures An Industry Perspective Month DD, YYYY Slide 1 Agenda

More information

3. PROPOSED MODEL. International Journal of Computer Applications ( ) Volume 103 No.9, October 2014

3. PROPOSED MODEL. International Journal of Computer Applications ( ) Volume 103 No.9, October 2014 Software Effort Estimation: A Fuzzy Logic Approach Vishal Chandra AI, SGVU Jaipur, Rajasthan, India ABSTRACT There are many equation based effort estimation models like Bailey-Basil Model, Halstead Model,

More information

Challenges of Managing a Testing Project: (A White Paper)

Challenges of Managing a Testing Project: (A White Paper) Challenges of Managing a Testing Project: () Page 1 of 20 Vinod Kumar Suvarna Introduction Testing is expected to consume 30 50 % of the Project Effort, Still properly managing testing project is not considered

More information

Realizing Business Value through Collaborative Document Development

Realizing Business Value through Collaborative Document Development Realizing Business Value through Collaborative Document Development P ro c e ss Improvement Summaries Abstract The process of capturing and communicating information using business documents is fundamental

More information

TAGUCHI APPROACH TO DESIGN OPTIMIZATION FOR QUALITY AND COST: AN OVERVIEW. Resit Unal. Edwin B. Dean

TAGUCHI APPROACH TO DESIGN OPTIMIZATION FOR QUALITY AND COST: AN OVERVIEW. Resit Unal. Edwin B. Dean TAGUCHI APPROACH TO DESIGN OPTIMIZATION FOR QUALITY AND COST: AN OVERVIEW Resit Unal Edwin B. Dean INTRODUCTION Calibrations to existing cost of doing business in space indicate that to establish human

More information

Measuring Safety Performance

Measuring Safety Performance Measuring Safety Performance Presented by: Keboitihetse Fredy Tong Date: 2017/04/04 Safety Performance Measurement (SPM): SPI & ALoSP Development Agenda Definition. Why measure safety performance? Alert

More information

Course Organization. Lecture 1/Part 1

Course Organization. Lecture 1/Part 1 Course Organization Lecture 1/Part 1 1 Outline About me About the course Lectures Seminars Evaluation Literature 2 About me: Ing. RNDr. Barbora Bühnová, Ph.D. Industrial experience Research Quality of

More information

SOFTWARE QUALITY IN 2008: A SURVEY OF THE STATE OF THE ART

SOFTWARE QUALITY IN 2008: A SURVEY OF THE STATE OF THE ART Software Productivity Research LLC SOFTWARE QUALITY IN 2008: A SURVEY OF THE STATE OF THE ART Capers Jones Founder and Chief Scientist Emeritus http://www.spr.com cjonesiii@cs.com January 30, 2008 Copyright

More information

An Application of Causal Analysis to the Software Modification Process

An Application of Causal Analysis to the Software Modification Process SOFTWARE PRACTICE AND EXPERIENCE, VOL. 23(10), 1095 1105 (OCTOBER 1993) An Application of Causal Analysis to the Software Modification Process james s. collofello Computer Science Department, Arizona State

More information

KINGS COLLEGE OF ENGINEERING DEPARTMENT OF INFORMATION TECHNOLOGY QUESTION BANK

KINGS COLLEGE OF ENGINEERING DEPARTMENT OF INFORMATION TECHNOLOGY QUESTION BANK KINGS COLLEGE OF ENGINEERING DEPARTMENT OF INFORMATION TECHNOLOGY QUESTION BANK Subject Code & Subject Name: IT1251 Software Engineering and Quality Assurance Year / Sem : II / IV UNIT I SOFTWARE PRODUCT

More information

Can Functional Size Measures Improve Effort Estimation in SCRUM?

Can Functional Size Measures Improve Effort Estimation in SCRUM? Can Functional Size Measures Improve Effort Estimation in SCRUM? Valentina Lenarduzzi Dipartimento di Scienze Teoriche e Applicate Università degli Studi dell'insubria Varese, Italy valentina.lenarduzzi@gmail.com

More information

Measurement-Based Guidance of Software Projects Using Explicit Project Plans

Measurement-Based Guidance of Software Projects Using Explicit Project Plans Measurement-Based Guidance of Software Projects Using Explicit Project Plans Christopher M. Lott and H. Dieter Rombach Arbeitsgruppe Software Engineering Fachbereich Informatik Universität Kaiserslautern

More information

ROI From CMMI A DACS and SEI Collaboration

ROI From CMMI A DACS and SEI Collaboration ROI From CMMI A DACS and SEI Collaboration 8th Annual CMMI Technology Conference 19 November 2008 Robert L. Vienneau Data & Analysis Center for Software Dennis R. Goldenson Software Engineering Institute

More information

Position Paper for the International Workshop on Reuse Economics, Austin, Texas

Position Paper for the International Workshop on Reuse Economics, Austin, Texas Position Paper for the International Workshop on Reuse Economics, Austin, Texas 4.16.2002 COTS-based Systems and Make vs. Buy Decisions: the Emerging Picture Chris Abts Information & Operations Management

More information

SDLC Models- A Survey

SDLC Models- A Survey Available Online at www.ijcsmc.com International Journal of Computer Science and Mobile Computing A Monthly Journal of Computer Science and Information Technology IJCSMC, Vol. 2, Issue. 1, January 2013,

More information

Audit of. Boynton Beach Community High School

Audit of. Boynton Beach Community High School Audit of Boynton Beach Community High School January 19, 2007 Report 2007-01 Audit of Boynton Beach Community High School Table of Contents Page PURPOSE AND AUTHORITY 1 SCOPE AND METHODOLOGY 1 INVESTIGATIONS

More information

Software Reliability Assurance Using a Framework in Weapon System Development: A Case Study
