ANALYSIS AND DESIGN OF CORE METRICS FOR MODERN SOFTWARE PROJECTS


International Journal of Information Technology and Knowledge Management, July-December 2009, Volume 2, No. 2

K. P. Yadav* & Raghuraj Singh**

Size measurement methods have played a key role in helping to solve real-world problems of estimating, supplier/customer disputes, performance improvement and the management of outsourced contracts. Estimating and managing a project's effort, staffing, schedule, cost, risks, quality and other factors is crucial, yet all of these are measures of input. Process management is enabled by feedback loops, so it is also necessary to measure the output from the software process. This starts with measurement of the size of the requirements. To solve a problem, it is necessary to measure its size in order to assess the various solution options, calculate the relative costs and compare the benefits before finally committing to one preferred approach.

SOFTWARE PROJECT / ENGINEERING METRICS

Many different metrics may be of value in managing a modern process or project. I have settled on EIGHT CORE METRICS that should be used on all types of modern software projects, of which four are management metrics and four are quality metrics. These may be called indicators.

a) Management Metrics
   Work and progress
   Budgeted cost and expenditures
   Staffing and team dynamics
   Software project control panel

b) Quality Metrics
   Change traffic and stability
   Breakage and modularity
   Rework and adaptability
   Mean time between failures (MTBF) and maturity

The following table describes the core software metrics.

Table: Overview of the Eight Core Metrics

Metric                         | Purpose                                                           | Perspectives
Work and progress              | Iteration planning, plan vs. actuals, management indicator        | SLOC, function points, object points, scenarios, test cases, SCOs
Budgeted cost and expenditures | Financial insight, plan vs. actuals, management indicator         | Cost per month, full-time staff per month, percentage of budget expended
Staffing and team dynamics     | Resource plan vs. actuals, hiring rate, attrition rate            | People per month added, people per month leaving
Software project control panel | To show current status of the project                             | Overall project values, thresholds of projects
Change traffic and stability   | Iteration planning, management indicator of schedule convergence  | SCOs opened vs. SCOs closed, by type (0,1,2,3,4), by release/component/subsystem
Breakage and modularity        | Convergence, software scrap, quality indicator                    | Reworked SLOC per change, by type (0,1,2,3,4), by release/component/subsystem
Rework and adaptability        | Convergence, software rework, quality indicator                   | Average hours per change, by type (0,1,2,3,4), by release/component/subsystem
MTBF and maturity              | Test coverage/adequacy, robustness for use, quality indicator     | Failure counts, test hours until failure, by release/component/subsystem

* Dronacharya College of Engg., Greater Noida, U.P. Email: karunesh732@yahoo.co.in
** Deptt. of CSE, H.B.T.I., Kanpur, U.P. Email: raghurajsingh@rediffmail.com

Each metric has two dimensions: a static value used as an objective, and a dynamic trend used to manage the achievement of that objective. While metrics values provide one dimension of insight, metrics trends provide a more important perspective for managing the process. Metrics

trends with respect to time provide insight into how the process and product are evolving. Iterative development is about managing change, and measuring change is the most important aspect of the metrics program. Absolute values of productivity and quality improvement are secondary issues until the fundamental goal of management has been achieved: predictable cost and schedule performance for a given level of quality.

The eight core metrics can be used in numerous ways to help manage projects and organizations. In an iterative development project, or in an organization structured around a software line of business, the historical values of previous iterations and projects provide precedent data for planning subsequent iterations and projects. Consequently, once metrics collection is ingrained, a project or organization can improve its ability to predict the cost, schedule, or quality performance of future work activities.

The eight core metrics are based on common sense and field experience with both successful and unsuccessful metrics programs. Their attributes include the following:
- They are simple, objective, easy to collect, easy to interpret, and hard to misinterpret.
- Collection can be automated and nonintrusive.
- They provide for consistent assessments throughout the life cycle and are derived from the evolving product baselines rather than from subjective assessments.
- They are useful to both management and engineering personnel for communicating progress and quality in a consistent format.
- Their fidelity improves across the life cycle.

Management Metrics

There are four fundamental sets of management metrics: technical progress, financial status, staffing progress, and current project status/value. By examining these perspectives, management can generally assess whether a project is on budget and on schedule. Financial status is very well understood; it always has been. Most managers know their resource expenditures in terms of costs and schedule.
The problem is to assess how much technical progress has been made. Conventional projects, whose intermediate products were all paper documents, relied on subjective assessments of technical progress or measured the number of documents completed. While these documents did reflect progress in expending energy, they were not very indicative of useful work being accomplished. The management indicators recommended here include standard financial status based primarily on an earned value system, objective technical progress metrics tailored to the primary measurement criteria for each major team of the organization, and staffing metrics that provide insight into team dynamics.

Work and Progress

The various activities of an iterative development project can be measured by defining a planned estimate of the work in an objective measure, then tracking progress (work completed over time) against that plan. Each major organizational team should have at least one primary progress perspective against which it is measured. For the standard teams, the default perspectives of this metric would be as follows:
- Software architecture team: use cases demonstrated
- Software development team: SLOC under baseline change management, SCOs closed
- Software management team: milestones completed

Budgeted Cost and Expenditures

To maintain management control, measuring cost expenditures over the project life cycle is always necessary. Through the judicious use of the metrics for work and progress, a much more objective assessment of technical progress can be performed to compare with cost expenditures. With an iterative development process, it is important to plan the near-term activities (usually a window of time less than six months) in detail and leave the far-term activities as rough estimates, to be refined as the current iteration is winding down and planning for the next iteration becomes crucial. Tracking financial progress usually takes on an organization-specific format.
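Whatever the format, the budgeted-cost perspectives named in the core-metrics table (percentage of budget expended, plan-versus-actual expenditures) reduce to simple arithmetic. The sketch below is illustrative only; the function name and figures are not from the paper:

```python
def expenditure_status(budget_total, spent_to_date, planned_to_date):
    """Compare actual spending against the spending plan.

    budget_total:    total budgeted cost for the project
    spent_to_date:   actual cost expended so far
    planned_to_date: cost the plan said would be expended by now
    """
    pct_expended = 100.0 * spent_to_date / budget_total
    variance = planned_to_date - spent_to_date  # positive: under plan
    return pct_expended, variance

pct, var = expenditure_status(budget_total=1000.0,
                              spent_to_date=450.0,
                              planned_to_date=400.0)
print(f"{pct:.1f}% of budget expended, variance vs. plan: {var:+.1f}")
# 45.0% of budget expended, variance vs. plan: -50.0
```

Reported monthly, these two numbers give the static value and the trend that the metric calls for.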
One common approach to financial performance measurement is the use of an earned value system, which provides highly detailed cost and schedule insight. Its major weakness for software projects has traditionally been the inability to assess technical progress (percent complete) objectively and accurately. While this will always be the case in the engineering stage of a project, earned value systems have proved to be effective for the production stage, where there is high-fidelity tracking of actuals versus plans and predictable results. The other core metrics provide a framework of detailed and realistic quantifiable backup data to plan and track against, especially in the production stage of software, when the cost and schedule expenditures are highest.

Staffing and Team Dynamics

An iterative development should start with a small team until the risks in the requirements and architecture have been suitably resolved. Depending on the overlap of iterations and other project-specific circumstances, staffing can vary.
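The staffing perspectives from the core-metrics table (people per month added, people per month leaving) can be tracked with simple monthly counts. A minimal sketch, with an assumed data layout:

```python
def staffing_trend(monthly):
    """Net staffing momentum from monthly hires and departures.

    monthly: list of (people_added, people_leaving) tuples, one per month.
    Returns the cumulative headcount change after each month; a flattening
    or falling trend is the attrition warning sign the metric looks for.
    """
    net, trend = 0, []
    for added, leaving in monthly:
        net += added - leaving
        trend.append(net)
    return trend

print(staffing_trend([(4, 0), (3, 1), (0, 3)]))  # [4, 6, 3]
```

Comparing this actual trend against the planned staffing profile gives the plan-versus-actuals view the metric requires.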

For discrete, one-of-a-kind development efforts (such as building a corporate information system), the staffing profile would be typical: it is reasonable to expect the maintenance team to be smaller than the development team for these sorts of developments. For a commercial product development, the sizes of the maintenance and development teams may be the same. When long-lived, continuously improved products are involved, maintenance is just continuous construction of new, better releases. Tracking actual versus planned staffing is a necessary and well-understood management metric.

There is one other important management indicator of changes in project momentum: the relationship between attrition and additions. Increases in staff can slow overall project progress as new people consume the productive time of existing people in coming up to speed. Low attrition of good people is a sign of success. Engineers are highly motivated by making progress in getting something to work; this is the recurring theme underlying an effective iterative development process. If this motivation is not there, good engineers will migrate elsewhere. An increase in unplanned attrition (namely, people leaving a project prematurely) is one of the most glaring indicators that a project is destined for trouble. The causes of such attrition can vary, but they are usually personnel dissatisfaction with management methods, lack of teamwork, or probability of failure in meeting the planned objectives.

Software Project Control Panel

The idea is to provide a display panel that integrates data from multiple sources to show the current status of some aspect of the project.
For example, the software project manager would want to see a display with overall project values, a test manager may want to see a display focused on metrics specific to an upcoming beta release, and development managers may be interested only in data concerning the subsystems and components for which they are responsible. The panel can support standard features such as warning lights, thresholds, variable scales, digital formats, and analog formats to present an overview of the current situation. It can also provide extensive capability for detailed situation analysis. This automation support can improve management insight into progress and quality trends and improve the acceptance of metrics by the engineering team.

Quality Metrics

The four quality metrics are based primarily on the measurement of software change across evolving baselines of engineering data (such as design models and source code).

Change Traffic and Stability

Overall change traffic is one specific indicator of progress and quality. Change traffic is defined as the number of software change orders opened and closed over the life cycle. This metric can be collected by change type, by release, across releases, by team, by component, by subsystem, and so forth. Coupled with the work and progress metrics, it provides insight into the stability of the software and its convergence toward stability (or divergence toward instability). Stability is defined as the relationship between opened and closed SCOs. The next three quality metrics focus more on the quality of the product.

Breakage and Modularity

Breakage is defined as the average extent of change, which is the amount of software baseline that needs rework (in SLOC, function points, components, subsystems, files, etc.). Modularity is the average breakage trend over time. For a healthy project, the trend expectation is decreasing or stable. This indicator provides insight into the benign or malignant character of software change.
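Both stability and breakage can be derived mechanically from the change-order database. A minimal sketch, assuming a hypothetical SCO record layout (the field names are not from the paper):

```python
def change_metrics(scos):
    """Stability and breakage from software change order (SCO) records.

    scos: list of dicts, each with 'status' ('open' or 'closed') and
          'reworked_sloc' (amount of baseline the change reworked).
    """
    opened = len(scos)
    closed_scos = [s for s in scos if s["status"] == "closed"]
    # stability: closed vs. opened SCOs (1.0 means all traffic resolved)
    stability = len(closed_scos) / opened if opened else 1.0
    # breakage: average extent of change across resolved SCOs
    breakage = (sum(s["reworked_sloc"] for s in closed_scos) / len(closed_scos)
                if closed_scos else 0.0)
    return stability, breakage

scos = [{"status": "closed", "reworked_sloc": 120},
        {"status": "closed", "reworked_sloc": 40},
        {"status": "open",   "reworked_sloc": 0}]
print(change_metrics(scos))  # stability ~ 0.67, breakage = 80.0 SLOC/change
```

Plotting these values per iteration gives the convergence trends (modularity) that the text describes.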
In a mature iterative development process, earlier changes are expected to result in more scrap than later changes. Breakage trends that are increasing with time clearly indicate that product maintainability is suspect.

Rework and Adaptability

Rework is defined as the average cost of change, which is the effort to analyze, resolve, and retest all changes to software baselines. Adaptability is defined as the rework trend over time. For a healthy project, the trend expectation is decreasing or stable. Not all changes are created equal: some can be made in a staff-hour, while others take staff-weeks. This metric provides insight into rework measurement. In a mature iterative development process, earlier changes (architectural changes, which affect multiple components and people) are expected to require more rework than later changes (implementation changes, which tend to be confined to a single component or person). Rework trends that are increasing with time clearly indicate that product maintainability is suspect.

MTBF and Maturity

MTBF is the average usage time between software faults. In general terms, MTBF is computed by dividing the test hours by the number of type 0 and type 1 SCOs. Maturity is defined as the MTBF trend over time. Early insight into maturity requires that an effective test infrastructure be established. Conventional testing approaches for monolithic software programs focused on achieving complete coverage of every line of code, every branch, and so forth. In today's distributed and

componentized software systems, such complete test coverage is achievable only for discrete components. Systems of components are more efficiently tested by using statistical techniques. Consequently, the maturity metrics measure statistics over usage time rather than product coverage.

Software errors can be categorized into two types: deterministic and nondeterministic. Physicists would characterize these as Bohr-bugs and Heisen-bugs, respectively. Bohr-bugs represent a class of errors that always result when the software is stimulated in a certain way. These errors are predominantly caused by coding errors, and changes are typically isolated to a single component. Heisen-bugs are software faults that are coincidental with a certain probabilistic occurrence of a given situation. These errors are almost always design errors (frequently requiring changes in multiple components) and typically are not repeatable even when the software is stimulated in the same apparent way. To provide adequate test coverage and resolve the statistically significant Heisen-bugs, extensive statistical testing under realistic and randomized usage scenarios is necessary. Conventional software programs executing a single program on a single processor typically contained only Bohr-bugs. Modern distributed systems, with numerous interoperating components executing across a network of processors, are vulnerable to Heisen-bugs, which are far more complicated to detect, analyze, and resolve. The best way to mature a software product is to establish an initial test infrastructure that allows execution of randomized usage scenarios early in the life cycle, and to continuously evolve the breadth and depth of usage scenarios to optimize coverage across the reliability-critical components.

The basic characteristics of a good metric are as follows:
1. It is considered meaningful by the customer, manager, and performer.
2. It demonstrates quantifiable correlation between process perturbations and business performance.
The only real organizational goals and objectives are financial: cost reduction, revenue increase, and margin increase.
3. It is objective and unambiguously defined. Ambiguity is minimized through well-understood units of measurement (such as staff-months or SLOC).
4. It displays trends. This is an important characteristic. Understanding the change in a metric's value with respect to time, subsequent projects, subsequent releases, and so forth is an extremely important perspective, especially for today's iterative development models. It is very rare that a given metric drives the appropriate action directly.
5. It is a natural by-product of the process. The metric does not introduce new artifacts or overhead activities; it is derived directly from the mainstream engineering and management workflows.
6. It is supported by automation. Experience has demonstrated that the most successful metrics are those that are collected and reported by automated tools, in part because software tools require rigorous definitions of the data they process.

Value judgments cannot be made by metrics; they must be left to smarter entities such as software project managers.

CRITICISMS

Software metrics tend to be used as an aid in judging the quality of software development. Metrics are relatively easy to produce, but their use as a management instrument has drawbacks:

Unethical: It is said to be unethical to reduce a person's performance to a small number of numerical variables and then judge him or her by that measure. A supervisor may assign the most talented programmer to the hardest tasks on a project, which means that programmer's task may take the longest to develop and may generate the most defects, owing to the difficulty of the task. Uninformed managers overseeing the project might then judge the programmer as performing poorly without consulting the supervisor, who has the full picture.

Demeaning: Management by numbers, without regard to the quality and experience of the employees, amounts to managing numbers instead of managing people.
Gaming: The measurement process is biased because employees seek to maximize management's perception of their performance. For example, if lines of code are used to judge performance, then employees will write as many separate lines of code as possible, and if they find a way to shorten their code, they may not use it.

Inaccurate: No known metrics are both meaningful and accurate. Lines of code measure exactly what is typed, but not the difficulty of the problem. Function points were developed to better measure the complexity of the code or specification, but they require personal judgment to use well. Different estimators will produce different results, which makes function points hard to use fairly and unlikely to be used well by everyone.

Uneconomical/Suboptimal: It has been argued that when the economic value of measurements is computed using proven methods from decision

theory, measuring software developer performance turns out to be a much lower priority than measuring uncertain benefits and risks.