Chapter 4: Software Process and Project Metrics
Measurement & Metrics
"Collecting metrics is too hard... it's too time-consuming... it's too political... it won't prove anything..."
Anything that you need to quantify can be measured in some way that is superior to not measuring it at all. -- Tom Gilb
Why do we Measure?
- to characterize
- to evaluate
- to predict
- to improve
A Good Manager Measures
- the process: process metrics
- the project: project metrics
- the product: product metrics
What do we use as a basis? Size? Function?
Process Metrics
- majority focus on quality achieved as a consequence of a repeatable or managed process
- statistical SQA data
- error categorization & analysis
- defect removal efficiency
- propagation of defects from phase to phase
- reuse data
Project Metrics
- effort/time per SE task
- errors uncovered per review hour
- scheduled vs. actual milestone dates
- changes (number) and their characteristics
- distribution of effort on SE tasks
Product Metrics
- focus on the quality of deliverables
- measures of the analysis model
- complexity of the design
  - internal algorithmic complexity
  - architectural complexity
  - data flow complexity
- code measures (e.g., Halstead)
- measures of process effectiveness (e.g., defect removal efficiency)
Metrics Guidelines
- Use common sense and organizational sensitivity when interpreting metrics data.
- Provide regular feedback to the individuals and teams who have worked to collect measures and metrics.
- Don't use metrics to appraise individuals.
- Work with practitioners and teams to set clear goals and the metrics that will be used to achieve them.
- Never use metrics to threaten individuals or teams.
- Metrics data that indicate a problem area should not be considered negative; they are merely an indicator for process improvement.
- Don't obsess on a single metric to the exclusion of other important metrics.
Normalization for Metrics
Normalized data are used to evaluate the process and the product (but never individual people):
- size-oriented normalization: the line-of-code approach
- function-oriented normalization: the function-point approach
Typical Size-Oriented Metrics
- errors per KLOC (thousand lines of code)
- defects per KLOC
- $ per LOC
- pages of documentation per KLOC
- errors per person-month
- LOC per person-month
- $ per page of documentation
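Size-oriented normalization simply divides each raw measure by program size in KLOC. A minimal sketch in Python (the function name and the sample figures are illustrative, not from the slides):

```python
def per_kloc(measure, loc):
    """Normalize a raw project measure (errors, defects, cost, pages)
    by size in KLOC so differently sized projects can be compared."""
    return measure / (loc / 1000.0)

# e.g., 50 errors in a 25,000-LOC product -> 2.0 errors per KLOC
errors_per_kloc = per_kloc(50, 25000)
```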
Typical Function-Oriented Metrics
- errors per FP
- defects per FP
- $ per FP
- pages of documentation per FP
- FP per person-month
Why Opt for FP Measures?
- independent of programming language
- uses readily countable characteristics of the "information domain" of the problem
- does not "penalize" inventive implementations that require fewer LOC than others
- makes it easier to accommodate reuse and the trend toward object-oriented approaches
Computing Function Points
1. Analyze the information domain of the application and develop counts: establish counts for inputs, outputs, inquiries, files, and system interfaces.
2. Weight each count by assessing complexity: assign a level of complexity (simple, average, complex) and the corresponding weight to each count.
3. Assess the influence of global factors that affect the application: grade the significance of external factors F_i, such as reuse, concurrency, OS, ...
4. Compute function points:
   function points = count-total x C
   where count-total = sum of (count x weight)
   complexity multiplier C = 0.65 + 0.01 x N
   degree of influence N = sum of F_i
Analyzing the Information Domain

measurement parameter        count     weighting factor (simple / avg. / complex)
number of user inputs         ___   x      3 /  4 /  6    =  ___
number of user outputs        ___   x      4 /  5 /  7    =  ___
number of user inquiries      ___   x      3 /  4 /  6    =  ___
number of files               ___   x      7 / 10 / 15    =  ___
number of ext. interfaces     ___   x      5 /  7 / 10    =  ___
count-total                                                  ___
x complexity multiplier
= function points
Taking Complexity into Account
Factors are rated on a scale of 0 (not important) to 5 (very important):
- data communications
- distributed functions
- heavily used configuration
- transaction rate
- on-line data entry
- end-user efficiency
- on-line update
- complex processing
- installation ease
- operational ease
- multiple sites
- facilitate change
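The counting procedure can be sketched in Python. This is a hedged illustration: the weight table follows the slide's simple/average/complex values, while the function name, argument shapes, and sample inputs are invented for the example.

```python
# Weighting factors from the information-domain table:
# (simple, average, complex) for each measurement parameter.
WEIGHTS = {
    "inputs": (3, 4, 6),
    "outputs": (4, 5, 7),
    "inquiries": (3, 4, 6),
    "files": (7, 10, 15),
    "interfaces": (5, 7, 10),
}

def function_points(counts, complexity, factors):
    """counts: {parameter: raw count}
    complexity: {parameter: 0 (simple), 1 (average), or 2 (complex)}
    factors: the F_i ratings, each on the 0..5 scale."""
    count_total = sum(n * WEIGHTS[p][complexity[p]] for p, n in counts.items())
    n = sum(factors)               # degree of influence, N = sum of F_i
    c = 0.65 + 0.01 * n            # complexity multiplier
    return count_total * c
```

For example, 40 average-complexity user inputs with all fourteen factors rated 5 give 40 x 4 = 160 weighted counts, N = 70, C = 1.35, and 216 function points.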
Measuring Quality
- correctness: the degree to which a program operates according to specification
- maintainability: the degree to which a program is amenable to change
- integrity: the degree to which a program is impervious to outside attack
- usability: the degree to which a program is easy to use
Defect Removal Efficiency
DRE = errors / (errors + defects)
where:
  errors = problems found before release
  defects = problems found after release
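As a sketch, the DRE formula translates directly into Python (the counts below are invented for illustration):

```python
def defect_removal_efficiency(errors, defects):
    """DRE = errors / (errors + defects): the fraction of all problems
    that were caught before release rather than after."""
    return errors / (errors + defects)

# e.g., 90 problems found before release and 10 after -> DRE = 0.9
dre = defect_removal_efficiency(90, 10)
```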
Managing Variation: the mR Control Chart
[control chart plotting Er, errors found per review hour (0-6), across projects 1-19]
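The limits behind such a chart can be sketched with the standard individuals/moving-range constant 2.66; this is an illustrative implementation, not code from the slides:

```python
def control_limits(values):
    """Natural process limits for an individuals (mR) control chart:
    mean +/- 2.66 * average moving range between successive points."""
    mean = sum(values) / len(values)
    moving_ranges = [abs(b - a) for a, b in zip(values, values[1:])]
    mr_bar = sum(moving_ranges) / len(moving_ranges)
    upper = mean + 2.66 * mr_bar
    lower = max(0.0, mean - 2.66 * mr_bar)   # a rate cannot go below zero
    return lower, mean, upper
```

A point outside these limits signals variation worth investigating, rather than normal process noise.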
Chapter 5: Software Project Planning
Software Project Planning
The overall goal of project planning is to establish a pragmatic strategy for controlling, tracking, and monitoring a complex technical project.
Why? So the end result gets done on time, with quality!
The Steps
- Scoping: understand the problem and the work that must be done
- Estimation: how much effort? how much time?
- Risk: what can go wrong? how can we avoid it? what can we do about it?
- Schedule: how do we allocate resources along the timeline? what are the milestones?
- Control strategy: how do we control quality? how do we control change?
Write it Down!
- project scope
- estimates
- risks
- schedule
- control strategy
Together, these make up the Software Project Plan.
To Understand Scope...
Understand the customer's needs:
- understand the business context
- understand the project boundaries
- understand the customer's motivation
- understand the likely paths for change
- understand that... even when you understand, nothing is guaranteed!
Cost Estimation
- project scope must be explicitly defined
- task and/or functional decomposition is necessary
- historical measures (metrics) are very helpful
- at least two different techniques should be used
- remember that uncertainty is inherent
Estimation Techniques
- past (similar) project experience
- conventional estimation techniques
  - task breakdown and effort estimates
  - size (e.g., FP) estimates
- tools (e.g., Checkpoint)
Functional Decomposition
Start from the statement of scope, perform a "grammatical parse" of it, and derive the functional decomposition.
Creating a Task Matrix
The matrix is obtained from the process framework: framework activities along one axis, application functions along the other. Each cell records the effort required to accomplish that framework activity for that application function.
Conventional Methods: the LOC/FP Approach
- compute LOC or FP using estimates of information domain values
- use historical productivity and effort data to turn size into an estimate for the project
Example: LOC Approach

function   estimated LOC   LOC/pm   $/LOC   cost ($)   effort (months)
UICF            2,340        315      14      32,000         7.4
2DGA            5,380        220      20     107,000        24.4
3DGA            6,800        220      20     136,000        30.9
DSM             3,350        240      18      60,000        13.9
CGDF            4,950        200      22     109,000        24.7
PCF             2,140        140      28      60,000        15.2
DAM             8,400        300      18     151,000        28.0
totals         33,360                        655,000       145.0
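The arithmetic behind each row is effort = LOC / productivity and cost = LOC x $/LOC. A sketch reproducing the UICF row (the dollar figures in the table are rounded):

```python
# UICF row: estimated LOC, productivity (LOC per person-month), $ per LOC
loc, loc_per_pm, dollars_per_loc = 2340, 315, 14

effort_months = loc / loc_per_pm     # ~7.4 person-months
cost = loc * dollars_per_loc         # $32,760, shown as ~$32,000 in the table
```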
Example: FP Approach

measurement parameter        count       weight
number of user inputs          40   x      4   =   160
number of user outputs         25   x      5   =   125
number of user inquiries       12   x      4   =    48
number of files                 4   x      7   =    28
number of ext. interfaces       4   x      7   =    28
algorithms                     60   x      3   =   180
count-total                                        569
complexity multiplier                             0.84
feature points                                     478

At 0.25 person-months per FP, the estimate is about 120 person-months.
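A sketch reproducing the slide's arithmetic:

```python
# (count, weight) pairs: inputs, outputs, inquiries, files,
# external interfaces, algorithms
counts = [(40, 4), (25, 5), (12, 4), (4, 7), (4, 7), (60, 3)]

count_total = sum(n * w for n, w in counts)   # 569
fp = count_total * 0.84                       # complexity multiplier -> ~478
effort_pm = fp * 0.25                         # 0.25 p-m per FP -> ~120 p-m
```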
Tool-Based Estimation
Project characteristics, calibration factors, and historical LOC/FP data are fed into an estimation tool, which produces the estimate.
Empirical Estimation Models
General form:
  effort = tuning coefficient x size^exponent
where:
- effort is usually expressed as the person-months required
- the tuning coefficient is empirically derived
- size is usually LOC, but may also be function points
- the exponent is either a constant or a number derived from the complexity of the project
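As one well-known instance of this general form, basic COCOMO's organic mode uses a tuning coefficient of 2.4 and a size exponent of 1.05, with size in KLOC. A sketch (COCOMO is brought in here as an example; it is not named on the slide):

```python
def estimate_effort(kloc, coefficient=2.4, exponent=1.05):
    """General empirical model: effort = coefficient * size ** exponent.
    Defaults are the basic COCOMO organic-mode parameters."""
    return coefficient * kloc ** exponent   # person-months
```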
Estimation Guidelines
- estimate using at least two techniques
- get estimates from independent sources
- avoid over-optimism; assume difficulties
- once you've arrived at an estimate, sleep on it
- adjust for the people who'll be doing the job; they have the highest impact
The Make-Buy Decision
[decision tree weighing the build, reuse, buy, and contract options]
Computing Expected Cost
  expected cost = sum over paths i of (path probability)_i x (estimated path cost)_i
For example, the expected cost to build is:
  expected cost (build) = 0.30 x $380K + 0.70 x $450K = $429K
Similarly:
  expected cost (reuse) = $382K
  expected cost (buy) = $267K
  expected cost (contract) = $410K
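The expected-cost sum translates directly into code; a sketch using the slide's build-path figures:

```python
def expected_cost(paths):
    """paths: (probability, estimated cost) pairs, one per decision-tree
    branch; returns the probability-weighted cost."""
    return sum(p * cost for p, cost in paths)

# build option from the slide, in $K: 0.30 * 380 + 0.70 * 450 = 429
build = expected_cost([(0.30, 380), (0.70, 450)])
```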
Chapter 6: Risk Management
Project Risks
- What can go wrong?
- What is the likelihood?
- What will the damage be?
- What can we do about it?
Reactive Risk Management
- the project team reacts to risks when they occur
- mitigation: plan for additional resources in anticipation of fire fighting
- fix on failure: resources are found and applied when the risk strikes
- crisis management: failure does not respond to applied resources, and the project is in jeopardy
Proactive Risk Management
- formal risk analysis is performed
- the organization corrects the root causes of risk
  - TQM concepts and statistical SQA
  - examining risk sources that lie beyond the bounds of the software
  - developing the skill to manage change
Risk Management Paradigm
A continuous loop around RISK: identify -> analyze -> plan -> track -> control.
Building a Risk Table
Columns: risk | probability | impact | RMMM (Risk Mitigation, Monitoring & Management)
Building the Risk Table
- Estimate the probability of occurrence.
- Estimate the impact on the project on a scale of 1 to 5, where 1 = low impact on project success and 5 = catastrophic impact on project success.
- Sort the table by probability and impact.
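The sorting step can be sketched by ranking risks on probability x impact; the example risks below are invented for illustration:

```python
# (risk, probability, impact on the 1-5 scale)
risks = [
    ("size estimate may be too low", 0.30, 2),
    ("staff turnover", 0.60, 4),
    ("CASE tools underperform", 0.10, 5),
]

# highest probability-x-impact first, so the cut line for risks that
# get active RMMM attention falls naturally out of the ordering
ranked = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)
```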
Risk Mitigation, Monitoring, and Management
- mitigation: how can we avoid the risk?
- monitoring: what factors can we track that will enable us to determine whether the risk is becoming more or less likely?
- management: what contingency plans do we have if the risk becomes a reality?
Risk Due to Product Size
Attributes that affect risk:
- estimated size of the product in LOC or FP?
- estimated size of the product in number of programs, files, transactions?
- percentage deviation in size of the product from the average for previous products?
- size of the database created or used by the product?
- number of users of the product?
- number of projected changes to the requirements for the product? before delivery? after delivery?
- amount of reused software?
Risk Due to Business Impact
Attributes that affect risk:
- effect of this product on company revenue?
- visibility of this product to senior management?
- reasonableness of the delivery deadline?
- number of customers who will use this product?
- interoperability constraints?
- sophistication of end users?
- amount and quality of product documentation that must be produced and delivered to the customer?
- governmental constraints?
- costs associated with late delivery?
- costs associated with a defective product?
Risks Due to the Customer
Questions that must be answered:
- Have you worked with the customer in the past?
- Does the customer have a solid idea of requirements?
- Has the customer agreed to spend time with you?
- Is the customer willing to participate in reviews?
- Is the customer technically sophisticated?
- Is the customer willing to let your people do their job, that is, will the customer resist looking over your shoulder during technically detailed work?
- Does the customer understand the software engineering process?
Risks Due to Process Maturity
Questions that must be answered:
- Have you established a common process framework?
- Is it followed by project teams?
- Do you have management support for software engineering?
- Do you have a proactive approach to SQA?
- Do you conduct formal technical reviews?
- Are CASE tools used for analysis, design and testing?
- Are the tools integrated with one another?
- Have document formats been established?
Technology Risks
Questions that must be answered:
- Is the technology new to your organization?
- Are new algorithms or I/O technology required?
- Is new or unproven hardware involved?
- Does the application interface with new software?
- Is a specialized user interface required?
- Is the application radically different?
- Are you using new software engineering methods?
- Are you using unconventional software development methods, such as formal methods, AI-based approaches, or artificial neural networks?
- Are there significant performance constraints?
- Is there doubt that the requested functionality is "do-able"?
Staff/People Risks
Questions that must be answered:
- Are the best people available?
- Does the staff have the right skills?
- Are enough people available?
- Are staff committed for the entire duration?
- Will some people work part time?
- Do staff have the right expectations?
- Have staff received necessary training?
- Will turnover among staff be low?
Recording Risk Information
Project: embedded software for XYZ system
Risk type: schedule risk
Priority (1 low ... 5 critical): 4
Risk factor: project completion will depend on tests that require a hardware component still under development; hardware component delivery may be delayed
Probability: 60%
Impact: project completion will be delayed for each day that the hardware is unavailable for use in software testing
Monitoring approach: scheduled milestone reviews with the hardware group
Contingency plan: modify the testing strategy to accommodate the delay using software simulation
Estimated resources: 6 additional person-months beginning 7-1-96