Quality Management Lessons of COQUALMO (COnstructive QUALity MOdel): A Software Defect Density Prediction Model


1 Quality Management Lessons of COQUALMO (COnstructive QUALity MOdel): A Software Defect Density Prediction Model. AWBrown and Sunita Chulani, Ph.D., {AWBrown, sdevnani}@csse.usc.edu, USC Center for Systems & Software Engineering (USC-CSSE)

2 Outline
- Behavioral Underpinnings: Hidden Factory, Defect Types
- COQUALMO Framework
- The Defect Introduction Sub-Model: Expert-Judgment Model + Some Initial Data Results
- The Defect Removal Sub-Model: Expert-Judgment Model (Result of COQUALMO Workshop)
- COQUALMO Integrated with COCOMO II

3 Modeling Methodology

4 Operationally: Wideband Delphi. Final values (for each parameter): $\text{Max} = (H_{\max} + 4 \cdot \text{AVE}_{\max} + L_{\max}) / 6$
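As a worked example with hypothetical ratings (not values from the study): if the experts' maximum ratings for a driver are $H_{\max} = 12$, $\text{AVE}_{\max} = 6$, and $L_{\max} = 3$, then $\text{Max} = (12 + 4 \cdot 6 + 3)/6 = 39/6 = 6.5$. Weighting the average four times as heavily as either extreme keeps a single outlying expert from dominating the Delphi rollup.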

5 [Figure: Fig. 11, p. 170, Software Cost Estimation with COCOMO II (SwCEwCII)]

6 Model Framework [diagram]: Defect Introduction pipes (requirements defects, design defects, code defects) feed the pool of residual software defects; Defect Removal pipes drain it.

7 Software Development Process(es) (cont.): the hidden factory

8 Outline
- Model Framework
- The Defect Introduction Sub-Model: Expert-Judgment Model + Some Initial Data Results
- The Defect Removal Sub-Model: Expert-Judgment Model (Result of COQUALMO Workshop)
- COQUALMO Integrated with COCOMO II
- Future Plans
- Call for Data

9 The Defect Introduction (DI) Sub-Model
Inputs: software size estimate; software product, process, computer, and personnel attributes (a subset of the COCOMO II factors).
Output: number of non-trivial requirements, design, and code defects introduced.

10 A-Priori Expert-Judgment-Based Code DI Ranges [bar chart over the 22 drivers: FLEX, RUSE, STOR, TIME, DATA, ACAP, AEXP, PEXP, DOCU, SITE, TEAM, SCED, LTEX, PVOL, PREC, TOOL, PCON, PCAP, CPLX, RESL, RELY, PMAT]

11 DI Model Equations

$$\text{Estimated Number of Defects Introduced} = \sum_{j=1}^{3} A_j \, (\text{Size})^{B_j} \prod_{i=1}^{21} (\text{DI-driver})_{ij}$$

where $j$ identifies the three artifact types (requirements, design, and coding); $A_j$ is the multiplicative calibration constant; $B_j$ is initially set to 1; and $(\text{DI-driver})_{ij}$ is the Defect Introduction driver for the $j$-th artifact and the $i$-th factor.

For each artifact $j$, the Quality Adjustment Factor is $\text{QAF}_j = \prod_{i=1}^{21} (\text{DI-driver})_{ij}$, so

$$\text{Estimated Number of Defects Introduced} = \sum_{j=1}^{3} A_j \, (\text{Size})^{B_j} \, \text{QAF}_j$$
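The DI equation is easy to sanity-check in code. The sketch below (Python) uses made-up calibration constants and driver ratings purely to show the mechanics; it is not the published COQUALMO calibration.

```python
# Sketch of the Defect Introduction sub-model; all numbers are hypothetical.
from math import prod

def defects_introduced(size_ksloc, artifacts):
    """Sum A_j * Size^B_j * QAF_j over the three artifact types."""
    total = 0.0
    for art in artifacts:
        qaf = prod(art["di_drivers"])  # QAF_j: product of the DI-driver ratings
        total += art["A"] * size_ksloc ** art["B"] * qaf
    return total

# Hypothetical calibration; a real model would use all DI-drivers per artifact.
artifacts = [
    {"name": "requirements", "A": 5.0,  "B": 1.0, "di_drivers": [1.10, 0.90]},
    {"name": "design",       "A": 10.0, "B": 1.0, "di_drivers": [1.20, 1.00]},
    {"name": "code",         "A": 15.0, "B": 1.0, "di_drivers": [0.80, 1.10]},
]
print(defects_introduced(10.0, artifacts))  # defects introduced, 10-kSLOC project
```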

12 Initial Data Analysis on the DI Model [table: rows Requirements, Design, Code; columns 1970 Baseline DIRs, Quality Adjustment Factor, Predicted DIR, Actual DIR, Calibrated Constant (A), 1990 Baseline DIRs; numeric values not recovered]. DIR = Defect Introduction Rate.

13 Outline
- Model Framework
- The Defect Introduction Sub-Model: Expert-Judgment Model + Some Initial Data Results
- The Defect Removal Sub-Model: Expert-Judgment Model (Result of COQUALMO Workshop)
- COQUALMO Integrated with COCOMO II
- Future Plans
- Call for Data

14 The Defect Removal (DR) Sub-Model
Inputs: number of non-trivial requirements, design, and coding defects introduced (from the DI Sub-Model); defect removal activity levels; software size estimate.
Output: number of residual defects per unit of size.

15 Defect Removal Profiles: three relatively orthogonal profiles: Automated Analysis, People Reviews, and Execution Testing and Tools. Each profile has six levels: Very Low, Low, Nominal, High, Very High, Extra High. Very Low removes the fewest defects; Extra High removes the most.

16 Automated Analysis
- Very Low: Simple compiler syntax checking.
- Low: Basic compiler (or additional tools) capabilities for static module-level code analysis, and syntax- and type-checking.
- Nominal: All of the above, plus some compiler extensions for static module- and inter-module-level code analysis, and syntax- and type-checking. Basic requirements and design consistency and traceability checking.
- High: All of the above, plus intermediate-level module and inter-module code syntax and semantic analysis. Simple requirements/design consistency checking across views.
- Very High: All of the above, plus more elaborate requirements/design view consistency checking. Basic distributed-processing and temporal analysis, model checking, symbolic execution.
- Extra High: All of the above, plus formalized* specification and verification. Advanced distributed-processing and temporal analysis, model checking, symbolic execution.
* Consistency-checkable pre-conditions and post-conditions, but not mathematical theorems.

17 Static [Module-Level Code] Analysis (Wikipedia): Static code analysis is the analysis of computer software that is performed without actually executing programs built from that software (analysis performed on executing programs is known as dynamic analysis). In most cases the analysis is performed on some version of the source code; in other cases, on some form of the object code.
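To make the definition concrete, here is a toy static check in Python (an illustrative sketch, not any tool referenced in the deck): it examines the parse tree of source text without ever executing the program, flagging bare `except:` handlers.

```python
# Toy static analysis: inspect code via its parse tree; nothing is executed.
import ast

SOURCE = '''
def risky():
    try:
        return 1 / 0
    except:      # bare except silently swallows every error
        pass
'''

tree = ast.parse(SOURCE)  # parse only; the function body never runs
for node in ast.walk(tree):
    if isinstance(node, ast.ExceptHandler) and node.type is None:
        print(f"line {node.lineno}: bare 'except:' clause")
```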

18 Static [Module-Level Code] Analysis (SWEBOK, sans references), 4.2 Quality Analysis and Evaluation Techniques: Various tools and techniques can help ensure a software design's quality.
- Software design reviews: informal or semiformal, often group-based, techniques to verify and ensure the quality of design artifacts (for example, architecture reviews, design reviews and inspections, scenario-based techniques, requirements tracing)
- Static analysis: formal or semiformal static (nonexecutable) analysis that can be used to evaluate a design (for example, fault-tree analysis or automated cross-checking)
- Simulation and prototyping: dynamic techniques to evaluate a design (for example, performance simulation or feasibility prototype)

19 Peer Reviews
- Very Low: No people reviews.
- Low: Ad-hoc informal walkthroughs. Minimal preparation, no follow-up.
- Nominal: Well-defined sequence of preparation, review, minimal follow-up. Informal review roles and procedures.
- High: Formal review roles and procedures applied to all products using basic checklists*; follow-up.
- Very High: Formal review roles and procedures applied to all product artifacts & changes; formal change control boards. Basic review checklists, root cause analysis. Use of historical data on inspection rate, preparation rate, fault density.
- Extra High: Formal review roles and procedures for fixes, change control. Extensive review checklists, root cause analysis. Continuous review process improvement. User/customer involvement, Statistical Process Control.
* Checklists are lists of things to look for or check against (e.g. exit criteria).

20 Syntactic Versus Semantic Checking: Both sentences below are syntactically correct; only one is semantically correct. "A panda enters the bar, eats shoots and leaves." "A panda enters the bar, eats, shoots and leaves."
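The same distinction holds for code. In the hypothetical Python snippet below, both statements parse (the syntax is fine), but only the first is semantically sound; the second fails when its meaning is checked at run time (a static type checker would flag it without running anything).

```python
# Syntactically valid vs. semantically valid: both lines parse cleanly.
total = 2 + 3            # semantically fine: int + int
try:
    broken = "2" + 3     # parses, but str + int has no defined meaning
except TypeError as err:
    print(f"semantic error surfaced at run time: {err}")
```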

21 Execution Testing and Tools
- Very Low: No testing.
- Low: Ad-hoc testing and debugging. Basic text-based debugger.
- Nominal: Basic unit test, integration test, system test process. Basic test data management, problem tracking support. Test criteria based on checklists.
- High: Well-defined test sequence tailored to organization (acceptance, alpha, beta, flight, etc. test). Basic test coverage tools, test support system. Basic test process management.
- Very High: More advanced test tools, test data preparation, basic test oracle support, distributed monitoring and analysis, active assertion checking. Metrics-based test process management.
- Extra High: Highly advanced tools for test oracles, distributed monitoring and analysis, assertion checking. Integration of automated analysis and test tools. Model-based test process management.
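For contrast with the static techniques above, here is a minimal execution-based check in the spirit of the Nominal level: a basic unit test with assertions (an illustrative sketch; `defect_density` is a made-up helper, not COQUALMO code).

```python
# Execution-based checking: the code under test actually runs.
import unittest

def defect_density(defects, ksloc):
    """Made-up helper: residual defects per kSLOC."""
    if ksloc <= 0:
        raise ValueError("size must be positive")
    return defects / ksloc

class DefectDensityTest(unittest.TestCase):
    def test_nominal_case(self):
        self.assertAlmostEqual(defect_density(143.0, 10.0), 14.3)

    def test_rejects_zero_size(self):
        with self.assertRaises(ValueError):
            defect_density(10.0, 0.0)

if __name__ == "__main__":
    unittest.main()
```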

22 Technique Selection Guidance: Under specified conditions,
- Peer reviews are more effective than functional testing for faults of omission and incorrect specification (UMD, )
- Functional testing is more effective than reviews for faults concerning numerical approximations and control flow (UMD, )

23 Residual Defects Equation

$$DRes_{Est,j} = C_j \cdot DI_{Est,j} \cdot \prod_{i=1}^{3} (1 - DRF_{ij})$$

where $DRes_{Est,j}$ = estimated number of residual defects for the $j$-th artifact; $C_j$ = calibration constant for the $j$-th artifact; $DI_{Est,j}$ = estimated number of defects introduced for the $j$-th artifact (output of the DI Sub-Model); $i$ = defect removal profile; $DRF_{ij}$ = defect removal fraction.
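In code, the DR equation is a single product. The sketch below uses hypothetical removal fractions and a calibration constant of 1.0, chosen only to show the mechanics.

```python
# Sketch of the Defect Removal sub-model; DRFs below are hypothetical.
from math import prod

def residual_defects(di_est, drfs, c=1.0):
    """DRes = C * DI * prod(1 - DRF_i) over the three removal profiles."""
    return c * di_est * prod(1.0 - f for f in drfs)

# Hypothetical: 100 defects introduced; automated analysis, people reviews,
# and execution testing remove 10%, 30%, and 50% of them respectively.
print(f"{residual_defects(100.0, [0.10, 0.30, 0.50]):.1f}")  # -> 31.5 residual
```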

24 Defect Densities from Expert-Judgment-Calibrated COQUALMO [table: for each rating level (Very Low through Extra High), the DRFs for Automated Analysis, People Reviews, and Execution Testing and Tools, the product (1 - DRF_ij), and DI/kSLOC and DRes/kSLOC broken out by Requirements (R), Design (D), and Code (C). Recovered DRes/kSLOC totals: Very Low 60, Low 28.5, Nominal 14.3, High 7.5, Very High 3.5; the Extra High total was not recovered.]

25 Validation of Defect Densities
Average defect density using Jones's data, weighted by the CMM maturity-level distribution of 542 organizations: 13.9 defects/kSLOC. Average defect density using COQUALMO: 14.3 defects/kSLOC.
[Chart: % of organizations by maturity (Initial, Defined, Optimizing) against residual defect density from Jones's data (Leading, Average, Lagging)]
Weighted average: (.135 * … + … + … * 18.3) = 13.9 defects/kSLOC
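The 13.9 figure is a plain expected value over the maturity distribution. A sketch with wholly hypothetical shares and band densities (only the 0.135 weight and the 18.3 defects/kSLOC figure survive on the slide, so the numbers below are illustrative, not the Jones data):

```python
# Maturity-weighted average defect density; all numbers here are hypothetical.
shares    = [0.20, 0.50, 0.30]  # fraction of organizations per band
densities = [5.0, 12.0, 20.0]   # residual defects/kSLOC per band
print(sum(s * d for s, d in zip(shares, densities)))  # -> 13.0 defects/kSLOC
```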

26 An Independent Validation Study
Aim: to validate that the expert-determined COQUALMO shows the correct trends in defect rates. Sample project: Size = … kSLOC.
[DI table: rows Reqts, Design, Code; columns DI, DIR, Quality Adjustment Factor (QAF_j), Baseline DIR, 1970s Baseline DIR; numeric values not recovered]
[DR table: rows Reqts, Design, Code; columns Automated Analysis (VL), Peer Reviews (L - VL), Execution Testing and Tools (H - VH), Product (1 - DRF), DI/kSLOC, DRes/kSLOC; numeric values not recovered except Total: 5.57]
Actual defect density = 6 defects/kSLOC

27 Outline
- Model Framework
- The Defect Introduction Sub-Model: Expert-Judgment Model + Some Initial Data Results
- The Defect Removal Sub-Model: Expert-Judgment Model (Result of COQUALMO Workshop)
- COQUALMO Integrated with COCOMO II

28 Integrated COQUALMO [diagram]: The software size estimate and the software platform, project, product, and personnel attributes feed COCOMO II, which produces the software development effort, cost, and schedule estimate. The same inputs, plus the defect removal profile levels, feed COQUALMO: the Defect Introduction Model and the Defect Removal Model together yield the number of residual defects and the defect density per unit of size.
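Putting the pipeline on this slide into a few lines (every constant below is hypothetical): size and drivers feed the DI sub-model, its output feeds the DR sub-model, and the result is a residual defect density per unit of size.

```python
# End-to-end sketch of integrated COQUALMO; every constant is hypothetical.
from math import prod

size_ksloc = 10.0
a, b, qaf = 15.0, 1.0, 0.95        # DI calibration for one artifact (made up)
drfs = [0.10, 0.30, 0.50]          # removal fractions for the three profiles

di = a * size_ksloc ** b * qaf                 # defects introduced
dres = di * prod(1.0 - f for f in drfs)        # residual defects after removal
print(f"{dres / size_ksloc:.2f} residual defects/kSLOC")
```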