Finding The Relationship Between Software Testing Effort And Software Quality Metrics


N. Yagci 1, K. Ayan 2
1 TUBITAK BILGEM, Gebze, Kocaeli, Turkey
2 Computer Engineering, Sakarya University, Serdivan, Sakarya, Turkey

Abstract - Software testing plays a very important role in the Software Development Life Cycle for ensuring software quality, and its importance grows day by day. One of the jobs software test engineers perform is executing tests according to the test cases. Before test execution starts, however, the test manager has to schedule and plan the work, and for this purpose he or she has to estimate the test effort as accurately as possible. The accuracy of this estimation is very important for project success, because resource and time planning is carried out according to it. The methods used so far to estimate test effort are either too subjective or require too much effort. In this article, we propose a new method for test effort estimation. The proposed method finds the relationship between software quality metrics and test execution effort and then makes the estimation according to this relationship.

Keywords: Testing Effort Estimation, Software Quality Metric

1 Introduction and Previous Works

Software testing has growing importance in software projects. Software developers used to test their own products, but nowadays many software companies embrace the independent testing team approach, in which the testing team reports to the test manager rather than the project manager [1]. In this approach, the test manager is responsible for time, resource and budget planning. One of the jobs software test engineers perform is executing tests according to the test cases [4]. Test managers have to estimate the time required for executing these test cases. Test effort estimation is the estimation of test time and test resources before test execution starts. There are many methods for estimating test effort; the most prominent are the following.

Taking a percentage of the software development effort: this method is commonly used because of its simplicity. The software development effort is taken, and the test effort is calculated by dividing it by a number chosen by the project manager (for example, Testing Effort = software development effort * 1/4). But testing and coding are different professions and require different expertise.

Function Point Analysis: this method was developed for specifying project size. One of the initial design criteria for function points was to provide a mechanism that both software developers and users could utilize to define functional requirements [3].

Test Point Analysis: test point analysis (TPA) is a test estimate preparation technique that can be used to objectively prepare estimates for system and acceptance tests [2]. However, TPA itself only covers black-box testing. Thus, it is often used in conjunction with FPA, which in turn does not cover system and acceptance tests. Consequently, FPA and TPA merged together provide a means for estimating both white- and black-box testing efforts [2].

Use Case Points: use cases in their most primitive form are basically representative of what the user wants from a system [9]. Each scenario and its exception flows for each use case are input for a test case. Subsequently, the estimation calculations can commence. As the requirements become clearer further downstream, the estimates also undergo revision [9].
The common point of all these methods is that detailed work has to be done to estimate the test effort. The team that is going to make the estimation has to know very specific information about the software under test and its documentation, and has to work for a very long time. Accurate estimation is only possible under these circumstances. But under today's competitive conditions it is usually not possible to use these methods: there are not enough resources or time. There is a strong need to handle test effort estimation in a short time without needing many input artifacts. Otherwise, tests tend to be executed in an unorganized way.

2 Proposed Approach For Estimating Test Effort And Case Study

Software quality metrics have been used since the 1970s for measuring software quality, and they give us very important clues about the software [5]. We claim that these metrics also have a very important effect on test effort estimation.

To examine this claim, we carried out a case study. We consider the software metrics on two bases, method based and class based; for descriptions of these metrics, see the Appendix (Table 10 and Table 11).

Method Based Metrics: Branches, Call_Pairs, ed(g), Edge_Count, ev(g), evgb4, id(g), iv(g), Lines_with_Nodes, MNT_SEV, Norm_v(g), Param_Count, p, SLOC, vd(g), v(g), vgb10orevgb4

Class Based Metrics: Avg_v(g), Branches, Depth, ev(g), id(g), iv(g), Lack_Cohesion, Max_ev(g), Max_v(g), MNT_SEV, Norm_v(g), Parent_Count, p, RFC, Sum_v(g), vd(g), v(g)

Finding the relationship between test effort and software quality metrics makes it possible to estimate the test effort easily, quickly, reliably and objectively. In this study, we have tried to examine the accuracy of the proposed method. We chose one of the programs developed by our software team. The software under test has software requirements and test cases documented according to those requirements. Choosing independent test cases is important because it makes the differences easy to see. The chosen test cases are T1, T2, T3, T5, T6, T7, T9, T11 and T13, and the method based and class based metrics listed above are used. After all test cases are executed, the coverage is saved for each test case separately. Table 1 shows the method based coverage percentages of test case T1 as an example.

Table 1 - Test Case-Method Coverage; columns: Test Case No, Method Name, Coverage (Percentage); the T1 rows include AdminPanel_windows.commonworks.arrangetooltip(DataGridView, Dictionary, Tooltip, DataGridViewCellEventArgs), AdminPanel_windows.forms.frmIntroduction.frmIntroduction() and AdminPanel_windows.forms.frmIntroduction.fillComboDb()

After the coverage work, test case metric evaluation tables (method based and class based) are composed. An example calculation of method metrics for a test case: it is assumed that test case Tn consists of methods M1, M2 and M3, with the coverage percentages shown in the following table.

Table 2 - Test Case-Method Coverage Example
Test Case No | Method Name | Coverage (Percentage)
Tn | M1 |
Tn | M2 | 100%
Tn | M3 | 76%

The following table shows the v(g) metric value for the M1, M2 and M3 methods.

Table 3 - Method v(g) Metric
Method Name | v(g)
M1 | 5
M2 | 12
M3 | 3
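The coverage recorded after execution (as in Table 1) can be kept as a simple mapping from each test case to the methods it exercises and their coverage ratios. The following sketch is only an illustration of that shape of data; the method names and ratios are made up, and the calculation in the next step consumes exactly this structure.

```python
# Hypothetical representation of the saved coverage data (cf. Table 1):
# for every test case, the coverage ratio of each method it exercises.
coverage_by_test_case: dict[str, dict[str, float]] = {
    "T1": {
        "ModuleA.InitForm()": 1.00,      # illustrative value
        "ModuleA.FillComboDb()": 0.85,   # illustrative value
    },
    "T2": {
        "ModuleA.FillComboDb()": 0.40,   # illustrative value
    },
}

# Example lookup: methods exercised by T1 and their coverage ratios.
for method, ratio in coverage_by_test_case["T1"].items():
    print(f"{method}: {ratio:.0%}")
```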

The calculation of the v(g) metric for test case Tn is:

Tn_v(g) = Σ_{i=1..m} M_v(g),i * C_i    (1)

where M_v(g),i represents the v(g) metric value of method i and C_i represents its coverage percentage. According to this formula, the following tables show the results for test cases T1, T2, T3, T5, T6, T7, T9, T11 and T13 in the method based calculation.

Table 4.1 - Test Cases Method Metrics; columns: Branches, Call_Pairs, SLOC, vgb10orevgb4, vd(g), id(g); rows: T1, T2, T3, T5, T6, T7, T9, T11, T13

Table 4.2 - Test Cases Method Metrics; columns: evgb4, ed(g), ev(g), iv(g), Lines_w_Nodes, MNT_SEV, Edge_Count; rows: T1, T2, T3, T5, T6, T7, T9, T11, T13

Table 4.3 - Test Cases Method Metrics; columns: Norm_v(g), Param_Count, p, v(g); rows: T1, T2, T3, T5, T6, T7, T9, T11, T13
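As a minimal illustration of formula (1), the sketch below aggregates the method-level v(g) values of Table 3 using per-method coverage ratios. The coverage value used for M1 is a hypothetical placeholder, since it is not given above; M2 and M3 use the 100% and 76% figures from Table 2.

```python
# Sketch of formula (1): a test case's aggregate v(g) value is the sum of
# each covered method's v(g) weighted by that method's coverage ratio.

method_vg = {"M1": 5, "M2": 12, "M3": 3}            # Table 3
coverage = {"M1": 0.50, "M2": 1.00, "M3": 0.76}     # M1 ratio is illustrative

def aggregate_metric(metric_values, coverage_ratios):
    """Weighted sum of a metric over the methods a test case covers."""
    return sum(metric_values[m] * coverage_ratios[m] for m in metric_values)

tn_vg = aggregate_metric(method_vg, coverage)
print(f"Tn_v(g) = {tn_vg:.4f}")  # 5*0.50 + 12*1.00 + 3*0.76 = 16.78
```

The same helper can be applied to any of the other method based metrics by swapping in the corresponding metric values.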

An example calculation of class metrics for a test case: it is assumed that test case Tn consists of classes Cl1, Cl2 and Cl3, with the coverage percentages shown in the following table.

Table 5 - Test Case-Class Coverage Example
Test Case No | Class Name | Coverage (Percentage)
Tn | Cl1 |
Tn | Cl2 | 40%
Tn | Cl3 | 36%

The following table shows the v(g) metric value for the Cl1, Cl2 and Cl3 classes.

Table 6 - Class v(g) Metric
Class Name | v(g)
Cl1 | 14
Cl2 | 12
Cl3 | 13

The calculation of the v(g) metric for Tn is:

Tn_v(g) = Σ_{i=1..n} Cl_v(g),i * C_i    (2)

where Cl_v(g),i represents the v(g) value of class i and C_i represents its coverage percentage. For this example,

Tn_v(g) = 11.6794    (3)

According to this formula, the following tables show the results for test cases T1, T2, T3, T5, T6, T7, T9, T11 and T13 in the class based calculation.

Table 7.1 - Test Cases Class Metrics; columns: Avg_v(g), Branches, vd(g), Depth, id(g), ev(g), iv(g); rows: T1, T2, T3, T5, T6, T7, T9, T11, T13

Table 7.2 - Test Cases Class Metrics; columns: Lack_Cohesion, MNT_SEV, Max_ev(g), Norm_v(g), Max_v(g); rows: T1, T2, T3, T5, T6, T7, T9, T11, T13

Table 7.3 - Test Cases Class Metrics; columns: Parent_Count, p, RFC, Sum_v(g), v(g); rows: T1, T2, T3, T5, T6, T7, T9, T11, T13
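For completeness, a small sketch of formula (2) using the class v(g) values of Table 6. The coverage ratio for Cl1 is an illustrative assumption, since its percentage is not given above, which is why the result differs from the 11.6794 reported in (3).

```python
# Sketch of formula (2): aggregate a test case's class-level v(g) by
# weighting each class's v(g) with its coverage ratio.

class_vg = {"Cl1": 14, "Cl2": 12, "Cl3": 13}          # Table 6
coverage = {"Cl1": 0.25, "Cl2": 0.40, "Cl3": 0.36}    # Cl1 ratio is illustrative

tn_vg = sum(class_vg[c] * coverage[c] for c in class_vg)
print(f"Tn_v(g) = {tn_vg:.4f}")  # 14*0.25 + 12*0.40 + 13*0.36 = 12.98
```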

Meanwhile, a test team of four test engineers was involved in this study. All test cases were executed by each test engineer, and the test efforts were recorded per test case. The arithmetic average of these efforts was then calculated; the intention of using the arithmetic average is to minimize the human factor. The calculation is shown in the following table.

Table 8 - Test Cases - Test Execution Durations; columns: Tester 1, Tester 2, Tester 3, Tester 4, Arithmetic Average; rows: T1, T2, T3, T5, T6, T7, T9, T11, T13, Sum

3 Results

All the results of the calculations are normalized and ordered on a method basis, a class basis and a test effort basis.

Table 9 - Test Cases Order Comparison
Method Based Metric Order | Class Based Metric Order | Test Effort Based Order
T11 | T11 | T11
T9 | T9 | T9
T13 | T13 | T13
T7 | T7 | T3
T5 | T5 | T1
T6 | T6 | T7
T3 | T3 | T5
T2 | T2 | T6
T1 | T1 | T2

As can be seen from Table 9, the results of the first and last orders are the same for all the bases. The middle part of the table shows test cases whose results do not differ greatly, so their order can be ignored.
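The comparison behind Table 9 can be reproduced mechanically: average each test case's durations over the testers, normalize the per-test-case totals, and sort. The sketch below assumes min-max normalization (the paper does not state which normalization was used) and uses illustrative numbers, since the original table values are not reproduced here.

```python
# Sketch of the Results step: average each test case's durations over the
# four testers, min-max normalize both the efforts and the metric totals,
# then compare the resulting orderings. All numbers are illustrative.

efforts_sec = {                     # per-tester execution durations (hypothetical)
    "T1": [120, 130, 110, 125],
    "T2": [200, 190, 210, 205],
    "T3": [90, 95, 85, 100],
}
metric_totals = {"T1": 16.8, "T2": 25.3, "T3": 11.7}  # e.g. aggregated v(g)

def minmax(values: dict) -> dict:
    """Scale values to the [0, 1] range."""
    lo, hi = min(values.values()), max(values.values())
    return {k: (v - lo) / (hi - lo) for k, v in values.items()}

avg_effort = {t: sum(d) / len(d) for t, d in efforts_sec.items()}

norm_effort = minmax(avg_effort)
norm_metric = minmax(metric_totals)

effort_order = sorted(norm_effort, key=norm_effort.get, reverse=True)
metric_order = sorted(norm_metric, key=norm_metric.get, reverse=True)

print("effort-based order:", effort_order)   # ['T2', 'T1', 'T3']
print("metric-based order:", metric_order)   # ['T2', 'T1', 'T3']
print("orders agree:", effort_order == metric_order)
```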

4 Conclusion And Future Work

In this study, the relationship between software quality metrics and test effort is researched. First, test cases are chosen independently. Then the chosen test cases are executed for the coverage calculation. At the same time, measurements of the chosen metrics are taken for every method and class in the software under test. To show the accuracy of the proposed approach, the chosen test cases are executed by a test team and the test case execution times are recorded. Then the order of the test execution times and the order of the metric results are compared. From this comparison it can be seen that software product metrics have a very important effect on test execution time. The purpose of this study is to show that test effort estimation can be done using test coverage and software quality metric information. With this relationship, the test effort for a program can be estimated very quickly and objectively. For future work, we will try this method on larger-scale projects and compare the results.

5 APPENDIX

a. Method-Based Metrics; the following metrics are calculated per method.

Table 10 - Method Metrics
Metric Name | Description
Branches | An initial edge into a flow graph and the edges coming out of any decision
Call_Pairs | Executable calls between methods
ed(g) | (ev(g) - 1) / (v(g) - 1)
Edge_Count | Edges represent the flow of control from one node to another in a flow graph
ev(g) | Essential complexity
evgb4 | True if ev(g) > 4
id(g) | iv(g) / v(g) (unstructuredness indicator) [8]
iv(g) | Module design complexity [8]
Lines_with_Nodes | Lines of code with flow graph nodes
MNT_SEV | ev(g) / v(g)
Norm_v(g) | Normalized cyclomatic complexity (v(g) / nl) [7]; nl = number of lines for the module (physical count from start line to end line)
Param_Count | Formal parameter count of a method
p | Pathological complexity: all unstructured constructs except multiple entries into loops are treated as straight-line code in the module's flow graph; pathological complexity is equal to the cyclomatic complexity of the reduced flow graph [7]
SLOC | Lines of code that contain only code
v(g) | Cyclomatic complexity
vd(g) | v(g) / (SLOC + MLOC); MLOC = lines of code containing both code and comment
vgb10orevgb4 | True if v(g) > 10 or ev(g) > 4
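Several Table 10 entries are simple ratios or flags derived from the raw complexity values, and the same formulas recur in Table 11. As an illustration, the following sketch computes those derived values from raw v(g), ev(g), iv(g) and line counts; the function name and the sample inputs are assumptions for this sketch and are not tied to any particular measurement tool.

```python
# Derived method metrics from Table 10, computed from raw complexity values.
# Inputs are illustrative; the keys follow the paper's metric names.

def derived_metrics(vg, evg, ivg, sloc, mloc, nl):
    """vg: cyclomatic, evg: essential, ivg: module design complexity,
    sloc: code-only lines, mloc: code+comment lines, nl: physical lines."""
    return {
        "Norm_v(g)": vg / nl,          # normalized cyclomatic complexity
        "MNT_SEV": evg / vg,           # ev(g) / v(g)
        "id(g)": ivg / vg,             # iv(g) / v(g)
        "ed(g)": (evg - 1) / (vg - 1) if vg > 1 else 0.0,
        "vd(g)": vg / (sloc + mloc),
        "evgb4": evg > 4,
        "vgb10orevgb4": vg > 10 or evg > 4,
    }

print(derived_metrics(vg=12, evg=5, ivg=6, sloc=80, mloc=10, nl=100))
```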

b. Class-Based Metrics; the following metrics are calculated per class.

Table 11 - Class Metrics
Metric Name | Description
Avg_v(g) | Average cyclomatic complexity
Branches | An initial edge into a flow graph and the edges coming out of any decision
Depth | Depth (the level for a class) [6]
ev(g) | Essential complexity
id(g) | iv(g) / v(g) (unstructuredness indicator) [8]
iv(g) | Module design complexity [8]
Lack_Cohesion | Lack of Cohesion of Methods [6]
Max_ev(g) | Maximum essential complexity [6]
Max_v(g) | Maximum cyclomatic complexity
MNT_SEV | ev(g) / v(g)
Norm_v(g) | Normalized cyclomatic complexity (v(g) / nl) [7]; nl = number of lines for the module (physical count from start line to end line)
Parent_Count | Parent count of the class
p | Pathological complexity: all unstructured constructs except multiple entries into loops are treated as straight-line code in the module's flow graph; pathological complexity is equal to the cyclomatic complexity of the reduced flow graph [7]
RFC | Response for a class [6]
Sum_v(g) | Sum of the v(g) values of the methods of the class
v(g) | Cyclomatic complexity
vd(g) | v(g) / (SLOC + MLOC); MLOC = lines of code containing both code and comment

6 Acknowledgement

The authors would like to thank the Software Testing and Quality Evaluation Center (YTKDM in Turkish) of the Scientific and Technological Research Council of Turkey (TUBITAK in Turkish) for funding this study.

7 References

[1] Kaner, C., Falk, J., & Nguyen, H. Q. (1999). Testing Computer Software. Wiley Computer Publishing, p. 344.
[2] Van Veenendaal, E., & Dekkers, T. (1999). Test point analysis: a method for test estimation.
[3] Heller, R. (2003). An Introduction to Function Point Analysis. Q/P Management Group.
[4] IEEE Std 829-1998, IEEE Standard for Software Test Documentation (1998).
[5] Mathias, K. S., et al. (1999). The Role of Software Measures and Metrics in Studies of Program Comprehension. ACM SE.
[6] Chidamber, S. R., & Kemerer, C. F. (1994). A Metrics Suite for Object Oriented Design. IEEE Transactions on Software Engineering, Vol. 20, No. 6.
[7] McCabe, T. J. (1976). A Complexity Measure. IEEE Transactions on Software Engineering, Vol. SE-2, No. 4.
[8] McCabe, T. J., & Butler, C. W. (1989). Design Complexity Measurement and Testing. Communications of the ACM, Vol. 32, No. 12.
[9] Nageswaran, S. (2001). Test Effort Estimation Using Use Case Points.