
This course provides a highly practical, bottom-up introduction to software testing and quality assurance. Each organization performs testing and quality assurance activities in different ways, so the course takes a broad view of both disciplines to make participants aware of the various activities that contribute to managing the quality of a software product.

Contents

CHAPTER 1: Software Testing and Software Development Life Cycle
    1.1 Introduction
    1.2 Software Development Lifecycle (SDLC)
    1.3 Various SDLC Models
CHAPTER 2: Software Quality Testing
    2.1 Introduction
    2.2 What is Software Quality?
    2.3 Standards and Guidelines
CHAPTER 3: Software Test Life Cycle and Verification & Validation
    3.1 Software Testing Life Cycle (STLC)
    3.2 Verification and Validation Model
CHAPTER 4A: Validation Activity Low-Level Testing
CHAPTER 4B: Validation Activity High-Level Testing
    4B.1 Objectives
    4B.2 Steps of Function Testing
    4B.3 Summary
CHAPTER 5: Types of System Testing
    Introduction; Usability, Performance, Load, Stress, Security, Configuration, Compatibility, Installation, Recovery, Availability, Volume, and Accessibility Testing
CHAPTER 6: Acceptance Testing
    6.1 Introduction; Objective; Acceptance Testing
CHAPTER 7: Black Box Testing
    Introduction; Objectives; Advantages of Black Box Testing; Disadvantages of Black Box Testing; Black Box Testing Methods
CHAPTER 8: Testing Types
    Introduction; Mutation Testing; Progressive Testing; Regression Testing; Retesting; Localization Testing; Internationalization Testing
CHAPTER 9: White Box Testing
    Introduction; Objective; Advantages of WBT; Disadvantages of WBT; Techniques for White Box Testing; Cyclomatic Complexity; How to Calculate Statement, Branch/Decision, and Path Coverage for ISTQB Exam Purposes
CHAPTER 10: Test Cases
    Introduction; Objective; Structure of Test Cases; Test Case Template
CHAPTER 11: Test Planning
    11.1 Introduction; 11.2 Objectives; IEEE Standard for Software Test Documentation
CHAPTER 12: Configuration Management
    Introduction; Objective; Configuration Management Tools
CHAPTER 13: Defect Tracing and Defect Life Cycle
    Introduction; Objectives; Why Do Faults Occur?; What Is a Bug Life Cycle?; Bug Status Description; Severity: How Serious Is the Defect?; Priority: How to Decide Priority?; Defect Tracking; Defect Prevention; Defect Report
CHAPTER 14: Risk Analysis
    Introduction; Objectives; Risk Identification; Risk Strategy; Risk Assessment; Risk Mitigation; Risk Reporting; What Is Schedule Risk?
DEFINITIONS

CHAPTER 1: Software Testing and Software Development Life Cycle

1.1 Introduction

Software testing is a crucial phase of a product development lifecycle. It is a process of finding flaws in a given product or application. The purpose of testing is not to prove that a product functions properly under all conditions, but rather to find the conditions under which it fails to function properly. The objectives of software testing are to:

- validate and verify, automatically or manually, that a software program/product meets the technical and business requirements;
- evaluate the product for correctness, completeness, reusability, and reliability;
- ensure that the product behaves as the end user expects;
- identify defects in a product as early as possible in the development lifecycle, thereby reducing the cost of fixing defects later;
- deliver defect-free, high-quality products.

1.2 Software Development Lifecycle (SDLC)

The software development lifecycle (SDLC) is a conceptual model that describes the sequence of activities followed by designers and developers during product development. The SDLC consists of multiple stages or phases in which the input for each phase is the output of the previous one. In the IT industry, different SDLC models are followed, involving various stages from creating through testing a software product. The commonly followed SDLC model is categorized into five stages: analysis, design, implementation, verification, and maintenance.

(Figure: Software Development Life Cycle (SDLC): Analysis, Design, Implementation, Verification, Maintenance)

1.3 Various SDLC Models

Various SDLC models exist to streamline the development process. Each has its pros and cons, and it is up to the development team to choose the model appropriate for its project. In this section, we will learn about four SDLC models:

1. Waterfall model
2. Incremental model
3. Spiral model
4. Agile methodology

Let's learn about each of these models in brief.

Waterfall Model

The waterfall model is a classic software lifecycle model that has been widely followed in software engineering. It exhibits a linear, sequential approach to software development: the phases are cascaded so that you can move to a phase only when its preceding phase is finished, and once a phase is finished you cannot move back to it. The phases of the waterfall model are as follows:

Project Planning: This phase defines the objectives, strategies, and supporting methods required to achieve the project goal.

Requirement Analysis and Definition: The main objective of this phase is to prepare a document, called the Software Requirement Specification (SRS), that clearly specifies all the requirements of the customer. The SRS is the primary output of this phase.

Systems Design: This phase includes designing screen layouts, business rules, process diagrams, pseudocode, and other documentation that describe the features and operations of the software product in detail.

Implementation: In this phase, the actual coding starts. After the system design documents are prepared, programmers develop the software program/application based on the specifications. The source code, executables, and databases are created in this phase.

Integration and Testing: In this phase, the code modules of the product are integrated into a complete system and tested to check that all the modules coordinate with each other and that the system as a whole behaves as per the specifications.

Acceptance, Installation, Deployment: This phase includes getting the software accepted and installing it at the customer site. Acceptance consists of formal testing conducted by the customer according to the acceptance test plan prepared earlier, followed by analysis of the test results to determine whether the system satisfies its acceptance criteria. When the test results satisfy the acceptance criteria, the user accepts the software.

Maintenance: This phase covers all modifications and corrections made to the product after it is installed and operational. The least glamorous and perhaps the most important step of all in the SDLC, it goes on seemingly forever.

(Figure: Waterfall Model: Requirement, Design, Implementation/Coding, Testing, Maintenance)

Let's quickly go through the advantages and disadvantages of the waterfall model.

Advantages
- It is simple and easy to use.
- Because of the rigidity of the model, each phase has specific deliverables and a review process, so it is easy to manage.
- Phases are processed and completed one at a time.
- It is more suitable for smaller projects where requirements are very well understood.

Disadvantages
- Adjusting scope during the lifecycle can kill a project.
- No working software is produced until late in the lifecycle.
- Risk and uncertainty are very high.
- It is a poor model for complex and object-oriented projects.
- It is a poor model for long and ongoing projects.

Incremental Model

The incremental model is an advanced approach to the waterfall model; it is essentially a series of waterfall cycles. In this model, a core set of functions is identified in the first cycle and is built and deployed as the first release. The software development cycle is then repeated, with each release adding more functionality, until all the requirements are met. Each development cycle acts as the maintenance phase for the previous software release. New requirements discovered during the development of a given cycle are implemented in subsequent cycles. In this model, a subsequent cycle may begin before the previous cycle is complete.

(Figure: Incremental Life Cycle Model)

Let's go through the advantages and disadvantages of the incremental model.

Advantages
- Allows requirement modification and the addition of new requirements.
- Easier to test and debug in smaller cycles.
- Easier to manage risks, since risks are identified and handled during each iteration.
- Every iteration in the incremental model is an easily managed milestone.

Disadvantages
- The majority of requirements must be known in the beginning.
- Cost and schedule overruns may result in an unfinished system.

Spiral Model

This model is similar to the incremental model, but with an additional phase of risk analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering, and Evaluation. Let's see each of these phases in brief.

1. Planning: determines the objectives, alternatives, and constraints of the new iteration.
2. Risk analysis: evaluates alternatives and identifies and resolves risk issues.
3. Engineering: develops and verifies the product for this iteration.
4. Evaluation: evaluates the output of the project to date before the project continues to the next spiral, and plans the next iteration.

(Figure: Spiral Model: Planning (requirement gathering, design), Risk Analysis (prototyping), Engineering (coding, testing), Evaluation (customer evaluation); project cost and progress grow outward along the spiral)

Let's go through the advantages and disadvantages of the spiral model.

Advantages
- Useful for complex and large projects.
- High amount of risk analysis.
- Software is produced early in the lifecycle because of the prototype.

Disadvantages
- Expensive model.
- Time spent on planning, risk analysis, and prototyping can be excessive.
- Risk analysis requires highly skilled expertise.
- The project's success is highly dependent on the risk analysis phase.
- Doesn't work well for smaller projects.

Agile Methodology

Agile methodology breaks development tasks into smaller iterations with minimal planning. Working software is delivered frequently, say on a weekly, fortnightly, or monthly basis. Iterations are short time frames and typically last from one to four weeks. In each iteration, a team works through a full software development cycle. This minimizes the overall risk and allows the project to adapt to changes quickly.

The team involved in agile methodology is usually cross-functional and self-organizing, regardless of any existing corporate hierarchy or the corporate roles of team members. Team members take responsibility for tasks that deliver the functionality and decide individually how to meet an iteration's requirements.

In most agile implementations, a formal, daily, face-to-face meeting is conducted among team members. In this brief session, team members report to each other what they did the previous day, what they intend to do today, and what their roadblocks are. This face-to-face meeting helps expose problem areas.

Let's go through the advantages and disadvantages of the agile methodology.

Advantages
- Involves an adaptive team that is able to respond to changing requirements.
- Face-to-face communication and continuous input from customer representatives leave no room for guesswork.
- The end result is high-quality software delivered in the least possible time, and a satisfied customer.

Disadvantages
- It becomes difficult to assess the effort required at the beginning of the SDLC for large, complex software deliverables.
- The project can easily get off track if the customer is not clear about the final outcome they want.

CHAPTER 2: Software Quality Testing

2.1 Introduction

Quality is defined as the degree to which a component, system, or process meets the specified requirements and/or user/customer needs and expectations. Quality can also mean:
- a product or service free of defects
- fitness for use
- conformance to requirements

In this chapter, you will learn about software quality testing and its terminology.

2.2 What is Software Quality?

In the software engineering industry, software quality refers to:
- Software functional quality: reflects how well a product conforms to a given design, based on the functional requirements or specifications.
- Software structural quality: refers to how well a product meets non-functional requirements such as robustness or maintainability.

Software quality is broadly classified into Quality Assurance and Quality Control.

(Figure: Categories of Software Quality: Quality, divided into Quality Assurance and Quality Control)

2.2.1 Quality Assurance (QA)

Quality assurance aims at defect prevention in processes. It monitors and evaluates various aspects of projects and ensures that the engineering processes and standards are strictly adhered to throughout the software lifecycle. Audits are a key technique used to perform product evaluation and process monitoring.

Key Points
- Identifies weaknesses in processes and improves them.
- QA is the responsibility of the entire team.
- Helps prevent defects.
- Helps establish processes for defect prevention.
- Sets up measurement programs to evaluate processes.

2.2.2 Quality Control (QC)

Quality control focuses on testing products to remove defects and ensure that the product meets performance requirements.

Key Points
- Involves comparison of product quality with applicable standards, with action taken when non-conformance is detected.
- Implements processes for defect removal.
- QC is the responsibility of the tester.
- Detects and reports defects found in testing.

2.3 Standards and Guidelines

Standards are rules or processes that must be followed in an organization when developing a product, whereas guidelines act as suggestions for carrying out a particular activity or task.

The Software Engineering Institute (SEI), established in 1984 at Carnegie Mellon University, aims at rapid improvement of the quality of operational software in the mission-critical computer systems of the United States Department of Defense.

Based on the type of industry, various industry standards exist. The standards used in the software industry are as follows:
1. Capability Maturity Model (CMM)
2. International Organization for Standardization (ISO)
3. IEEE
4. ANSI

Let's learn about each of these standards in detail.

Capability Maturity Model (CMM)

The Capability Maturity Model (CMM) is a process improvement approach. It helps organizations improve their performance and can be used to guide process improvement across a project, a division, or an entire organization. CMM describes five evolutionary stages in which an organization manages its processes. These stages are:

1. Level 1: Initial
In level 1 organizations, processes are disorganized and chaotic. Success usually depends on individual efforts and the heroics of people. These organizations often exceed the budget and schedule of their projects.

Key Points
- Tendency to overcommit.
- Processes are skipped in times of crisis.
- Past successes are not repeatable.
- Success depends on having quality people.

2. Level 2: Repeatable
In level 2 organizations, project tracking, requirements management, realistic planning, and configuration management processes are established and put in place.

Key Points
- Software development successes are repeatable.
- Process discipline helps ensure that existing practices are followed even during tight delivery timelines.
- Basic project management processes are established to track cost, schedule, and functionality.

3. Level 3: Defined
Standard software development and maintenance processes are established and improved over time. These standard processes bring consistency across the organization.

4. Level 4: Managed
Using metrics and measurements, management can effectively track productivity, development efforts, processes, and products. In level 4 organizations, quality is consistently high.

5. Level 5: Optimizing
In level 5 organizations, processes are constantly improved, and new, innovative processes are introduced to better serve the organization's particular needs.

International Organization for Standardization (ISO)

The ISO 9001:2000 standard specifies requirements for a quality management system. This ISO standard covers documentation, design, development, production, testing, installation, servicing, and other processes.

Institute of Electrical and Electronics Engineers (IEEE)

IEEE has created standards related to software quality and testing, including the IEEE Standard for Software Test Documentation (IEEE/ANSI Standard 829), the IEEE Standard for Software Unit Testing (IEEE/ANSI Standard 1008), the IEEE Standard for Software Quality Assurance Plans (IEEE/ANSI Standard 730), and others.

American National Standards Institute (ANSI)

ANSI is the primary industrial standards body in the U.S. It publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).

CHAPTER 3: Software Test Life Cycle and Verification & Validation

3.1 Software Testing Life Cycle (STLC)

Every company follows its own software testing lifecycle (STLC) to suit its requirements, culture, and available resources. The STLC comprises the various stages of testing through which a software product goes, in the following sequential phases:

1. Planning
2. Analysis
3. Design
4. Construction and verification
5. Testing cycles
6. Final testing and implementation
7. Post-implementation

Let's learn about each of these stages.

1. Planning
In the planning stage, the project manager decides what needs to be tested, what the appropriate budget would be, and so on. Proper planning at this stage helps to reduce the risk of low-quality software. Major tasks involved in the planning stage are:
- Defining the scope of testing
- Identifying approaches
- Defining risks
- Identifying resources
- Defining the schedule

2. Analysis
Once the test plan is created, the next phase is analysis. This phase involves:
- Identifying the types of testing to be carried out at various SDLC stages
- Determining whether testing should be performed manually or automatically
- Creating test case formats, test cases, and a functional validation matrix based on business requirements
- Identifying which test cases to automate
- Reviewing documentation

In the analysis phase, frequent meetings are held between testing teams, project managers, and development teams to check the progress of the project and ensure the completeness of the test plan created in the planning phase.

3. Design
In the design phase, the following activities are carried out:
- Test plans and test cases are revised.
- The functional validation matrix is revised and finalized.

- Risk assessment criteria are developed.
- Test cases for automation are identified and scripts are written for them.
- Test data is prepared.
- Standards for unit testing and pass/fail criteria are defined.
- The testing schedule is revised and finalized.
- The test environment is prepared.

4. Construction and Verification
This phase aims at the completion of all test plans and test cases, and the scripting of the automated test cases. In this phase, test cases are run and defects are reported as and when found.

5. Testing Cycles
In this phase, test cycles are repeated until the test cases execute without errors or a predefined condition is reached. Activities involved in this phase are:
- Running test cases
- Reporting defects
- Revising test cases
- Adding new test cases
- Fixing defects
- Retesting

6. Final Testing and Implementation
In this phase, the following activities are carried out:
- Executing stress and performance test cases
- Completing or updating documentation for testing
- Providing and completing the different metrics for testing

In this phase, acceptance, load, and recovery testing is also conducted, and the application is verified under production conditions.

7. Post-Implementation
In this phase, the following activities are carried out:
- Evaluating the testing process and documenting lessons learned from it
- Creating plans to improve the process (recording new errors and enhancements is an ongoing activity)
- Cleaning up the test environment
- Restoring test machines to baselines

3.2 Verification and Validation Model

Verification and validation are the two main processes involved in software testing. Let us learn about these processes in detail.

Software quality, correctness, and completeness can be assessed by performing adequate testing. To make sure that the product is developed as per the requirements, we have to initiate testing right from the beginning. The picture below depicts the verification and validation (V-V) model, which shows that the software testing process is carried out in parallel with the development process. The left arm of the V covers verification, which accompanies development, while the right arm covers validation, which is carried out after a part of the product has been developed.

The V-V model can also be called the Software Testing Life Cycle (STLC). In the STLC, each development activity is followed by a testing activity. Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing.

(Figure: V-V Model)

Different Stages of SDLC with STLC

Stage 1: Requirement Gathering

Development Activity
In this phase, the requirements of the proposed system are collected by analyzing the needs of the users. In many situations, however, not enough care is taken in establishing correct requirements up front. Requirements must be established in a systematic way to ensure their accuracy and completeness, but this is not always an easy task.

Testing Activity
To make requirements more accurate and complete, testing needs to be performed right from the requirements phase, in which testers review the requirements. For example, the requirements should not contain ambiguous words like "may" or "may not"; they should be clear and concise.

Stage 2: Functional Specifications

Development Activity
The functional specification document describes the features of the software product. It describes the product's behavior as seen by an external observer and contains the technical information and data needed for the design. The functional specification defines what the functionality will be.

Testing Activity
Testing is performed to ensure that the functional specifications are accurate.

Stage 3: Design

Development Activity
During the design process, the software specifications are transformed into design models that describe the details of the data structures, system architecture, interfaces, and components. At the end of the design process, a design specification document is produced, composed of the design models that describe the data, architecture, interfaces, and components.

Testing Activity
Each design product is reviewed for quality before moving to the next phase of software development. To evaluate the quality of a design (representation), criteria for a good design should be established: such a design should exhibit good architectural structure, be modular, and contain distinct representations of data, architecture, interfaces, and components. The software design process encourages good design through the application of fundamental design principles, systematic methodology, and review.

Stage 4: Code

Development Activity
Using the design document, the code is constructed. Programs are written using a conventional programming language or an application generator. Different high-level programming languages such as C, C++, VB, and Java are used for coding.
The programming language is chosen according to the type of application. Programming tools such as compilers, interpreters, and debuggers are used to generate the code.

Testing Activity
Code review is done to find and fix defects that were overlooked in the initial development phase and to improve the overall quality of the code. Online software repositories, like anonymous CVS, allow groups of individuals to collaboratively review code to improve software quality and security. Code review is a process of verifying the source code. Code reviews can often find and remove common security vulnerabilities such as format string attacks, race conditions, and buffer overflows, thereby improving software security.

Stage 5: Building Software

Development Activity
This phase involves building the different software units (components) and integrating them one by one to build a single piece of software.

Testing Activity

a. Unit Testing
A unit test is a validation procedure that checks the working of the smallest module of source code. Once the modules are ready, individual components should be tested to verify that the units function as per the specifications. Test cases are written for all functions and methods so that problems can be identified and fixed faster. For testing units in isolation, dummy objects such as stubs and drivers are written; this makes it possible to test each unit separately even when the rest of the code is not yet written. Usually a developer uses this method to review his or her own code.

b. Integration Testing
Integration testing follows unit testing and is done before system testing. In integration testing, individual software modules are combined and tested as a group. The purpose is to validate functionality, performance, and reliability requirements. Test cases are constructed to test all components and their interfaces and confirm whether they work correctly together, including inter-process communication and shared data areas.

Stage 6: Building System

Development Activity
After the software has been built, we have the whole system, taking into account all the non-functional requirements such as installation procedures, configuration, etc.

Testing Activity

a. System Testing
Testing the complete integrated system to confirm that it complies with the requirement specifications is called system testing. In system testing, the entire system is tested against its Functional Requirement Specification (FRS) and/or System Requirement Specification (SRS), together with the non-functional requirements. System testing is crucial: testers need to test from the user's perspective and need to be more creative.

b. Acceptance Testing
Also called User Acceptance Testing (UAT), this is one of the final stages of a project and often occurs before a customer accepts a new system. It is a process to obtain confirmation from the owner of the object under test, through trial or review, that the modification or addition meets mutually agreed-upon requirements. Users of the system perform these tests according to their User Requirements Specification, to which the system should conform. There are two stages of acceptance testing: alpha and beta.

Now the whole product has been developed, the required level of quality has been achieved, and the software is ready to be released to customers.

Verification

Verification ensures that the product is built in accordance with the requirements and design specifications given by the end user. Verification also ensures that data is used in the right place and in the right way. Verification happens at the beginning of the software testing lifecycle. This process is used to demonstrate the consistency, correctness, and completeness of the software at every stage of the lifecycle, as well as between stages. In the verification phase, documents related to the software, such as plans, code, and specifications, are reviewed.

Verification Methods
There are mainly three methods of verification:
1. Peer reviews
2. Walkthroughs
3. Inspections

1. Peer Reviews
A peer review is a review of products performed by peers during product development to identify defects for removal and recommend other needed changes. It is an informal way of verification. Peer reviews are also called buddy checks.

2. Walkthroughs
Walkthroughs are semi-formal meetings led by a presenter who presents the documents.
The purpose of walkthroughs is to find potential bugs; they are also used for knowledge sharing and communication.

3. Inspections
Inspections are formal meetings attended by authors and participants who come prepared with their own tasks. The goals of these meetings are to communicate important product information and to detect defects by verifying the software product.

Validation

Validation checks the product design to ensure that the product is right for its intended use. Unlike verification, the validation process happens in the later part of the software testing cycle; it is in this process that the actual testing of the software takes place. Validation determines the correctness of the product with respect to the user requirements.

Validation Techniques
The two main techniques of the validation process are:
1. White box testing
2. Black box testing

1. White Box Testing
White box testing is a software testing approach that uses the inner structural and logical properties of the program for verification and for deriving test data. White box testing is also called glass, structural, open box, transparent, or clear box testing. For white box testing, the tester needs knowledge of the code or the internal program design, and must look into the code to find out which unit or statement is malfunctioning.

2. Black Box Testing
Black box testing is a validation strategy that does not need any knowledge of the internal design or the code. Black box testing is also called opaque box, functional/behavioral box, or closed box testing. Its main focus is on testing the requirements and functionality of the software product or application. In this approach, black-box tests are derived from the functional design specifications, against which testers check the actual behavior of the software.
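To make the contrast concrete, here is a minimal black-box sketch in Python (the leap-year function and its specification are hypothetical examples, not from the course): the test cases are derived entirely from the written specification, never from the code's internals.

```python
# Hypothetical specification: is_leap(year) is True for years divisible
# by 4, except century years, unless the century is divisible by 400.
def is_leap(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Black-box test cases: chosen from the specification alone, with no
# knowledge of how is_leap is implemented internally.
black_box_cases = [
    (2024, True),   # ordinary leap year
    (2023, False),  # not divisible by 4
    (1900, False),  # century year not divisible by 400
    (2000, True),   # century year divisible by 400
]
for year, expected in black_box_cases:
    assert is_leap(year) == expected, (year, expected)
```

A white-box tester, by contrast, would read the boolean expression inside is_leap and add cases to exercise each branch of the condition.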

The verification and validation processes are summarized below.

Verification
- Focus is on process, i.e. determining "Am I building the product right?"
- Low-level activity.
- Performed during development on key artifacts, through walkthroughs, reviews and inspections, mentor feedback, training, checklists, and standards.
- Asks "Am I accessing the data right (in the right place, in the right way)?"
- Verifies the consistency, completeness, and correctness of the software at each stage of the development life cycle and between stages.

Validation
- Focus is on product, i.e. determining "Am I building the right product?"
- High-level activity.
- Performed after a product is produced, against established criteria, ensuring that the product integrates correctly into its environment.
- Asks "Am I accessing the right data (in terms of the data required to satisfy the requirement)?"
- Validates the correctness of the final software product with respect to the user needs and requirements.

Advantages of the V-V Model
- Simple and easy to use.
- Each phase has specific deliverables.
- Chances of success are high, since the test plans are developed early in the development lifecycle.
- Works well for small projects where requirements are easily understood.

Disadvantages of the V-V Model
- Less flexible; adjusting scope is difficult and expensive.
- The software product is developed during the implementation phase, so no early prototypes are produced.
- Very rigid, like the waterfall model.

CHAPTER 4A: Validation Activity Low-Level Testing

The validation process in the software development stage is carried out at two levels: low level and high level. In this section, we will learn about low-level testing methods. Low-level testing is broadly classified into:
- Unit testing
- Integration testing

Unit Testing
Unit testing involves validating individual units of source code to ensure that they work properly. A unit is the smallest testable part of an application. The main purpose of unit testing is to take the smallest piece of testable software in the application, isolate it from the remainder of the code, and determine whether it behaves as per the requirements. Each unit is tested separately before being integrated into modules, at which point the interfaces between modules are tested. Unit testing identifies a large number of defects. It requires knowledge of the internal design of the code and is generally done by developers.

Integration Testing
Integration testing is the process of combining and testing multiple components as a group. It is performed after unit testing and before system testing. Integration testing detects interface errors and ensures that modules or components operate properly when combined. It is done by developers or by the QA team. Integration testing is of two types:
- Non-incremental
- Incremental
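As a minimal sketch of unit testing in isolation (all names here are illustrative, not from any real library): the unit under test depends on a collaborator that may not exist yet, so a stub supplies predictable answers and the unit is exercised on its own.

```python
# Hypothetical unit under test: an order-total calculator that depends
# on an external tax service.
def order_total(amount, tax_service):
    return round(amount + tax_service.tax_for(amount), 2)

# Stub: a dummy object standing in for the real tax service, so the
# unit can be tested before the real service is written.
class TaxServiceStub:
    def tax_for(self, amount):
        return amount * 0.10  # fixed, predictable rate for the test

def test_total_includes_tax():
    assert order_total(100.0, TaxServiceStub()) == 110.0

test_total_includes_tax()
```

A driver is the mirror image of a stub: a small piece of throwaway code that calls the unit under test when its real callers do not exist yet.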

[Figure: Types of Integration Testing — Incremental (Top Down with DFS or BFS, Bottom Up, Sandwich) and Non-incremental]

Non-incremental Testing
In this approach, all the developed modules are coupled together to form a complete software system, which is then used for integration testing. It is also called Big Bang Integration. In this method, debugging is difficult, since an error can be associated with any component.

Incremental Testing
In this approach, modules are integrated in small increments. It therefore becomes easier to isolate errors, and interfaces are more likely to be tested completely. Incremental testing is further classified into top-down integration, bottom-up integration, and sandwich integration.

a) Top-Down Integration
In this method, modules are integrated in small increments in a downward direction, starting from the top, i.e. with the main module, and proceeding sequentially to the related modules at the bottom. Top-down integration is further classified into the depth-first and breadth-first search approaches.

o Depth-first search
The depth-first approach integrates the components vertically downwards, i.e. depth-wise, along a control path of the program. For example, if we select the left-hand path through components U1, U2, U4:
DFS = {[(U1+U2)+U4]+U5}+U3

[Figure: Top-down integration of components U1 to U5, with U1 at the top, U2 and U3 below it, and U4 and U5 below U2]

o Breadth-first search
Breadth-first integration incorporates all components directly subordinate at each level, moving horizontally across the structure. For example, considering components U1, U2, U3 first:
BFS = {[U1+(U2+U3)]+U4+U5}

Advantages of top-down integration
- The functionality of the main module is tested first. This helps in verifying major control or decision points early in the testing process.

Disadvantages of top-down integration
- Stubs are required when performing integration testing, and developing stubs is generally very difficult.

b) Bottom-Up Integration
In this approach, the lowest-level components are tested first, then testing moves upwards to the higher-level components. Bottom-up integration testing begins with the components at the lowest levels of the program structure. All the bottom- or low-level modules, procedures, or functions are integrated and then tested. After the integration testing of the lower-level integrated modules, the next level of modules is formed and used for integration testing. This approach is best used only when all or most of the modules at the same development level are ready. It helps to determine the levels of software developed and makes it easier to report testing progress as a percentage.

Advantages of bottom-up integration
- The required drivers are much easier to develop.

Disadvantages of bottom-up integration
- The main module's functionality is tested at the end, so major control and decision problems are identified late in the testing process.

c) Sandwich Integration
In this approach, top-down testing and bottom-up testing are combined. Both are started simultaneously, and the testing is built up from both sides. It requires a large team.
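The DFS and BFS integration orders discussed above can be derived mechanically from the module hierarchy. The sketch below, using the five-module tree from the example (U1 calls U2 and U3; U2 calls U4 and U5), computes both orders:

```python
# Module hierarchy from the example: U1 -> U2, U3; U2 -> U4, U5.
tree = {"U1": ["U2", "U3"], "U2": ["U4", "U5"], "U3": [], "U4": [], "U5": []}

def depth_first_order(tree, root):
    """Order in which modules are integrated under depth-first top-down integration."""
    order = []
    def visit(node):
        order.append(node)
        for child in tree[node]:
            visit(child)
    visit(root)
    return order

def breadth_first_order(tree, root):
    """Order in which modules are integrated under breadth-first top-down integration."""
    order, queue = [], [root]
    while queue:
        node = queue.pop(0)   # take the next module level by level
        order.append(node)
        queue.extend(tree[node])
    return order

print(depth_first_order(tree, "U1"))    # ['U1', 'U2', 'U4', 'U5', 'U3']
print(breadth_first_order(tree, "U1"))  # ['U1', 'U2', 'U3', 'U4', 'U5']
```

The two printed orders correspond to the DFS = {[(U1+U2)+U4]+U5}+U3 and BFS = {[U1+(U2+U3)]+U4+U5} groupings in the text.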

CHAPTER 4B: Validation Activity High-Level Testing

High-level testing is broadly classified into:
1. Function Testing
2. System Testing
3. Acceptance Testing

Function testing is a type of high-level testing based on black box testing, which derives its test cases from the specifications of the software component under test. Functions are tested by feeding them input and then examining the output. It is used to detect discrepancies between a program's functional specification and its actual behavior. It is carried out after completing unit testing and integration testing, and can be conducted in parallel with system testing. However, it is advisable to begin system testing only when function testing has demonstrated some predefined level of reliability, usually after 40% of the function testing is complete. Functional testing differs from system testing in that functional testing validates a program by checking it against the functional design specifications, while system testing validates a program by checking it against the user or system requirements.

4B.1 Objectives
The goal of function testing is to verify the actual behavior of the software or application against the functional design specifications provided by customers. Function testing is performed before the product is made available to customers. It can begin whenever the product has sufficient functionality to execute some of the tests, or after unit and integration testing have been completed. Function testing is the process of attempting to detect discrepancies between a program's functional specification and its actual behavior. When a discrepancy is detected, either the program or the specification is incorrect. All black-box methods are applicable to function-based testing.

4B.2 Steps of Function Testing
1. Decompose and analyze the functional design specification
2. Identify functions that the software is expected to perform
3. Create input data based on the function's specifications
4. Determine expected output based on the function's specifications
5. Develop functional test cases

6. Execute test cases
7. Compare expected and actual results

4B.3 Summary
Function Testing:
1. Attempts to detect discrepancies between a program's functional specification and its actual behavior.
2. Includes positive and negative scenarios, i.e. valid inputs and invalid inputs.
3. Ignores the internal mechanism or structure of a system or component and focuses on the output generated in response to selected inputs and execution conditions.
4. Evaluates the compliance of a system or component with the specified functional specification and the corresponding predicted results.
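The steps of function testing — derive inputs and expected outputs from the specification, then execute and compare — can be sketched as a small data-driven harness. The function under test and its specification rows are hypothetical, chosen only to illustrate the flow:

```python
# Hypothetical specification: leap years are divisible by 4,
# except century years, which must also be divisible by 400.
def is_leap_year(year):
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

# Steps 3-4: input data and expected outputs derived from the specification.
test_cases = [
    (2024, True),    # divisible by 4
    (2023, False),   # not divisible by 4
    (1900, False),   # century year not divisible by 400
    (2000, True),    # century year divisible by 400
]

# Steps 6-7: execute each case and compare expected vs. actual results.
def run_function_tests(func, cases):
    failures = []
    for given_input, expected in cases:
        actual = func(given_input)
        if actual != expected:
            failures.append((given_input, expected, actual))
    return failures

print(run_function_tests(is_leap_year, test_cases))  # [] means all cases passed
```

A non-empty result means either the program or the specification is wrong, exactly as the chapter notes: a detected discrepancy does not by itself say which side is at fault.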

CHAPTER 5: Types of System Testing

5.1 Introduction
System testing is the next level of testing and one of the most difficult activities. It focuses on testing the system as a whole. Once the components are integrated, the system needs to be rigorously tested to ensure that it meets the quality standards. It verifies software operation from the perspective of the end user, with different configurations/setups. System testing builds on the previous levels of testing, namely unit testing and integration testing. System testing can be conducted in parallel with function testing.

Prerequisites for System Testing
- All the components should have been successfully unit tested.
- All the components should have been successfully integrated, and integration testing must have been performed.
- An environment closely resembling the production environment should be created.

Steps of System Testing
The major steps of system testing are as follows:
1. Create a system test plan by decomposing and analyzing the SRS.
2. Develop the requirements test cases.
3. Carefully build the data used as input for system testing.
4. If applicable, create scripts to a) build the environment and b) automate execution of test cases.
5. Execute test cases.
6. Fix bugs, if any, and re-test the code.
7. Repeat the test cycle as necessary.

Types of System Testing
1. Usability Testing
2. Performance Testing
3. Load Testing
4. Stress Testing
5. Security Testing

6. Configuration Testing
7. Compatibility Testing
8. Installability Testing
9. Recovery Testing
10. Availability Testing
11. Volume Testing
12. Accessibility Testing

5.2 Usability Testing
Usability testing is a technique for ensuring that the intended users of a system can carry out the intended tasks efficiently, effectively, and satisfactorily. It is carried out before release so that any significant issues identified can be addressed. Usability testing can be carried out at various stages of the design process; in the early stages, however, techniques such as walkthroughs are often more appropriate. System usability testing is the system testing of an integrated, black box application against its usability requirements. The system usability test is conducted by observing people using the product to discover errors and areas of improvement. Usability testing is a black-box testing technique. It is performed to:

- Identify usability defects involving the application's human interface, such as:
  o Difficulty of orientation and navigation (e.g., navigation defects such as broken links and anchors within a website)
  o Efficiency of interaction (based on user task analysis)
  o Information consistency and presentation
  o Appropriate use of language and metaphors
  o Conformance to the digital brand description document and website design guidelines
  o Programming defects (e.g., an incorrectly functioning tab key, accelerator keys, or mouse actions)
- Validate the application by determining whether it fulfills its quantitative and qualitative usability requirements concerning ease of:
  o Installation by the environments team
  o Usage by the user organization
  o Operations by the operations organization
- Determine whether the application's human interfaces should be iterated to make them more usable. More emphasis is placed on the presentation of the product than on its functionality.

- Report these failures to the development teams so that the associated defects can be fixed. This helps determine the extent to which the application is ready for launch.
- Provide input to the defect trend analysis effort.

5.2.1 What Is Usability?
Usability is how easily users can navigate from one page to another or from one menu to another. It is a combination of factors that influence the user's experience with a product or system. Usability testing is a methodical evaluation of the graphical user interface (GUI) according to usability criteria. Usability criteria include:

- Efficiency of use: Once a user is experienced with the system, how much time is required to accomplish key tasks?
- Ease of learning: How fast can a user who has never seen the system before learn to use it well enough to accomplish basic tasks?
- Memorability: When the user approaches the system the next time, will he/she remember enough to use it effectively?
- Subjective satisfaction: How does the user react to the system? How does he/she feel about using it?
- Error frequency and severity: How frequent are errors in the system? How severe are they? How do users recover from errors?

5.2.2 Purpose of Usability Testing
A usability test establishes the ease of use and effectiveness of a product using standard usability test practices. It also identifies usability problems with the product and helps establish solutions for those problems. Once those solutions are implemented, the product is easier to use, requires less support, and should be better received in the marketplace. When clients want to determine how well target users can understand and use their software or hardware product, usability testing of the product with target-market users is recommended.

5.2.3 Methods of Usability Testing

By On-site Observation: Conducted on-site, this method enables the study of users working on the system in their typical work environment. This is usually done when the system or environment is too complicated to be replicated in a laboratory. On-site observations might also be used to study users in their real environment. The advantage of this type of testing is that it gives users a less formal feeling about the test and enables a relatively long observation period. The informal setting helps collect information from a real environment, not only from preset scenarios.

By Laboratory Experiments: The usability test may be performed on a real system, on a paper prototype, or on a demo (e.g., PowerPoint) that incorporates only the elements of the system to be tested. Testing is performed in a controlled atmosphere. Users are introduced to the system and are required to perform several key tasks according to preset scenarios. User activities are recorded using two cameras: one that records on-screen activities and a second that records the user's responses and expressions. In addition, usability experts monitoring the test take notes of any item of interest.

5.2.4 Summary
The goal of usability testing is to adapt software to users' actual work styles, rather than forcing users to adapt to a new work style. Usability testing involves having users work with the product and observing their responses to it. Unlike beta testing, which also involves users, it should be done as early as possible in the development cycle. Usability testing is the process of attempting to identify discrepancies between the user interface of a product and the human engineering requirements of its potential users. The real customer is involved as early as possible, even at the stage where only screens drawn on paper are available. Usability testing ensures that the application is easy to work with, limits keystrokes, and is easy to understand. The best way to perform this testing is to bring in experienced, intermediate, and novice users and solicit their input on the usability of the application. Usability testing can be done numerous times during the life cycle.

5.3 Performance Testing
Performance testing is done to verify all the performance-related aspects of the application. Its aim is to identify inefficiencies and bottlenecks in application performance so that they can be identified, analyzed, fixed, and prevented in the future. Performance testing is the system testing of an integrated, black box, partial application against its performance requirements under normal operating circumstances. Software performance testing is used to determine the speed or effectiveness of a computer, network, software program, or device. This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions. Qualitative attributes such as reliability, scalability, and interoperability may also be evaluated. Performance testing is often done in conjunction with stress testing. Performance testing is conducted to:

- Validate the system.
- Cause failures relating to performance requirements:
  o Response time (the average and maximum application response times)
  o Throughput (the maximum transaction rate that the application can handle)
  o Latency (the average and maximum time to complete a system operation)
  o Capacity (the maximum number of objects the application/databases can handle)
- Track and report these failures to development teams so that the associated defects can be fixed.
- Reduce hardware costs by providing information that allows systems engineers to:
  o Identify the minimum hardware necessary to meet performance requirements
  o Tune the application for maximum performance by identifying the optimal system configuration (e.g., by repeating the test using different configurations)
- Provide information that will assist in performance tuning under various workload conditions, hardware configurations, and database sizes (e.g., by helping identify performance bottlenecks).

5.3.1 What Is Performance Testing?
Performance testing is testing to ensure that the application responds within the time limit set by the user.
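Response time, throughput, and latency are all measured quantities. A minimal sketch of timing a system operation repeatedly and reporting the average and maximum response times follows; the `operation` function is a stand-in for a real transaction, not part of any particular tool:

```python
import time

def operation():
    """Stand-in for the system operation being measured (e.g., one transaction)."""
    time.sleep(0.005)  # simulated processing time

def measure_response_times(func, runs):
    """Time repeated runs and report the average and maximum response time."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        func()
        samples.append(time.perf_counter() - start)
    return {"avg": sum(samples) / len(samples), "max": max(samples)}

result = measure_response_times(operation, 20)
print(result)
# A performance requirement might then be checked as, e.g., result["max"] < 0.100
```

Real performance tools automate exactly this loop at scale, but the requirement being verified is still a statement about the average and maximum figures this sketch computes.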

5.3.2 Purpose of Performance Testing
Performance testing is performed to determine how fast some aspect of a system performs under a particular workload. Its purpose is to measure and evaluate response times, transaction rates, and other time-sensitive requirements of an application in order to verify that the performance requirements have been achieved. Examples include response times for on-line processing, processing times for batch work, transaction throughput rates (the number of transactions in a predetermined period), etc.

5.3.3 Benefits of Performance Testing
- Helps improve customer satisfaction by providing customers with a faster, more reliable product.
- Helps identify and fix bottlenecks in an application before rolling it out to customers.

5.3.4 Summary
Performance testing determines whether the program meets its performance requirements. Efficiencies in performance testing are realized through extensive experience, optimization of processes, and optimal selection of tools.

5.4 Load Testing
Load tests are end-to-end performance tests under anticipated production load. Load testing is the process of exercising the system under test by feeding it the largest tasks it can operate with. It is the process of putting demand on a system or device and measuring its response. Load testing is sometimes called volume testing, or longevity/endurance testing. Load testing is done to expose bugs that do not surface in cursory testing, such as memory management bugs, memory leaks, buffer overflows, etc., and to ensure that the application meets the performance baseline established during performance testing. This is done by running regression tests against the application at a specified maximum load. Load testing is done to:

- Cause failures concerning the load requirements, which helps identify defects that are not efficiently found during unit and integration testing.
- Partially validate the application (i.e., determine if it fulfills its scalability requirements, for example when the number of users increases), including its distribution and load-balancing mechanisms.
- Determine if the application will support typical production load conditions.

- Identify the point at which the load becomes so great that the application fails to meet performance requirements.
- Report these failures to the development teams so that the associated defects can be fixed.
- Locate performance bottlenecks, including those in I/O, CPU, network, and database.

5.4.1 What Is Load Testing?
Load testing is subjecting your system to a statistically representative load. Load testing is a non-functional form of system testing. LoadRunner and Rational Robot are front-runner tools for this type of testing. The application is tested against heavy loads, such as testing a Web site under a range of loads to determine at what point the system's response time degrades or fails.

5.4.2 Why Is Load Testing Important?
- It is done to measure and monitor the performance of an e-business infrastructure. Watch how the system handles (or fails to handle) the load of thousands of concurrent users hitting your site before deploying and launching it for the entire world to visit.
- It increases the uptime and availability of mission-critical Internet systems by spotting bottlenecks under large user-stress scenarios before they happen in a production environment.
- It protects IT investments by predicting scalability and performance. IT projects are expensive: the hardware, the staffing, the consultants, the bandwidth, and more add up quickly. Load testing helps avoid wasting money on expensive IT resources and ensures that they will all scale under load.
- It avoids project failures by predicting site behavior under large user loads. Before launching the site, one has to visualize the site's behavior with a large number of users, test high-load scenarios, and take precautions to avoid failures in such scenarios.

5.5 Stress Testing
Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing). The main purpose is to make sure that the system fails and recovers gracefully; this quality is known as recoverability.

Where performance testing demands a controlled environment and repeatable measurements, stress testing joyfully induces chaos and unpredictability. Stress testing is performed to:

- Partially validate the application (i.e., determine if it fulfills its scalability requirements).
- Determine how an application degrades and eventually fails as conditions become extreme. For example, stress testing could involve an extreme number of simultaneous users, extreme numbers of transactions, queries that return the entire contents of a database, queries with an extreme number of restrictions, or entry of the maximum amount of data in a field.
- Report these failures to the development teams so that the associated defects can be fixed.
- Determine if the application will support worst-case production load conditions.
- Provide data that will assist systems engineers in making intelligent decisions regarding future scaling needs.
- Help determine the extent to which the application is ready for launch.
- Provide input to the defect trend analysis effort.

5.5.1 What Is Stress Testing?
Stress testing is testing done by applying load to the application under test beyond the specified limits. Subjecting the system to extreme pressures in a short time span is stress testing.

5.5.2 What Is the Purpose of Stress Testing?
Stress testing helps in determining, for example, the maximum number of requests a Web application can handle in a specific period of time, and at what point the application will overload and break down. The test is designed to determine how heavy a load the Web application can handle. A huge load is generated as quickly as possible in order to stress the application to its limit. The time between transactions is minimized in order to intensify the load on the application, and the time users would need to interact with their Web browsers is ignored. An example is the simultaneous log-on of 1000 users on a particular Website.
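A minimal sketch of generating concurrent load, in the spirit of the "simultaneous log-on" example above: it fires many simulated requests at once from a thread pool and records how long each takes. The `fake_login` target is a stand-in for a real HTTP request to the system under test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_login(user_id):
    """Stand-in for a real login request; a real test would call the site over HTTP."""
    start = time.perf_counter()
    time.sleep(0.01)  # simulated server processing time
    return time.perf_counter() - start

def run_load(num_users):
    """Simulate num_users logging on simultaneously and collect response times."""
    with ThreadPoolExecutor(max_workers=num_users) as pool:
        times = list(pool.map(fake_login, range(num_users)))
    # Report the average and worst-case response time, as performance requirements do.
    return {
        "users": num_users,
        "avg_seconds": sum(times) / len(times),
        "max_seconds": max(times),
    }

report = run_load(100)
print(report)
```

Raising `num_users` until `max_seconds` exceeds the requirement is exactly the "identify the point at which the load becomes too great" activity described above; tools like LoadRunner automate this ramp-up and reporting.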

5.5.3 Summary
- The tester's objective is to force the system to break down under the stress of extreme conditions. When we perform stress testing on an application, the system will fail, but it should fail in a rational manner, without corrupting or losing the customer's data.
- Test your application to the point that it experiences diminished response or breaks down, to determine the application's limitations.
- This testing is conducted to evaluate a system or component at or beyond the limits of its specified requirements, to determine the load under which it fails and how.
- Start stress testing early to catch subtle bugs that need the original developers to fix basic design flaws that may affect many parts of the system.
- It ensures that the application will respond appropriately with many users and activities happening simultaneously.

5.6 Security Testing
Security testing is performed to guarantee that only users with the appropriate authority are able to use the applicable features of the system. Security is a primary concern, to avoid any unwanted penetration into the application. Security testing checks a system, application, or its components against the security requirements and the implementation of the security mechanisms. It also uncovers the application's failure to meet security-related requirements (black box testing) and failure to properly implement security mechanisms (white box/gray box testing), thereby enabling the underlying defects to be identified, analyzed, fixed, and prevented in the future. Security testing covers:

- Requirements: Verifying the application (i.e., determining if it fulfills its security requirements): identification, authentication, authorization, content protection, integrity, intrusion detection, privacy, system maintenance.
- Mechanisms: Determining if the system causes any failures concerning the implementation of its security mechanisms:
  o Encryption and decryption
  o Firewalls
  o Personnel security: passwords

    Digital signatures
    Personal background checks
  o Physical security:
    Locked doors for identification, authentication, and authorization
    Badges for identification, authentication, and authorization
    Cameras for identification, authentication, and authorization
- Cause failures: Causing failures concerning the security requirements helps identify defects that are not efficiently found during other types of testing:
  o The application fails to identify and authenticate a user.
  o The application allows a user to perform an unauthorized function.
  o The application fails to protect its content against unauthorized usage.
  o The application allows the integrity of data or messages to be violated.
  o The application allows undetected intrusion.
  o The application fails to ensure privacy by using an inadequate encryption technique.
- Report failures: It is necessary to report failures to the development teams so that the associated defects can be fixed.
- Determine launch readiness: It helps determine the extent to which the system is ready for launch.
- Project metrics: It helps provide project status metrics.
- Trend analysis: It provides input to the defect trend analysis effort.

5.6.1 Purpose of Security Testing
It helps in determining how well a system protects against unauthorized internal or external access or willful damage.

5.6.2 Summary
Security testing checks whether the system meets its specified security objectives. The tester's aim is to demonstrate the system's failure to fulfill the stated security requirements.
o Beware: it is impossible to prove that a system is impenetrable.
o The objective is to establish sufficient confidence in security.
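One of the failure modes listed above — "the application allows a user to perform an unauthorized function" — can be turned into a concrete test. The role model and the `delete_report` operation below are invented for illustration; the test asserts that a user without the required authority is refused:

```python
# Hypothetical role-based authorization table for illustration.
PERMISSIONS = {
    "admin": {"view_report", "delete_report"},
    "viewer": {"view_report"},
}

class AuthorizationError(Exception):
    pass

def perform(role, action):
    """Allow the action only if the role has been granted that permission."""
    if action not in PERMISSIONS.get(role, set()):
        raise AuthorizationError(f"{role} may not {action}")
    return f"{action} done"

def test_unauthorized_delete_is_refused():
    """Security test: a viewer must NOT be able to delete a report."""
    try:
        perform("viewer", "delete_report")
    except AuthorizationError:
        return True   # correct behavior: the operation was refused
    return False      # security defect: an unauthorized function was allowed

print(test_unauthorized_delete_is_refused())  # True
```

Note that the test passes when the operation is *refused*: security tests frequently assert that something does not happen, which is why a passing suite still cannot prove the system impenetrable, as the summary above warns.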

5.7 Configuration Testing
Configuration testing checks the operation of the software under test with different types of hardware configurations. It is done to check whether the system can work on machines with different configurations (software with hardware). Computers are built using different peripherals, components, and drivers designed by various companies.

5.7.1 Purpose of Configuration Testing
To determine whether the program operates properly when the hardware or software is configured in a required manner.

5.7.2 Summary
It is the process of checking the operation of the software with various types of hardware. For example, for applications that run on Windows-based PCs used in homes and businesses:
o PCs: different manufacturers such as Compaq, Dell, Hewlett-Packard, IBM, and others
o Components: disk drives, video, sound, modem, and network cards
o Options and memory
o Device drivers

5.8 Compatibility Testing
Compatibility testing checks the operation of the software under test with different types of software. Software compatibility testing means checking that your software interacts with and shares information correctly with other software. For example, it measures how well Web pages display on different browser versions. Compatibility testing is used to determine if your software application has issues related to how it functions in conjunction with the operating system and different types of system hardware and software.

5.8.1 Purpose of Compatibility Testing
To evaluate how well the software performs on a particular hardware, software, operating system, browser, or network environment.

5.8.2 Summary
- Testing whether the system is compatible with other systems with which it should communicate.
- It is the process of determining the ability of two or more systems to exchange information.
- In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.
- It means checking that your software interacts with and shares information correctly with other software. For example: what other software (operating systems, web browsers, etc.) is your software designed to be compatible with?

5.9 Installation Testing
Installability testing ensures that all the installation options in the software work properly. Installation testing (in software engineering) can simply be defined as any testing that occurs outside of the development environment. Such testing will frequently occur on the computer system on which the software product will eventually be installed.

5.9.1 Purpose of Installation Testing
It is done to identify the ways in which installation procedures lead to incorrect results; to ensure that the application or component is easy to install; to ensure that time and money are not wasted during the installation process; to improve the morale of the engineers who will install the application or component; to minimize installation defects; to determine whether the installation procedure is documented; and to determine whether the methodology for migration from the old system to the new system is documented.

5.9.2 Summary
- Testing installation procedures is a good way to avoid making a bad impression, since installation makes the first impression on the end user.
- It identifies ways in which the installation procedures lead to incorrect results.
- Installation options are:
  o New
  o Upgrade
  o Customized/Complete
  o Under normal and abnormal conditions
- It is the testing concerned with the installation procedures for the system.

5.10 Recovery Testing
Recovery testing checks a system's ability to recover from failure. It is done to determine whether operations can be continued after a disaster or after the integrity of the system has been lost. This involves reverting to a point where the integrity of the system was known and then reprocessing transactions up to the point of failure. It is used where continuity of operations is essential.

5.10.1 Purpose of Recovery Testing
To verify the system's ability to recover from varying degrees of failure.

5.10.2 Summary
To determine whether the system or program meets its requirements for recovery after a failure.

5.11 Availability Testing
Availability testing is done to verify that functionality remains available to the user whenever the system undergoes a failure. The application is tested for its reliability so that failures, if any, are discovered and removed before deploying the system. Availability tests are conducted to check both the reliability and the availability of an application. Reliability is the degree to which something operates without failure under given conditions during a given time period. The most likely scenarios are tested under normal usage conditions to validate that the application provides the expected service. The measured availability percentage is compared to the original service level agreement. In availability testing, the application is run for a planned period, and failure events are collected along with their repair times. Where reliability testing is about finding defects and reducing the number of failures, availability testing is primarily concerned with measuring and minimizing the actual repair time. The formula for calculating percentage availability is (MTBF / (MTBF + MTTR)) x 100, where MTBF is the mean time between failures and MTTR is the mean time to repair. Notice that as MTTR trends towards zero, the percentage availability approaches 100%; availability testing therefore aims to reduce and eliminate downtime.
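The availability formula can be checked with a small calculation; the MTBF and MTTR figures below are made up for illustration:

```python
def availability_percent(mtbf_hours, mttr_hours):
    """Percentage availability = (MTBF / (MTBF + MTTR)) x 100."""
    return (mtbf_hours / (mtbf_hours + mttr_hours)) * 100

# Example: a system that runs 990 hours between failures and takes
# 10 hours to repair is available 99% of the time.
print(round(availability_percent(990, 10), 2))   # 99.0

# As MTTR trends towards zero, availability approaches 100%.
print(round(availability_percent(990, 0.1), 3))  # 99.99
```

This is why availability testing concentrates on repair time: halving MTTR raises the availability percentage even when the failure rate (MTBF) is unchanged.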

43 5.12 Volume Testing Volume testing is done to check the performance of the application when volume of data being processed in the database is increased. Volume Testing, as its name implies, is testing that purposely subjects a system (both hardware and software) to a series of tests where the volume of data being processed is the subject of the test. Such systems capturing real-time sales or could be database updates and or data retrieval. Volume testing will seek to verify the physical and logical limits to a system s capacity and ascertain whether such limits are acceptable to meet the projected capacity of the organization s business processing Summary Testing where the system is subjected to large volumes of data Testing designed to challenge a system s ability to manage the maximum amount of data over a period to of time. This type of testing also evaluates a system s ability to handle overload situations in an orderly fashion Accessibility Testing Accessibility Testing is an approach to measuring a product s ability to be easily customized or modified for the benefit of users with disabilities. Users should be able to change input and output. Accessibility testing is the process of ensuring that a Web application is accessible to people with disabilities. If your Web application is produced for or by a US government agency, accessibility verification is required in order to prevent violation of the federal law, the potential loss of government contracts, and the potential for costly lawsuits. It can help you prevent functionality problems that could occur when people with disabilities try to access your application with adaptive devices such as screen readers, refreshable Braille displays, and alternative input devices Purpose of Accessibility Testing The goal of Accessibility Testing is to ensure that people with disabilities can access and use the software product as effectively as without disabilities. 
It pinpoints problems within Web sites and products that may otherwise prevent users with disabilities from accessing the information they are searching for. It can help one determine the compliance of the product, i.e. how the product complies with legal requirements regarding accessibility, and its user-friendliness and effectiveness for physically challenged users.

Summary

Enables users with common disabilities to use the application or component. Determines the degree to which the user interface of an application enables users with common or specified (e.g., auditory, visual, physical, or cognitive) disabilities to perform their specified tasks. Examples of accessibility requirements include enabling people with auditory disabilities, colorblindness, physical disabilities, or mild cognitive disabilities to interact with and use the application, for example by interacting with it verbally.

CHAPTER 6: Acceptance Testing

6.1 Introduction

Acceptance testing is the process of evaluating the product against the current needs of its end users. It is usually done by end users or customers after the testing group has successfully completed its testing. Acceptance tests really are requirement artifacts, because they describe the criteria by which the customer will determine whether the system meets their needs. Acceptance testing is a type of high-level testing: it exercises the black-box requirements, identified by your project customers, to which your system must conform. It involves operating the software in production mode for a pre-specified period of time.

6.2 Objective

The objectives of acceptance testing are to:

- Determine whether the application satisfies its acceptance criteria.
- Enable the customer organization to determine whether to accept the application.
- Determine if the application is ready for deployment to the full user community.
- Report any failures to the development teams so that the associated defects can be fixed.

6.3 Acceptance Testing

Acceptance testing is further divided into two categories:

- Contractual acceptance testing
- Non-contractual acceptance testing

If the software is developed under contract, the contracting customer does the acceptance testing. (For example, the contract may specify that proper messages be provided to guide an end user's navigation from one part of the system to another.) If the software is not developed under contract, then acceptance testing is done in the following two ways:

- Alpha Testing
- Beta Testing

Alpha Testing

Alpha testing is usually performed by end users inside the development organization. The testing is done in a controlled environment, and developers are present. Defects found by the end users are noted down by the development team and fixed before release.

Beta Testing

Beta testing is usually performed by end users at the customer's site, i.e. outside the development organization and inside the end users' organization. It is not a controlled environment, and developers are not present. Defects found by the end users are reported back to the development organization.

Once acceptance testing is done and the user/client gives clearance, the next step is to release the software. At the time of release, a final round of last-minute testing is usually done on the release candidate; this is also called Golden Candidate testing.

CHAPTER 7: Black Box Testing

7.1 Introduction

Black box testing is a validation strategy rather than a single type of testing. The types of testing under this strategy are based on, and focused on, testing the requirements and functionality of the work product/software application. It is a testing technique that does not require knowledge of the internal workings or program structure of the system. Black box testing is sometimes also called opaque testing, functional/behavioral testing, or closed box testing. It will not exercise hidden functions (i.e. functions implemented but not described in the functional design specification), and errors associated with them will not be found by black box testing.

7.2 Objectives

The objectives of black box testing are to:

- Validate the system to determine if it fulfills its operational requirements.
- Identify defects that are not easily found during unit and integration testing.
- Report these failures to the development teams so that the associated defects can be fixed.
- Help determine the extent to which the system is ready for launch.

Black box testing verifies the actual behavior of the software against its functional requirements, not against the internal program structure or code. That is the reason black box testing is also considered functional testing. This testing technique is also called behavioral testing, opaque box testing, or simply closed box testing. Consequently, black box testing is not normally carried out by the programmer. This technique treats the system as a black or closed box: the tester knows only the formal inputs and the projected (expected) results, not how the program actually arrives at those results. Hence the tester tests the system based on the functional specifications given to him.

[Figure: the system viewed as a black box, with only its inputs and outputs visible]

7.3 Advantages of Black Box Testing

- Tests are done from an end user's point of view, and it is the end user who must finally accept the system.
- Test cases can be designed as soon as the functional specifications are complete.
- Testing helps to identify vagueness and contradictions in the functional specifications.
- Efficient when used on larger systems.
- The tester and the developer are independent of each other.
- The tester can be non-technical.

7.4 Disadvantages of Black Box Testing

- It is difficult to identify all possible valid and invalid inputs in limited testing time, so writing test cases is slow and difficult.
- It is difficult to identify tricky inputs if the test cases are not developed based on specifications.
- There is a chance of repeating tests already performed by the programmer.

7.5 Black Box Testing Methods

There are three black box testing methods:

1. Equivalence partitioning
2. Boundary value analysis
3. Error guessing

7.5.1 Equivalence Partitioning

Equivalence partitioning is a black box testing technique. All the inputs for which we get the same output can be grouped into the same equivalence class. The tests are then written using test data that represents each equivalence class. The technique is designed to minimize the number of test cases.

7.5.2 How to Identify Equivalence Classes

Take each input condition described in the specification and derive at least two equivalence classes for it. One class represents the set of cases which satisfy the condition (the valid class) and the other represents cases which do not (the invalid class). Following are some general guidelines for identifying equivalence classes:

A. When a numeric value is input to the system and must be within a range of values, identify one valid class (inputs within the valid range) and two invalid equivalence classes (inputs which are too low and inputs which are too high). For example, if an item in inventory can have a quantity of -9999 to 9999, identify the following classes:

- One valid class (-9999 <= QTY <= 9999)
- The invalid class (QTY < -9999)
- The invalid class (QTY > 9999)

B. When an input must have a specific length, identify one valid class (inputs of the correct length) and two invalid equivalence classes (inputs which are too short and inputs which are too long). For example, a 6-digit pin code number would have the equivalence classes:

- Valid equivalence class (pin code length = 6)
- Invalid class (pin code length > 6)
- Invalid class (pin code length < 6)

C. If the requirements state that a particular input item must match one of a set of values and each case will be dealt with in the same way, identify one valid class for values in the set and one invalid class representing values outside the set. For example, if the requirements state that the valid province codes are ON, QU and NB, then identify:

- Valid class: code is one of ON, QU, NB
- Invalid class: code is not one of ON, QU, NB
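Guideline A above can be sketched in code. This is a minimal illustration of the inventory example; the function name and class labels are invented for the sketch:

```python
# A minimal sketch of equivalence partitioning for the inventory example
# above (valid quantity range -9999 to 9999). Names are illustrative.
def qty_class(qty):
    """Map an input quantity to its equivalence class."""
    if qty < -9999:
        return "invalid: too low"
    if qty > 9999:
        return "invalid: too high"
    return "valid"

# One representative test value per equivalence class is enough:
for representative in (-10000, 0, 10000):
    print(representative, "->", qty_class(representative))
```

Instead of testing thousands of quantities, the technique reduces the input domain to three representative tests, one per class.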

7.5.3 Disadvantages of Equivalence Partitioning

- No guidelines for choosing inputs.
- Very limited focus.
- Doesn't test every input.
- Heuristic based.
- It is not guaranteed that the system under test treats all members of an equivalence class in the same way.

7.5.4 Boundary Value Analysis

Boundary value analysis is a black box testing technique. Using this technique, the boundaries of the input domain are tested. More emphasis is placed on the input and output boundaries, as more errors tend to occur at the boundaries of a given domain. It has been widely recognized that input values at the extreme ends of, and just outside of, input domains tend to cause errors in system functionality. In boundary value analysis, values at and just beyond the boundaries of the input domain are used to generate test cases to ensure proper functionality of the system.

Boundary value analysis complements the technique of equivalence partitioning: instead of checking an arbitrary value in the equivalence class, take the values that are at the edge of the domain. For example, for a system that accepts as input a number between one and ten, boundary value analysis would indicate that test cases should be created for the lower and upper bounds of the input domain (1, 10) and for the values just outside those bounds (0, 11) to ensure proper functionality. It is an excellent way to catch common user input errors which can disrupt proper program functionality.

7.5.5 Advantages of Boundary Value Analysis

- Very clear guidelines on determining test cases.
- Very small set of test cases generated.
- Very good at exposing potential user interface/user input problems.

7.5.6 Disadvantages of Boundary Value Analysis

- Does not test dependencies between combinations of inputs.
- Does not test all possible inputs.

Boundary value analysis and equivalence partitioning are used during the test design phase, and their influence is hard to see in the tests once they're implemented. Note that any level of testing (unit testing, system testing, etc.) can use any test design method.
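The one-to-ten example above can be sketched as a small helper that derives the boundary test values for any inclusive integer range; the function name is invented for the sketch:

```python
# A minimal sketch of boundary value analysis for an inclusive integer
# range, mirroring the 1..10 example above.
def boundary_values(low, high):
    """Values at the boundaries and just outside them."""
    return [low - 1, low, high, high + 1]

print(boundary_values(1, 10))  # [0, 1, 10, 11]
```

The four generated values cover both bounds of the valid range and the two invalid values immediately beyond them.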

7.5.7 Error Guessing

Error guessing is an ad hoc approach that depends entirely on the intuition, experience and knowledge of the tester. Error guessing is more a testing art than a testing science, but it can be very effective given a tester's familiarity with the history of the system. Error guessing involves making an itemized list of the errors expected to occur in a particular area of the system and then designing a set of test cases to check for these expected errors.

CHAPTER 8: Testing Types

8.1 Introduction

There are several other types of testing used in the software industry. Besides validation activities such as unit, integration, system and acceptance testing, we have the following types:

- Mutation Testing
- Progressive Testing
- Regression Testing
- Retesting
- Localization Testing
- Internationalization Testing

8.2 Mutation Testing

Mutation testing is also called fault injection testing or be-bugging. In this testing, we deliberately inject faults into the code. Mutation testing is a fault-based testing technique based on the assumption that a program is well tested if all simple faults are predicted and removed; complex faults are coupled with simple faults and are thus detected by tests that detect simple faults. Mutation testing is the process of intentionally adding known faults to a computer program in order to monitor the rate of their detection and removal, and to estimate the number of faults remaining in the program.

The formula used for this estimate is:

FU = FG x (FE / FEG)

where:

FU = number of undetected errors
FG = number of non-seeded errors detected
FE = number of seeded errors
FEG = number of seeded errors detected

8.3 Progressive Testing

Whenever we start any testing activity (unit testing/integration testing/function testing/system testing) for the first time, it is termed progressive testing. Most test cases, unless they are truly thrown away, begin as progressive test cases and eventually become regression test cases for the life of the product.
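The fault-seeding estimate given in 8.2 above can be sketched as a one-line function. This uses the document's formula as stated; the example counts are made up for illustration:

```python
# A minimal sketch of the fault-seeding estimate from 8.2 above,
# using the document's formula FU = FG * (FE / FEG).
def undetected_estimate(fg, fe, feg):
    """Estimate of errors remaining, per FU = FG * (FE / FEG)."""
    return fg * (fe / feg)

# Example: 20 real (non-seeded) errors found, 50 faults seeded,
# 40 of the seeded faults detected by the test suite:
print(undetected_estimate(20, 50, 40))  # 25.0
```

Intuitively, if the tests caught only 80% of the seeded faults, the real-fault count found so far is scaled up by the same ratio.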

8.4 Regression Testing

Code changes made to fix a bug may affect some other functionality. Regression testing is done to verify the impact on other functionalities: testing a program that has been modified, to verify that the modifications have not caused unintended effects and that the program still complies with its specified requirements.

For example, suppose a login window is to be tested. The window has OK and Cancel buttons in addition to User Id and Password fields. Build 1 of the login window is tested to check the functionality of the OK button, and a tester finds a defect in it. The defect is reported, the developer fixes it, and a new build (build 2) is given back to the testing team. Now we perform regression testing to find out whether fixing the defect in the OK button has led to any changes in the behavior of the Cancel button.

8.5 Retesting

When a defect is detected and fixed, the software should be retested to confirm that the original defect has been successfully removed. This is called retesting.

8.6 Localization Testing

The process of adapting software to a specific locale, taking into account its language, dialect, local conventions and culture, is called localization. Testing the localized software is called localization testing. Localization is abbreviated L10N, as there are 10 letters between the L and the N.

If you decide to localize, you should be familiar with the scope and purpose of localization testing. Localizers translate the product UI and sometimes change some initial settings to adapt the product to a particular local market. This definitely reduces the "world-readiness" of the application: a globalized application whose UI and documentation are translated into a language spoken in one country will retain its functionality, but it will become less usable in countries where that language is not spoken. Localization testing checks how well the build has been translated into a particular target language.
This test is based on the results of globalization testing, where the functional support for that particular locale has already been verified. If the product is not globalized enough to support a given language, you probably will not try to localize it into that language in the first place. You should be aware that pseudo-localization, which was discussed earlier, does not completely eliminate the need for functionality testing of a localized application. When you test for

localizability before you localize, the chances of having serious functional problems due to localization are slim. However, you still have to check that the application you're shipping to a particular market really works; now you can do it in less time and with fewer resources.

8.7 Internationalization Testing

Internationalization is the process of designing and coding a product so that it can perform properly when it is modified for use in different languages and locales; that is, designing an application so that it can be adapted to various languages and regions without engineering changes. The term internationalization is often abbreviated I18N, because there are 18 letters between the first i and the last n. Localization refers to the process, on a properly internationalized base product, of translating messages and documentation as well as modifying other locale-specific files.

An internationalized program has the following characteristics:

- With the addition of localized data, the same executable can run worldwide.
- Textual elements, such as status messages and GUI component labels, are not hard-coded in the program. Instead they are stored outside the source code and retrieved dynamically.
- Support for new languages does not require recompilation.
- Culturally dependent data, such as dates and currencies, appear in formats that conform to the end user's region and language.
- It can be localized quickly.
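The "no hard-coded text" characteristic above can be sketched as follows. The message catalogs and function names here are invented for illustration; real products typically use resource bundles or gettext-style catalogs:

```python
# A minimal sketch of externalized UI strings: text lives in per-locale
# tables outside the program logic, so adding a language means adding
# data, not recompiling. The catalogs below are invented examples.
MESSAGES = {
    "en": {"greeting": "Welcome", "exit": "Goodbye"},
    "fr": {"greeting": "Bienvenue", "exit": "Au revoir"},
}

def t(locale, key):
    """Look up a UI string for a locale, falling back to English."""
    return MESSAGES.get(locale, MESSAGES["en"])[key]

print(t("fr", "greeting"))  # Bienvenue
print(t("de", "exit"))      # Goodbye (no German catalog, falls back)
```

Localization then consists of adding a new entry to the catalog, and localization testing checks the quality of those translated entries in context.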

CHAPTER 9: White Box Testing

9.1 Introduction

White box testing (WBT) is a testing strategy that uses the control structure described as part of component-level design to derive test cases. White box testing deals with the internal logic and internal structure of the code. WBT is also called structural testing, glass box testing, transparent box testing and clear box testing.

9.2 Objective

Tests written using the WBT strategy cover the code that has been written: its branches, paths, statements, internal logic, and so on. WBT requires the tester to look into the code and find out which unit/statement/chunk of the code is malfunctioning. It does not account for errors caused by omission, and all of the code under test must be available and readable.

White box testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that:

- Guarantee that all independent paths within a module have been exercised at least once.
- Exercise all logical decisions on their true and false sides.
- Execute all loops at their boundaries and within their operational bounds.
- Exercise internal data structures to ensure their validity.

9.3 Advantages of WBT

- As knowledge of the internal coding structure is a prerequisite, it becomes very easy to find out which type of input/data can help in testing the application effectively.
- It helps in optimizing the code.
- It helps in removing extra lines of code, which can bring in hidden defects.

9.4 Disadvantages of WBT

- As knowledge of the internal coding structure is a prerequisite, a skilled tester is needed to perform this type of testing, which increases the cost.
- It is nearly impossible to look into every bit of code to find hidden errors, which may create problems and result in failure of the application.
- It fails to detect missing functions.

9.5 Techniques for White Box Testing

White box testing can be done by:

1. Data coverage
2. Code coverage

9.5.1 Data Coverage

Data flow is monitored or examined throughout the program. We can also keep track of the changes to data as it flows between the modules of the application; for example, a watch window can be used to monitor the values of variables and expressions.

9.5.2 Code Coverage

Code coverage analysis (test coverage analysis) is a white box testing technique. Code coverage analysis is the process of:

- Finding areas of a program not exercised by a set of test cases.
- Creating additional test cases to increase coverage.
- Determining a quantitative measure of code coverage, which is an indirect measure of quality.

An optional aspect of code coverage analysis is identifying redundant test cases that do not increase coverage. Code coverage can be implemented using basic measures like:

- Statement coverage
- Branch/decision coverage
- Condition coverage
- Path coverage

1. Statement Coverage

This measure reports whether each executable statement is encountered. It is also known as line coverage, segment coverage and basic block coverage. If faults were evenly distributed through the code, the percentage of executable statements covered would reflect the percentage of faults discovered. Statement coverage does not report whether loops reach their termination condition, only whether the loop body was executed. With C, C++, and Java, this limitation affects loops that contain break statements. Since do-while loops always execute at least once, statement coverage considers them the same rank as non-branching statements. Statement coverage is completely insensitive to the logical operators (|| and &&), and it cannot distinguish consecutive switch labels.

2. Branch/Decision Coverage

This measures whether Boolean expressions tested in control structures (such as the if-statement and while-statement) evaluated to both true and false. The entire Boolean expression is considered one true-or-false predicate regardless of whether it contains logical-and or logical-or operators. Additionally, this measure includes coverage of switch-statement cases, exception handlers, and interrupt handlers. It is also known as branch coverage, all-edges coverage, or basis path coverage. A disadvantage is that decision coverage ignores branches within Boolean expressions. For example, consider the following C/C++/Java code fragment:

if (a > b && c != 5)
    a = a + b;
else
    a = a - b;

For the above example, the condition combinations involved are:

- (a > b) and (c == 5)
- (a > b) and (c != 5)
- (a <= b) and (c == 5)
- (a <= b) and (c != 5)
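The fragment above can be sketched in Python, exercising all four condition combinations; the function name and test values are invented for the sketch:

```python
# A minimal sketch of the fragment above. Two tests (one true, one false
# outcome of the whole decision) already give full branch coverage, yet
# that leaves some combinations of the sub-conditions untried.
def decide(a, b, c):
    if a > b and c != 5:
        return a + b
    return a - b

# All four combinations of (a > b) and (c != 5):
print(decide(3, 1, 7))  # a > b,  c != 5 -> then-branch: 4
print(decide(3, 1, 5))  # a > b,  c == 5 -> else-branch: 2
print(decide(1, 3, 7))  # a <= b, c != 5 -> else-branch: -2
print(decide(1, 3, 5))  # a <= b, c == 5 -> else-branch: -2
```

Only the first and second tests are needed to take both branches, which is why branch coverage alone can leave the last two sub-condition combinations unexercised.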

Branch coverage fails to distinguish such conditions, so a coverage measure is needed that covers all the individual conditions.

For example, consider: if a < b then S1 else S2. Branch coverage subsumes statement coverage: it requires that test data be created so that both S1 and S2 are executed and tested.

3. Condition Coverage

Condition testing is a test case design method that exercises the logical conditions contained in a program module. Condition coverage reports the true or false outcome of each Boolean sub-expression, separated by logical-and and logical-or, wherever they occur. Condition coverage measures the sub-expressions independently of each other. This measure is similar to decision coverage but has better sensitivity to the control flow.

For example, consider again: if a < b then S1 else S2. Condition coverage subsumes branch coverage. In condition coverage, all possible values should be tested for each clause, here a and b, to make sure each outcome of each condition (here, true and false for a < b) is exercised.

4. Path Coverage

Basis path testing is a white box testing technique that enables the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least once during testing. Path coverage can be calculated using McCabe's cyclomatic complexity.

So, we can conclude that:

- 100% statement coverage is not necessarily 100% decision coverage.
- Decision coverage includes statement coverage, since exercising every branch must lead to exercising every statement.
- Path coverage includes decision coverage.
- 100% condition coverage implies 100% decision coverage and 100% statement coverage.

9.6 Cyclomatic Complexity

Cyclomatic complexity is the most widely used member of a class of static software metrics.
Cyclomatic complexity may be considered a broad measure of soundness and confidence for a program. Introduced by Thomas McCabe in 1976, it measures the number of linearly independent paths through a program module. This measure provides a single ordinal number

that can be compared to the complexity of other programs. Cyclomatic complexity is often referred to simply as program complexity, or as McCabe's complexity, and it is often used in concert with other software metrics. As one of the more widely accepted software metrics, it is intended to be independent of language and language format. Cyclomatic complexity has also been extended to encompass the design and structural complexity of a system.

Cyclomatic complexity is used to measure the amount of decision logic in a single software module. It is used for two related purposes in the structured testing methodology. First, it gives the number of recommended tests for the software. Second, it is used during all phases of the software lifecycle, beginning with design, to keep software reliable, testable, and manageable. Cyclomatic complexity is based entirely on the structure of the software's control flow graph.

Cyclomatic complexity is used in white box testing. It enables the test case designer to derive the logical complexity of a software program, and can be used for defining the basis set of execution paths. The basis set guarantees the execution of every statement in the program at least once during testing. Thomas McCabe designed this method in 1976. Cyclomatic complexity gives you the minimum number of test cases you have to design in order to confirm that each and every statement of the program has been executed at least once.

One simple notation, the flow graph, is used. The flow graph depicts the logical control flow using these notations:

- Each structured construct (e.g. loops, decisions, switch cases) has a corresponding flow graph symbol.
- Each circle, called a flow graph node, represents one or more procedural statements.
- The nodes are connected to each other by edges.

The structured constructs in flow graph form are as given below:

- Sequence
- If-else condition
- While condition
- Case condition

[Figure: flow graph symbols for each of the construct types above]

Some rules to be followed while calculating cyclomatic complexity:

1. A sequence of process boxes and a decision diamond can map into a single node.
2. An edge must terminate at a node, even if the node does not represent any procedural statements.

Predicate node: a node which has two or more outgoing edges.
Bounded region: a region which is totally surrounded by nodes and edges.

Let us take some examples.

EXAMPLE 1

1. main()
2. {
3.     int a, b, c;
4.     printf("Enter First Number: ");
5.     scanf("%d", &a);
6.     printf("Enter Second Number: ");
7.     scanf("%d", &b);

8.     if (a > b)
9.     {
10.        c = a - b;
11.        printf("The subtraction is %d", c);
12.    }
13.    else
14.    {
15.        c = a + b;
16.        printf("The addition is %d", c);
17.    }
18.    printf("Thank You");
19. }

The flow graph can be drawn as shown below:

[Figure: flow graph with five nodes - statements 1 to 7 collapsed into one node, predicate node 8, the if branch (9 to 12), the else branch (13 to 17), and the closing statements 18 to 19]

In this flow graph, the number of nodes N = 5, while the number of edges E = 5. By using the formulas:

i. C.C. = No. of Edges - No. of Nodes + 2 = 5 - 5 + 2 = 2

(In the above flow graph, node 8 is the predicate node.)

ii. C.C. = No. of Predicate Nodes + 1 = 1 + 1 = 2

iii. C.C. = No. of Bounded Regions + 1 = 1 + 1 = 2

Therefore, the cyclomatic complexity of the program code in example 1 above is 2.

EXAMPLE 2

1. main()
2. {
3.     int a, b, i;
4.     printf("Enter the Number: ");
5.     scanf("%d", &a);
6.     b = 0;
7.     i = 1;
8.     while (i < 6)
9.     {
10.        b = b + i;
11.        i++;
12.    }
13.    printf("The addition is: %d", b);
14.    printf("Thank You");
15. }

The flow graph of this program is:

[Figure: flow graph with three nodes - predicate node 8 (with statements 1 to 7 folded in), the loop body (9 to 12), and the exit statements (13 to 15)]

In this flow graph, the number of nodes N = 3, while the number of edges E = 3. By using the formulas:

i. C.C. = No. of Edges - No. of Nodes + 2 = 3 - 3 + 2 = 2

(In the above flow graph, node 8 is the predicate node.)

ii. C.C. = No. of Predicate Nodes + 1 = 1 + 1 = 2

iii. C.C. = No. of Bounded Regions + 1 = 1 + 1 = 2

Therefore, the cyclomatic complexity of the program code in example 2 above is 2.

Let us now understand why we should calculate cyclomatic complexity. The cyclomatic complexity number helps us understand the complexity of the program. If the cyclomatic complexity number is large, the program is highly complex and there is high risk associated with it; if the number is small, the program is less complex and carries low risk. The table given below makes this concept clearer.

9.6.1 Usage of Cyclomatic Complexity

1. Risk Evaluation: classification of cyclomatic complexity against the relative risk of the program.

Cyclomatic Complexity | Risk Evaluation
1-10                  | a simple program, without much risk
11-20                 | more complex, moderate risk
21-50                 | complex, high-risk program
greater than 50       | untestable program (very high risk)

2. Code Development Risk Analysis: while code is under development, it can be measured for complexity to assess its inherent risk.

3. Test Planning: mathematical analysis of cyclomatic complexity gives the exact number of test cases needed to test every decision point in a program. This analysis can be used in test planning. For example, a large, complex module will require a prohibitive number of test cases; this number can be reduced to a practical size by breaking the module into smaller, less complex sub-modules.
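The three equivalent formulas used in the examples above can be sketched as tiny helpers; the function names are invented for the sketch:

```python
# A minimal sketch of the three cyclomatic complexity formulas used in
# the examples above.
def cc_from_graph(edges, nodes):
    return edges - nodes + 2

def cc_from_predicates(predicate_nodes):
    return predicate_nodes + 1

def cc_from_regions(bounded_regions):
    return bounded_regions + 1

# Example 1 (E = 5, N = 5) and example 2 (E = 3, N = 3) each have one
# predicate node and one bounded region, so all three formulas agree:
print(cc_from_graph(5, 5))    # 2
print(cc_from_graph(3, 3))    # 2
print(cc_from_predicates(1))  # 2
print(cc_from_regions(1))     # 2
```

Agreement among the three formulas is a useful cross-check when computing complexity by hand from a flow graph.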

9.6.2 Advantages of McCabe Cyclomatic Complexity

- Can be used as an ease-of-maintenance metric.
- Used as a quality metric; gives the relative complexity of various designs.
- Can be computed earlier in the life cycle than Halstead's metrics.
- Measures the minimum effort and best areas of concentration for testing.
- Guides the testing process by limiting the program logic during development.
- Easy to apply.

9.6.3 Drawbacks of McCabe Cyclomatic Complexity

- Cyclomatic complexity is a measure of the program's control complexity, not its data complexity.
- The same weight is placed on nested and non-nested loops, even though deeply nested conditional structures are harder to understand than non-nested structures.
- It may give a misleading figure for a program with a lot of simple comparisons and decision structures; the fan-in/fan-out method would probably be more applicable there, as it can track data flow.

9.6.4 Limiting Cyclomatic Complexity to 10

There are many good reasons to limit cyclomatic complexity. Overly complex modules are more prone to error, harder to understand, harder to test, and harder to modify. Deliberately limiting complexity at all stages of software development, for example as a departmental standard, helps avoid the pitfalls associated with high-complexity software. Many organizations have successfully implemented complexity limits as part of their software programs. The precise number to use as a limit, however, remains somewhat controversial. The original limit of 10 as proposed by McCabe has significant supporting evidence, but limits as high as 15 have been used successfully as well. Limits over 10 should be reserved for projects that have several operational advantages over typical projects, for example experienced staff, formal design, a modern programming language, code walkthroughs, and a comprehensive test plan.
In other words, an organization can pick a complexity limit greater than 10, but only if it is sure it knows what it is doing and is willing to devote the additional testing effort required by more complex modules. Somewhat more interesting than the exact complexity limit are the exceptions to that limit. For example, McCabe originally recommended exempting modules consisting of single multi-way decision ("switch" or "case") statements from the complexity limit. The multi-way decision issue has been interpreted in many ways over the years, sometimes with disastrous results.

9.6.5 Measurement of Cyclomatic Complexity

Cyclomatic complexity measurement tools are typically bundled inside commercially available CASE toolsets, where it is usually one of several metrics offered. Applying complexity measurements requires a small amount of training. The fact that a code module has high cyclomatic complexity does not, by itself, mean that it represents excess risk, or that it can or should be redesigned to make it simpler; more must be known about the specific application.

9.7 How to Calculate Statement, Branch/Decision and Path Coverage for ISTQB Exam Purposes

Statement Coverage: the test cases are executed in such a way that every statement of the code is executed at least once.

Branch/Decision Coverage: this coverage criterion requires enough test cases that each condition in a decision takes on all possible outcomes at least once, and each point of entry to a program or subroutine is invoked at least once; that is, every branch (decision) is taken each way, true and false. It helps in validating all the branches in the code, making sure that no branch leads to abnormal behavior of the application.

Path Coverage: the test cases are executed in such a way that every path is executed at least once. All possible control paths are taken, including all loop paths taken zero times, once, and multiple (ideally, the maximum number of) times. In the path coverage technique, the test cases are prepared based on the logical complexity measure of a procedural design, and every statement in the program is guaranteed to be executed at least once. Flow graphs, cyclomatic complexity and graph metrics are used to arrive at the basis paths.

How to calculate statement coverage, branch coverage and path coverage:

Draw the flow graph in the following way:

- Nodes represent entries, exits, decisions and each statement of code.
- Edges represent non-branching and branching links between nodes.
For example:

Read P
Read Q
IF P+Q > 100 THEN
    Print "Large"
ENDIF
IF P > 50 THEN
    Print "P Large"
ENDIF

Calculate the statement coverage, branch coverage and path coverage.

Solution:

[Figure: flow graph for the example, with nodes 1-5 and edges A-H]

Statement Coverage (SC): To calculate statement coverage, find the smallest number of paths that together cover all the nodes. Here, by traversing path 1A-2C-3D-E-4G-5H, all the nodes are covered. Since a single path covers all the nodes, the statement coverage in this case is 1.

Branch Coverage (BC): To calculate branch coverage, find the minimum number of paths that ensure all the edges are covered. In this case, no single path covers all the edges at one go. By following path 1A-2C-3D-E-4G-5H, the maximum number of edges (A, C,

D, E, G and H) are covered, but edges B and F are left out. To cover these edges, we can follow 1A-2B-E-4F. Combining the two paths ensures that all the edges are traversed, so the branch coverage is 2. The aim is to cover all possible true/false decision outcomes.

Path Coverage (PC): Path coverage ensures that all the paths from start to end are covered. All possible paths are:

1A-2B-E-4F
1A-2B-E-4G-5H
1A-2C-3D-E-4G-5H
1A-2C-3D-E-4F

So the path coverage is 4. Thus, for the above example, SC=1, BC=2 and PC=4.

REMEMBER
100% path coverage implies 100% statement coverage.
100% branch/decision coverage implies 100% statement coverage.
100% path coverage implies 100% branch/decision coverage.
Branch coverage and decision coverage are the same.
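The worked example above can be rendered in Python (a hedged sketch: the chapter's Read/Print pseudocode is modelled here as a function, and branch outcomes are recorded by hand in a set). It shows why one test achieves full statement coverage while branch coverage needs two:

```python
def classify(p, q, trace):
    """Python rendering of the chapter's pseudocode; `trace` records
    which of the four branch outcomes have executed so far."""
    output = []
    if p + q > 100:                  # decision 1
        trace.add("D1-true")
        output.append("Large")
    else:
        trace.add("D1-false")
    if p > 50:                       # decision 2
        trace.add("D2-true")
        output.append("P Large")
    else:
        trace.add("D2-false")
    return output

trace = set()
# Path 1A-2C-3D-E-4G-5H: both decisions true, so every statement executes
# and a single test gives 100% statement coverage (SC = 1).
assert classify(p=60, q=50, trace=trace) == ["Large", "P Large"]
# The false edges (B and F) remain uncovered; one more test completes
# branch coverage (BC = 2).
assert classify(p=10, q=10, trace=trace) == []
assert trace == {"D1-true", "D1-false", "D2-true", "D2-false"}
```

Real coverage tools instrument the code automatically; the manual `trace` set here only makes the edge-counting argument of the example concrete.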

CHAPTER 10: Test Cases

10.1 Introduction

Test cases are test conditions written to detect a bug. The term test case describes a case that tests the validity of a particular condition. Test cases are useful because they establish principles and thereby serve as a precedent for future similar cases.

10.2 Objective

The main objective of writing test cases is to determine whether a requirement is fully satisfied, and to put down the conditions, the steps involved, and the expected result after following the steps, in a structured format. In software engineering, a test case is a set of conditions under which a tester will determine if a requirement upon an application is partially or fully satisfied. It may take many test cases to determine that a requirement is fully satisfied. In order to fully test that all the requirements of an application are met, there must be at least one test case for each requirement, unless a requirement has sub-requirements; in that situation, each sub-requirement must have at least one test case. Some methodologies, like the Rational Unified Process (RUP, an iterative software development process created by the Rational Software Corporation), recommend creating at least two test cases for each requirement: one should perform positive testing of the requirement and the other should perform negative testing. If an application is created without formal requirements, then the test cases are written based on the accepted normal operation of programs of a similar class. What characterizes a formal, written test case is that there is a known input and an expected output, which are worked out before the test is executed. The known input should test a precondition and the expected output should test a postcondition. Under special circumstances, there may be a need to run the test, produce results, and then have a team of experts evaluate whether the results can be considered a pass.
This often happens when determining performance numbers for new products; the first test is taken as the baseline for subsequent test/product release cycles. Written test cases include a description of the functionality to be tested, taken from either the requirements or the use cases, and the preparation required to ensure that the test can be conducted.
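The RUP recommendation above (at least one positive and one negative test case per requirement) can be sketched in Python. The requirement and function under test here are made up for illustration only:

```python
# Hypothetical requirement (not from the text): "the system accepts only
# passwords of 8 or more characters".
def accept_password(password: str) -> bool:
    """Illustrative function under test."""
    return len(password) >= 8

def test_accept_valid_password():
    # Positive test: the known input meets the precondition,
    # the expected output (postcondition) is True.
    assert accept_password("s3cret-pw") is True

def test_reject_short_password():
    # Negative test: the input violates the requirement,
    # the expected output is False.
    assert accept_password("short") is False

test_accept_valid_password()
test_reject_short_password()
```

Both the known input and the expected output are fixed before execution, which is what makes these formal test cases rather than ad hoc checks.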

10.3 Structure of Test Cases

A test case definition consists of three main parts with subsections:

Introduction/overview contains general information about the test case.
o Identifier: a unique identifier of the test case for further references, for example while describing a found defect.
o Test case author/creator: the name of the tester or test designer who created the test or is responsible for its development.
o Version: the version of the current test case definition.
o Name: the name of the test case should be a human-oriented title which allows one to quickly understand the test case's purpose and scope.
o Objective: the purpose or a short description of the test, i.e. what functionality it checks.
o Pre-requisites: the prerequisites of the software needed to execute the test.

Test case activity
o Testing environment/configuration contains information about the configuration of hardware or software which must be met while executing the test case.
o Initialization describes actions that must be performed before test case execution, for example opening some file.
o Finalization describes actions to be done after the test case is performed. For example, if the test case crashes the database, the tester should restore it before other test cases are performed.
o Actions: the step-by-step actions to be done to complete the test.
o Input data description.

Expected results
o Contains a description of what the tester should see after all test steps have been completed. Usually test cases do not contain actual results; those should be described in defect reports or in testing reports.

10.4 Test Case Template

Test Case ID:
Test Case Name:

Description/Objective: (If necessary, write description text)
Pre-requisites for this test case: (If necessary, write pre-condition text)
Author/Creator:    Date of Draft:
Reviewer:          Date of Review:

The step table has the following columns: Step No. | Step Description | Input/Test Data | Expected Result | Actual Result | Status (Pass/Fail) | Defect ID | Remarks

Test Case Status: Pass/Fail

Test Case Format
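The template above maps naturally onto a record type. A minimal Python sketch (the field names mirror the template's sections; they are not taken from any particular tool, and the sample values are made up):

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class TestCase:
    # Introduction/overview part of the template
    identifier: str
    name: str
    author: str
    version: str
    objective: str
    prerequisites: List[str] = field(default_factory=list)
    # Test case activity: one (action, input_data, expected_result) per step
    steps: List[Tuple[str, str, str]] = field(default_factory=list)

tc = TestCase(
    identifier="TC-LOGIN-001",
    name="Reject empty username",
    author="A. Tester",
    version="1.0",
    objective="Verify the login form rejects an empty username",
    prerequisites=["Application installed", "Login page reachable"],
    steps=[("Leave username blank and submit", "username=''",
            "Error message 'username required' is shown")],
)
assert tc.identifier == "TC-LOGIN-001"
assert len(tc.steps) == 1
```

Actual results and pass/fail status are deliberately absent from the record, matching the note above that they belong in defect or testing reports, not in the test case itself.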

CHAPTER 11: Test Planning

11.1 Introduction

The ultimate goal of the test planning process is communicating the software test team's intent, its expectations, and its understanding of the testing that is to be performed. The planning process covers the scope, approach, resources, and schedule of the testing activities. Test planning is a process in which every aspect of testing is considered. The test plan is a by-product of the detailed test planning process: it is a document that covers the test planning.

11.2 Objectives

To identify the items that are subject to testing
To communicate, at a high level, the extent of testing
To define the roles and responsibilities for test activities
To provide an accurate estimate of the effort required to complete testing as per the plan
To define the infrastructure and support required

11.3 IEEE Standard for Software Test Documentation (ANSI/IEEE Standard 829)

This is a summary of ANSI/IEEE Standard 829. It describes a test plan as "a document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning." This standard specifies the following test plan outline:

1. Test Plan Identifier
A unique identifier

2. Introduction
Summary of the items and features to be tested
Need for and history of each item (optional)
References to related documents such as the project authorization, project plan, QA plan, configuration management plan, relevant policies, and relevant standards

References to lower-level test plans

3. Test Items
Test items and their versions
Characteristics of their transmittal media
References to related documents such as the requirements specification, design specification, user guide, operations guide, and installation guide
References to bug reports related to test items
Items which are specifically not going to be tested (optional)

4. Features To Be Tested
All software features and combinations of features to be tested
References to the test-design specifications associated with each feature and combination of features

5. Features Not To Be Tested
All features and significant combinations of features which will not be tested
Reasons why these features won't be tested

6. Approach
Overall approach to testing
For each major group of features or combinations of features, specify the approach
Specify the major activities, techniques, and tools which are to be used to test the groups
Specify a minimum degree of comprehensiveness required
Identify which techniques will be used to judge comprehensiveness
Specify any additional completion criteria
Specify the techniques which are to be used to trace requirements
Identify significant constraints on testing, such as test-item availability, testing-resource availability, and deadlines

7. Item Pass/Fail Criteria
Specify the criteria to be used to determine whether each test item has passed or failed testing

8. Suspension Criteria And Resumption Requirements
Specify the criteria to be used to suspend the testing activity

Specify the testing activities which must be redone when testing is resumed

9. Test Deliverables
Identify the deliverable documents: test plan, test design specifications, test case specifications, test procedure specifications, test item transmittal reports, test logs, test incident reports, test summary reports
Identify test input and output data
Identify test tools (optional)

10. Testing Tasks
Identify the tasks necessary to prepare for and perform testing
Identify all task interdependencies
Identify any special skills required

11. Environmental Needs
Specify the necessary and desired properties of the test environment: physical characteristics of the facilities including hardware, communications and system software, the mode of usage (i.e., stand-alone), and any other software or supplies needed
Specify the level of security required
Identify the special test tools needed
Identify any other testing needs
Identify the source for all needs which are not currently available

12. Responsibilities
Identify the groups responsible for managing, designing, preparing, executing, witnessing, checking, and resolving
Identify the groups responsible for providing the test items identified in the Test Items section
Identify the groups responsible for providing the environmental needs identified in the Environmental Needs section

13. Staffing And Training Needs
Specify staffing needs by skill level
Identify training options for providing the necessary skills

14. Schedule
Specify test milestones
Specify all item transmittal events
Estimate the time required to do each testing task
Schedule all testing tasks and test milestones
For each testing resource, specify its periods of use

15. Risks And Contingencies
Identify the high-risk assumptions of the test plan
Specify contingency plans for each

16. Approvals
Specify the names and titles of all persons who must approve the plan
Provide space for signatures and dates
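One practical use of the outline above is a completeness check on a draft plan. A hedged Python sketch (the section list below is abbreviated from the sixteen headings above, and the draft plan is invented):

```python
# Abbreviated from the 16-section IEEE 829 outline above; extend as needed.
REQUIRED_SECTIONS = [
    "Test Plan Identifier", "Introduction", "Test Items",
    "Features To Be Tested", "Features Not To Be Tested", "Approach",
    "Item Pass/Fail Criteria", "Test Deliverables", "Schedule", "Approvals",
]

def missing_sections(plan: dict) -> list:
    """Return the outline sections absent from a draft test plan."""
    return [s for s in REQUIRED_SECTIONS if s not in plan]

# A hypothetical draft plan with only three sections filled in:
draft = {
    "Test Plan Identifier": "TP-2024-01",
    "Introduction": "System testing of the billing subsystem",
    "Approach": "Risk-based system testing",
}
gaps = missing_sections(draft)
assert "Schedule" in gaps and "Approach" not in gaps
```

A review checklist like this does not judge the quality of each section; it only flags outright omissions before the plan goes out for approval.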

CHAPTER 12: Configuration Management

12.1 Introduction

Software undergoes changes while it is being built, and those changes need to be controlled effectively. Configuration Management (CM) is a group activity that keeps details of all the changes that take place throughout the process and maintains all versions of builds. Configuration Management can be defined as:
The process of identifying and defining the configuration items in a system.
Controlling the release and change of these items throughout the system life cycle.
Recording and reporting the status of configuration items and change requests.
Verifying the completeness and correctness of configuration items.

12.2 Objective

Configuration Management keeps track of the versions of the application currently being tested; it reports the problems and manages the list of issues and problems found by the testers. Change control is used to keep track of the problems that need to be corrected in the present release, and also to keep a list of those problems that will not be fixed in the immediate future.

Problems resulting from poor Configuration Management:
Can't reproduce a fault reported by a customer.
Can't roll back to a previous subsystem.
One change overwrites another.
An emergency fault fix needs testing, but the tests have been updated to a new software version.
Which code changes belong to which version?
Faults which were fixed in an old version reappear.
Shouldn't that feature be in this version?

Configuration Management is an engineering management procedure that includes:
Configuration identification
Configuration control
Configuration status accounting
Configuration audit
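The four CM activities above can be made concrete with a toy status-accounting record. This is an illustrative sketch only, not modelled on any real CM tool; the item name and change-request IDs are invented:

```python
class ConfigItem:
    """Toy configuration item: identification (name), change control
    (recorded change requests), and status accounting (history report)."""

    def __init__(self, name: str):
        self.name = name
        self.history = []            # list of (version, change_request)

    def record_change(self, version: str, change_request: str):
        # Change control: every new version is tied to a change request.
        self.history.append((version, change_request))

    def current_version(self):
        return self.history[-1][0] if self.history else None

    def status_report(self):
        # Status accounting: report every recorded change of this item.
        return [f"{self.name} {v}: {cr}" for v, cr in self.history]

item = ConfigItem("billing-module")
item.record_change("1.0", "CR-101 initial release")
item.record_change("1.1", "CR-102 fix rounding fault")
assert item.current_version() == "1.1"
```

Because the full history is retained, rolling back to a previous version stays possible — exactly the capability whose absence appears in the "problems resulting from poor CM" list above.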

Products for Configuration Management in testing:
Test plans
Test designs
Test cases
o Test input
o Test data
o Test scripts
o Expected results
o Actual results
o Test tools

12.3 Configuration Management Tools

Various CM tools are used to track the versions of all components.

1. ClearCase
IBM Rational ClearCase provides life cycle management and control of software development assets. With integrated version control, automated workspace management, parallel development support, baseline management, and build and release management, Rational ClearCase provides the capabilities needed to create, update, build, deliver, reuse and maintain business-critical assets.

2. Visual SourceSafe (VSS)
SourceSafe provides true project-level configuration control. SourceSafe also runs on many platforms, so it can be used for a client/server project where coding is being done on a Windows PC using Visual Basic and on a UNIX workstation using C.

CHAPTER 13: Defect Tracking and Defect Life Cycle

13.1 Introduction

A software bug is an error, flaw, mistake, failure, or fault in a program or system that produces an incorrect result. A bug can be defined as an error in a program's code or a malfunction in a program's code; it can also be defined as abnormal behavior of the software. No software exists without bugs. The elimination of bugs from software depends upon the efficiency of the testing done on the software. A bug is a specific concern about the quality of the application under test. Bugs arise from mistakes and errors made by people in either a program's source code or its design, and a few are caused by compilers producing incorrect code.

13.2 Objectives

The main objective of finding a defect is to fix it. A defect goes through various cycles, and the objectives of finding a defect are to understand its cause, the way to correct it, its frequency, and the impact and risk associated with it. Other objectives include fixing defects in the product and avoiding the same in future, and correcting defects to improve the quality of the work products. A defect can be defined as a deviation from the expected result, or the difference between the expected result and the actual result.

Types of computer bugs are:
Logic bugs
Syntax bugs
Arithmetic bugs
Resource bugs
Multi-threading programming bugs
Performance bugs

13.3 Why Do Faults Occur?

There are various reasons for the occurrence of faults; they may be due to:
Ambiguous or unclear requirements
Poor documentation
Lack of programming skills

Increased complexity, as we move from the era of 1-tier architecture to 2-tier architecture, multi-tier architecture, and now to satellite communication
Increased work pressure and tight deadlines

13.4 What Is a Bug Life Cycle?

The duration or time span between the first time a bug is found (status: "New") and closed successfully (status: "Closed"), rejected, postponed or deferred is called the Bug/Error Life Cycle. The defect life cycle comprises the different stages a defect passes through after it is identified:

New: when the defect is identified
Open: when the development team validates that it is a bug, the defect is opened
Assigned: when the development lead assigns a developer to fix the bug
Fixed: when the developer fixes the detected bug by appropriate code changes
Retest: when the test lead assigns a tester to verify the fix
Closed/Reopened: the tester retests the fix and updates the status of the bug accordingly

There are seven different life cycles that a bug can pass through.

[Figure: Bug/Defect Life Cycle diagram, showing the states New, Open, Assigned, Fixed, Pending Retest, Retest, Re-Open, Pending Reject, Rejected, Postponed, Deferred and Closed and the transitions between them]

Bug Life Cycle I
1. A tester finds a bug and reports it to the test lead.
2. The test lead verifies whether the bug is valid.
3. The test lead finds that the bug is not valid, and the bug is Rejected.

Bug Life Cycle II
1. A tester finds a bug and reports it to the test lead.
2. The test lead verifies whether the bug is valid.
3. The bug is verified and reported to the development team with the status New.
4. The development leader and team verify whether it is a valid bug. The bug is invalid and is marked with the status Pending Reject before being passed back to the testing team.
5. After getting a satisfactory reply from the development side, the test leader marks the bug as Rejected.

Bug Life Cycle III
1. A tester finds a bug and reports it to the test lead.
2. The test lead verifies whether the bug is valid.
3. The bug is verified and reported to the development team with the status New.
4. The development leader and team verify whether it is a valid bug. The bug is valid, and the development leader opens the bug and assigns a developer to it, marking the status as Assigned.
5. The developer solves the problem and marks the bug as Fixed, passing it back to the development leader.
6. The development leader changes the status of the bug to Pending Retest and passes it on to the testing team for retest.
7. The test leader changes the status of the bug to Retest and passes it to a tester for retest.
8. The tester retests the bug and it is working fine, so the tester closes the bug and marks it as Closed.

Bug Life Cycle IV
1. A tester finds a bug and reports it to the test lead.
2. The test lead verifies whether the bug is valid.
3. The bug is verified and reported to the development team with the status New.
4. The development leader and team verify whether it is a valid bug. The bug is valid, and the development leader opens the bug and assigns a developer to it, marking the status as Assigned.

5. The developer solves the problem and marks the bug as Fixed, passing it back to the development leader.
6. The development leader changes the status of the bug to Pending Retest and passes it on to the testing team for retest.
7. The test leader changes the status of the bug to Retest and passes it to a tester for retest.
8. The tester retests the bug and the same problem persists, so after confirmation from the test leader the tester reopens the bug and marks it with the status Reopen. The bug is then passed back to the development team for fixing.

Bug Life Cycle V
1. A tester finds a bug and reports it to the test lead.
2. The test lead verifies whether the bug is valid.
3. The bug is verified and reported to the development team with the status New.
4. The developer tries to verify that the bug is valid but fails to replicate the scenario that existed at the time of testing, and asks the testing team for help.
5. The tester also fails to regenerate the scenario in which the bug was found, and the developer rejects the bug, marking it Rejected.

Bug Life Cycle VI
After confirmation that the data or certain functionality is unavailable, the fix and retest of the bug are postponed indefinitely, and the bug is marked as Postponed.

Bug Life Cycle VII
If the bug is not important and can be, or needs to be, postponed, then it is given the status Deferred.

In this way, any bug that is found ends up with a status of Closed, Rejected, Deferred, or Postponed.

13.5 Bug Status Description

There are various stages in the bug life cycle; the status captions may vary depending on the bug tracking system being used.

1. New: When a bug is revealed for the first time, the software tester communicates it to the team leader (test lead) to confirm whether it is a valid bug. After getting confirmation from the test lead, the software tester logs the bug, and the status New is assigned to it.

2. Open: Once the developer starts working on the bug, he/she changes the status of the bug to Open to indicate work on finding a solution.
3. Deferred: If the bug is not related to the current build, cannot be fixed in this release, or is not important enough to fix immediately, then the project manager can set the bug status to Deferred.
4. Assigned: The Assigned to field is set by the project lead or the project manager, and the bug is assigned to a developer.
5. Resolved/Fixed: When the developer makes the necessary code changes and verifies them, he/she can change the bug status to Fixed, and the bug is passed to the testing team.
6. Pending Retest: After the bug is fixed, it is passed back to the testing team to be retested, and the status Pending Retest is assigned to it.
7. Retest: The testing team leader changes the status of the bug from Pending Retest to Retest and assigns it to a tester for retesting.
8. Could not reproduce: If the developer is not able to reproduce the bug by following the steps given in the bug report, the developer can mark the bug as CNR. QA then needs to check whether the bug is reproducible and can assign it back to the developer with detailed reproduction steps.
9. Need more information: If the developer is not clear about the reproduction steps provided by QA, he/she can mark the bug as Need more information. In this case QA needs to add detailed reproduction steps and assign the bug back to the developer for a fix.
10. Reopen: If QA is not satisfied with the fix and the bug is still reproducible even after the fix, QA can mark it as Reopen so that the developer can take appropriate action.
11. Closed: If the bug is verified by the QA team, the fix is OK, and the problem is solved, then QA can mark the bug as Closed.
12. Rejected/Invalid: Sometimes the developer or team lead can mark the bug as Rejected or Invalid if the system is working according to the specifications and the bug is just due to some misinterpretation.
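The life cycles above imply a set of legal status transitions. A hedged sketch in Python — the transition table below is inferred from the seven cycles described in this chapter, and real bug trackers differ in their exact states and rules:

```python
# Allowed next states per current state, inferred from Bug Life Cycles I-VII.
TRANSITIONS = {
    "New":            {"Open", "Pending Reject", "Rejected"},
    "Open":           {"Assigned"},
    "Pending Reject": {"Rejected"},
    "Assigned":       {"Fixed"},
    "Fixed":          {"Pending Retest"},
    "Pending Retest": {"Retest"},
    "Retest":         {"Closed", "Reopen"},
    "Reopen":         {"Assigned"},
}

def can_transition(current: str, nxt: str) -> bool:
    """True if moving a bug from `current` to `nxt` is a legal step."""
    return nxt in TRANSITIONS.get(current, set())

# Cycle III: New -> Open -> Assigned -> Fixed -> Pending Retest -> Retest -> Closed
assert can_transition("Fixed", "Pending Retest")
assert can_transition("Retest", "Reopen")        # cycle IV: problem persists
assert not can_transition("Closed", "Fixed")     # terminal states have no exits here
```

Encoding the rules as data makes it easy to reject bogus status changes in a tracking tool, and to extend the table with states such as Deferred or Postponed.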

13.6 Severity: How Serious Is the Defect?

Severity | Description | Criteria
1 | Show Stopper | Inability to install/uninstall the product; the product doesn't start; the product hangs or the operating system freezes; no workaround is available; data corruption; the product terminates abnormally
2 | High | A workaround is available; a function is not working according to specifications; severe performance degradation; critical to the customer
3 | Medium | Incorrect error messages; incorrect data; noticeable performance inefficiencies
4 | Low | Enhancements; cosmetic flaws

13.7 Priority: How to Decide Priority?

Priority | Description | Criteria
1 | Critical | Needs an immediate fix; blocks further testing
2 | High/Major | Must be fixed before the product is released
3 | Medium/Average | Should be fixed if time permits
4 | Low/Minor | Would be nice to fix, but the product can be released as is
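Severity and priority are independent axes: for instance, a cosmetic typo on the product's start page could be severity 4 (Low) yet priority 2 (must fix before release). A small lookup sketch of the two tables above, for illustration only:

```python
# Labels copied from the severity and priority tables above.
SEVERITY = {1: "Show Stopper", 2: "High", 3: "Medium", 4: "Low"}
PRIORITY = {1: "Critical", 2: "High/Major", 3: "Medium/Average", 4: "Low/Minor"}

def label_defect(severity: int, priority: int) -> str:
    """Combine the two independent ratings into one human-readable label."""
    return (f"severity {severity} ({SEVERITY[severity]}), "
            f"priority {priority} ({PRIORITY[priority]})")

# A cosmetic flaw that must still ship fixed:
assert label_defect(4, 2) == "severity 4 (Low), priority 2 (High/Major)"
```

Keeping the two scales separate lets the tester judge technical impact (severity) while the project manager judges scheduling (priority).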

13.8 Defect Tracking

Defect tracking is important in software engineering, as complex software systems typically have hundreds or thousands of defects, and managing, evaluating, and prioritizing these defects is a difficult task. The process of monitoring defects from the time they are recorded until a satisfactory resolution has been determined is called defect tracking. Defect tracking systems are computer database systems that store defects and help people manage them.

13.9 Defect Prevention

Discovering and removing defects is an expensive and inefficient process. It is much more efficient for an organization to conduct activities that prevent defects. The objective of defect prevention is to identify defects and take corrective action to ensure they are not repeated over subsequent iterative cycles. While defect prevention is much more effective and efficient in reducing the number of defects, most organizations conduct defect discovery and removal instead. Defect prevention can be implemented by preparing an action plan to minimize or eliminate defects, generating defect metrics, defining corrective actions, and producing an analysis of the root causes of the defects.
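The "defect metrics" step above can be as simple as counting defects by root cause so that recurring causes stand out as candidates for corrective action. A minimal sketch with an invented defect log (in practice the records would come from the defect tracking system):

```python
from collections import Counter

# Hypothetical defect log; fields and values are made up for illustration.
defects = [
    {"id": "D-1", "root_cause": "unclear requirement"},
    {"id": "D-2", "root_cause": "coding error"},
    {"id": "D-3", "root_cause": "unclear requirement"},
    {"id": "D-4", "root_cause": "missing test environment"},
]

by_cause = Counter(d["root_cause"] for d in defects)
# The most common root cause is the first candidate for a prevention action.
worst_cause, count = by_cause.most_common(1)[0]
assert worst_cause == "unclear requirement" and count == 2
```

Even this crude tally supports root-cause analysis: if unclear requirements dominate, the corrective action belongs in the requirements process, not in more testing.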

13.10 Defect Report

A sample defect report is shown in the figure below. The summary and description are the most important parts of a defect report.

[Figure: Sample Defect Report]

To track defects, a defect workflow process has been implemented, and defect workflow training will be conducted for all test engineers. The steps in the defect workflow process are as follows:

1. When a defect is generated initially, the status is set to "New". Note: How to document the defect, what fields need to be filled in, and so on, also need to be specified.