2 INTRODUCTION Software testing is the process used to identify the correctness, completeness, and quality of developed computer software. It is the process of executing a program or application, under both positive and negative conditions, by manual or automated means. It checks conformance to the specification, correct functionality, and adequate performance.

3 OBJECTIVES Uncover as many errors (or bugs) as possible in a given product. Demonstrate that a given software product matches its requirement specifications. Validate the quality of software testing using minimum cost and effort. Generate high-quality test cases, perform effective tests, and issue correct and helpful problem reports.

4 Error, Bug, Fault & Failure Error: a human action that produces an incorrect result and thereby introduces a fault. Bug: the manifestation of an error during execution of the software. Fault: the state of the software caused by an error. Failure: the deviation of the software from its expected result; it is an event.

5 Test Plan A test plan is a systematic approach to testing a system, i.e. the software. The plan typically contains a detailed understanding of what the eventual testing workflow will be.

6 Test Case A test case is a specific procedure for testing a particular requirement. It includes: identification of the specific requirement tested, the test case's success/failure criteria, the specific steps to execute the test, and the test data. A minimal sketch follows below.
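As an illustration (not from the slides), here is how such a test case might be written in Python with pytest. The login function, the requirement ID REQ-042, and the test data are all hypothetical:

```python
# Run with pytest, e.g. `pytest test_login.py`.

# Hypothetical function under test, standing in for real application code.
def login(username: str, password: str) -> bool:
    return username == "alice" and password == "s3cret"

def test_req_042_valid_login():
    """Requirement tested: REQ-042 (hypothetical) - valid users can log in.
    Success criterion: login() returns True for valid credentials.
    Failure criterion: login() returns False or raises an exception.
    """
    # Step 1: prepare the test data (illustrative values).
    username, password = "alice", "s3cret"
    # Step 2: execute the behavior under test.
    result = login(username, password)
    # Step 3: compare the actual output against the expected result.
    assert result is True
```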

7 Testing Methodologies Black box testing White box testing

8 Black box testing No knowledge of the internal program design or code is required. Tests are based on requirements and functionality. White box testing Knowledge of the internal program design and code is required. Tests are based on coverage of code statements, branches, paths, and conditions. A short sketch contrasting the two follows the diagrams below.

9 Black box testing (diagram): inputs and events are fed into the system, and the resulting outputs are checked against the requirements.

10 White box testing (diagram): the tests are derived from the component code itself; test data drives the tests, and the test outputs are checked.
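As an illustration (not from the slides), consider a small Python function specified to return the absolute value of its input. The black-box test is written from that specification alone; the white-box test is written after reading the code, so that every branch is covered:

```python
def absolute(x: int) -> int:
    # Component under test: specified to return |x|.
    if x < 0:
        return -x
    return x

# Black-box: derived from the specification ("returns |x|") without
# looking at the implementation.
def test_black_box_from_spec():
    assert absolute(5) == 5
    assert absolute(-5) == 5
    assert absolute(0) == 0

# White-box: derived by reading the code and choosing inputs so that
# both the "x < 0" branch and the fall-through branch are executed.
def test_white_box_branch_coverage():
    assert absolute(-1) == 1   # exercises the x < 0 branch
    assert absolute(2) == 2    # exercises the fall-through branch
```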

11 Testing Levels Unit testing Integration testing System testing

12 UNIT TESTING Tests each module individually. Follows a white box approach (exercises the internal logic of the program). Done by developers.

13 INTEGRATION TESTING Once all the modules have been unit tested, integration testing is performed. It is a systematic technique that produces tests to identify errors associated with interfacing. Types: top-down integration testing, bottom-up integration testing, and sandwich integration testing.

14 Approaches to integration testing Top-down testing: start with the high-level system and integrate from the top down, replacing individual components with stubs where appropriate. Bottom-up testing: integrate individual components in levels until the complete system is created. In practice, most integration involves a combination of these strategies. A stub sketch follows below.
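As a sketch of top-down integration (all names are illustrative), a high-level OrderService can be tested before the real payment component exists by replacing that component with a stub:

```python
class PaymentGateway:
    """Lower-level component, not yet integrated at this stage."""
    def charge(self, amount: float) -> bool:
        raise NotImplementedError

class PaymentGatewayStub(PaymentGateway):
    """Stub: returns a canned answer so the high-level logic can be
    exercised before the real gateway is available."""
    def charge(self, amount: float) -> bool:
        return True  # pretend every charge succeeds

class OrderService:
    """High-level component under test."""
    def __init__(self, gateway: PaymentGateway):
        self.gateway = gateway

    def place_order(self, amount: float) -> str:
        return "confirmed" if self.gateway.charge(amount) else "rejected"

def test_place_order_with_stubbed_payment():
    service = OrderService(PaymentGatewayStub())
    assert service.place_order(9.99) == "confirmed"
```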

15 Q: For which types of system is bottom-up testing appropriate, and why? Answer: 1. Object-oriented systems, because their neat decomposition into classes and methods makes testing easy. 2. Systems with strict performance requirements, because we can measure the performance of individual methods early in the testing process.

16 Top-down testing (diagram): the testing sequence starts with the level-1 components and works downward; level-2 components are initially replaced by stubs, as are the level-3 components below them.

17 Bottom-up testing (diagram): test drivers exercise the level-N components first; the testing sequence then moves up to the level N-1 components, which are exercised by their own test drivers. A driver sketch follows below.
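A minimal test-driver sketch (illustrative names): the low-level component is called directly by a driver that plays the role of the not-yet-written higher levels:

```python
def parse_record(line: str) -> dict:
    """Low-level (level N) component under test."""
    name, value = line.split(",")
    return {"name": name.strip(), "value": int(value)}

def driver():
    """Test driver: feeds inputs to the low-level component and checks
    the outputs, standing in for the higher-level callers."""
    cases = [
        ("temp, 21", {"name": "temp", "value": 21}),
        ("load, 7", {"name": "load", "value": 7}),
    ]
    for line, expected in cases:
        actual = parse_record(line)
        assert actual == expected, f"{line!r}: got {actual}, want {expected}"
    print("all driver checks passed")

if __name__ == "__main__":
    driver()
```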

18 Approaches to Integration Sandwich integration: a compromise between bottom-up and top-down testing. Begin bottom-up and top-down testing simultaneously and meet at a predetermined point in the middle.

19 SYSTEM TESTING The system as a whole is tested to uncover requirement errors. Verifies that all system elements work properly and that the overall system function and performance have been achieved. Types: alpha testing, beta testing, acceptance testing, and performance testing.

20 Alpha Testing It is carried out by the test team within the developing organization. Beta Testing It is performed by a selected group of friendly customers. Acceptance Testing It is performed by the customer to determine whether to accept or reject the delivery of the system. Performance Testing It is carried out to check whether the system meets the nonfunctional requirements identified in the SRS document. A sketch of a simple performance check follows below.
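As a sketch of a simple performance check (the 200 ms response-time budget is an assumed nonfunctional requirement, not one from the slides):

```python
import time

def handle_request() -> str:
    """Stand-in for the operation whose response time is specified."""
    time.sleep(0.01)  # simulate some work
    return "ok"

def test_response_time_budget():
    # Assumed nonfunctional requirement: a request completes within 200 ms.
    start = time.perf_counter()
    result = handle_request()
    elapsed = time.perf_counter() - start
    assert result == "ok"
    assert elapsed < 0.200, f"took {elapsed:.3f}s, budget is 0.200s"
```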

21 Types of Performance Testing: regression testing, recovery testing, security testing, stress testing, volume testing, configuration testing, compatibility testing, maintenance testing, documentation testing, and usability testing.

22 Regression Testing Each new addition or change to baselined software may cause problems with functions that previously worked flawlessly. Regression testing re-executes a small subset of the tests that have already been conducted, ensuring that changes have not propagated unintended side effects and helping to ensure that changes do not introduce unintended behavior or additional errors. It may be done manually or through the use of automated capture/playback tools. A regression test suite contains three different classes of test cases: a representative sample of tests that will exercise all software functions; additional tests that focus on software functions that are likely to be affected by the change; and tests that focus on the actual software components that have been changed. A selection sketch follows below.
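A minimal sketch of how those three classes might drive test selection, assuming each test is tagged with the components it exercises (all names and tags are illustrative):

```python
# Hypothetical registry: test name -> components it exercises.
TEST_SUITE = {
    "test_login":    {"auth"},
    "test_checkout": {"cart", "payment"},
    "test_search":   {"catalog"},
    "test_refund":   {"payment"},
}

# Class 1: a representative sample exercising all software functions.
REPRESENTATIVE_SAMPLE = {"test_login", "test_search", "test_checkout"}

def select_regression_tests(changed_components: set) -> set:
    """Classes 2 and 3: add every test that touches a changed (and
    therefore likely-affected) component, then union with the sample."""
    affected = {name for name, components in TEST_SUITE.items()
                if components & changed_components}
    return REPRESENTATIVE_SAMPLE | affected

# After a change to the payment component, re-run:
print(sorted(select_regression_tests({"payment"})))
# ['test_checkout', 'test_login', 'test_refund', 'test_search']
```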

23 Different Types Recovery testing Tests for recovery from system faults. Forces the software to fail in a variety of ways and verifies that recovery is properly performed. Tests reinitialization, checkpointing mechanisms, data recovery, and restart for correctness. Security testing Verifies that protection mechanisms built into a system will, in fact, protect it from improper access. Stress testing Executes a system in a manner that demands resources in abnormal quantity, frequency, or volume. Volume testing A non-functional test, performed as part of performance testing, in which the software is subjected to a huge volume of data; it is also referred to as flood testing. A volume-test sketch follows below.
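As a toy volume-test sketch (the one-million-record figure is illustrative), the component is fed far more data than normal and its behavior is checked to still be correct:

```python
def deduplicate(records: list) -> list:
    """Component under test: removes duplicates while preserving order."""
    seen, out = set(), []
    for record in records:
        if record not in seen:
            seen.add(record)
            out.append(record)
    return out

def test_volume_one_million_records():
    # Subject the component to a huge volume of data.
    records = [f"id-{i % 1000}" for i in range(1_000_000)]
    result = deduplicate(records)
    assert len(result) == 1000   # duplicates removed
    assert result[0] == "id-0"   # order preserved
```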

24 Different Types Configuration testing Configuration testing is the process of testing the system with each of the supported software and hardware configurations. The Execution area supports configuration testing by allowing reuse of the created tests. Compatibility testing Compatibility testing is a non-functional test conducted on the application to evaluate its compatibility within different environments.

25 Different Types Maintenance testing A test performed to identify equipment problems, diagnose equipment problems, or confirm that repair measures have been effective. Documentation testing Documentation is any written or pictorial information describing, defining, specifying, reporting, or certifying activities, requirements, procedures, or results. Documentation is as important to a product's success as the product itself; if it is poor, non-existent, or wrong, it reflects on the quality of the product and the vendor. Usability testing A technique used in user-centered interaction design to evaluate a product by testing it on users. It can be seen as an irreplaceable usability practice, since it gives direct input on how real users use the system.

26 Debugging Process Debugging occurs as a consequence of successful testing. It is still very much an art rather than a science; good debugging ability may be an innate human trait, and large variances in debugging ability exist. The debugging process begins with the execution of a test case. Results are assessed, and a difference between expected and actual performance is encountered. This difference is a symptom of an underlying cause that lies hidden. The debugging process attempts to match symptom with cause, thereby leading to error correction.

27 Why is Debugging so Difficult? The symptom and the cause may be geographically remote. The symptom may disappear (temporarily) when another error is corrected. The symptom may actually be caused by non-errors (e.g., round-off inaccuracies). The symptom may be caused by human error that is not easily traced.

28 Debugging Strategies The objective of debugging is to find and correct the cause of a software error. Bugs are found by a combination of systematic evaluation, intuition, and luck. Debugging methods and tools are not a substitute for careful evaluation based on a complete design model and clear source code. There are three main debugging strategies: brute force, backtracking, and cause elimination.

29 Strategy #1: Brute Force The most commonly used and least efficient method; used when all else fails. Involves the use of memory dumps, run-time traces, and output statements. Often leads to wasted effort and time. A trace sketch follows below.
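A brute-force sketch: instrument the code with run-time traces and output statements to see where the state goes wrong (the buggy average function is illustrative):

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")

def average(values):
    total = 0
    for v in values:
        total += v
        # Brute-force instrumentation: trace the intermediate state.
        logging.debug("after adding %r, total=%r", v, total)
    result = total / (len(values) - 1)  # bug: should divide by len(values)
    logging.debug("returning %r for %r", result, values)
    return result

print(average([2, 4, 6]))  # prints 6.0 instead of 4.0; the trace shows
                           # total=12 is correct, so the divisor is suspect
```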

30 Strategy #2: Backtracking Can be used successfully in small programs. The method starts at the location where a symptom has been uncovered; the source code is then traced backward (manually) until the location of the cause is found. In large programs, the number of potential backward paths may become unmanageably large.

31 Strategy #3: Cause Elimination Involves the use of induction or deduction and introduces the concept of binary partitioning. Induction (specific to general): prove that a specific starting value is true, then prove the general case is true. Deduction (general to specific): show that a specific conclusion follows from a set of general premises. Data related to the error occurrence are organized to isolate potential causes. A cause hypothesis is devised, and the aforementioned data are used to prove or disprove the hypothesis. Alternatively, a list of all possible causes is developed, and tests are conducted to eliminate each cause. If initial tests indicate that a particular cause hypothesis shows promise, the data are refined in an attempt to isolate the bug. A binary-partitioning sketch follows below.
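A sketch of binary partitioning over a sequence of changes, in the spirit of tools such as git bisect (the is_buggy oracle and the change list are illustrative, and the oracle is assumed to be monotonic: once a change is bad, all later ones are too):

```python
def find_first_bad(changes, is_buggy):
    """Binary partitioning: repeatedly halve the candidate range,
    keeping the half that still contains the first buggy change."""
    lo, hi = 0, len(changes) - 1  # invariant: first bad change is in [lo, hi]
    while lo < hi:
        mid = (lo + hi) // 2
        if is_buggy(changes[mid]):
            hi = mid              # bug was introduced at mid or earlier
        else:
            lo = mid + 1          # bug was introduced after mid
    return changes[lo]

# Illustrative oracle: changes 0-5 are good, change 6 onward shows the failure.
changes = list(range(10))
print(find_first_bad(changes, lambda c: c >= 6))  # -> 6
```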

32 DISCUSSION In order to be cost effective, testing must be concentrated on areas where it will be most effective. Testing should be planned such that, when testing is stopped for whatever reason, the most effective testing possible in the time allotted has already been done. The absence of an organizational testing policy may result in too much effort and money being spent on testing, in an attempt to achieve a level of quality that is impossible or unnecessary.

33 THANK YOU