BASICS OF SOFTWARE TESTING AND QUALITY ASSURANCE Yvonne Enselman, CTAL
Information aligns with the ISTQB Syllabus and Glossary THE TEST PYRAMID
FUNDAMENTALS OF TESTING Why testing is necessary What is testing Seven testing principles Fundamental test process Psychology of testing Code of ethics
WHY IS TESTING NECESSARY A defect can lead to harm to a company, a person, or the environment. There are differences between the root causes of defects and their effects. What are examples of testing being necessary? Testing is part of quality assurance, and solid testing enhances overall quality. The common terms error, defect, fault, and failure correspond to mistake and bug.
TERMS Bug Defect Error Fails (false-fail, false-positive) Failure Fault Mistake Passed (false-negative, false-pass) Quality Risk
SOFTWARE SYSTEMS CONTEXT The human and other causes of software defects Programmer Business Analyst Technical Writer System Architect QA Analyst Business Stakeholder
CAUSES OF SOFTWARE DEFECTS Errors (mistakes) are made by humans. They cause defects (faults, bugs). If a defect is not fixed, it can cause a failure. Defects in software, systems, or documents may result in failures, but not all defects do so. Defects occur because human beings are fallible and because of time pressure, complex code, complexity of infrastructure, changing technologies, and/or many system interactions. Failures can be caused by environmental conditions as well.
MULTIPLICATIVE INCREASES IN COST The cost of fixing a defect increases at least 1:5 from the requirements phase to after release for simple systems and can be as high as 1:100 in complex systems. Requirements > Design > Code/Unit Test > Independent Test > After Release
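The escalation above can be sketched with illustrative multipliers. The specific numbers below are assumptions chosen only to show the 1:100 shape for a complex system, not figures from the syllabus:

```python
# Illustrative only: relative cost of fixing one defect, by the stage
# at which it is found. The multipliers are assumed for demonstration.
STAGE_MULTIPLIER = {
    "Requirements": 1,
    "Design": 5,
    "Code/Unit Test": 10,
    "Independent Test": 25,
    "After Release": 100,
}

def fix_cost(base_cost, stage):
    """Estimated cost of fixing a defect discovered at the given stage."""
    return base_cost * STAGE_MULTIPLIER[stage]

for stage in STAGE_MULTIPLIER:
    print(f"{stage}: {fix_cost(100, stage)}")
```

A defect costing 100 to fix during requirements work would, under these assumed ratios, cost 10,000 to fix after release.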
THE ROLE OF TESTING AND ITS EFFECT ON QUALITY Work products are produced by humans, and humans are inherently fallible. Testing is part of how the risk of failure is reduced. Testing does not change the quality of the system under test; testing DOES measure the system's quality. A properly designed set of tests should measure the quality of the system in terms of both functional and non-functional characteristics. Each project is also a learning opportunity that allows for improved quality if lessons are learned.
TESTING IS ESSENTIAL BUT NOT ENOUGH Testing should be integrated into a complete, team-wide, and software process-wide set of activities for quality assurance
How much testing is enough?
REDUCE RISK TO AN ACCEPTABLE LEVEL PRIOR TO RELEASING THE SOFTWARE TO CUSTOMERS AND USERS Selecting which conditions to cover is a fundamental problem of testing. Which tests give the greatest value: the most coverage, or coverage of the most important aspects? Sufficient coverage is achieved when what should be covered is balanced against the constraints of time and budget. Testing needs to provide sufficient information to the project and product stakeholders. In some cases testing is required by contractual or legal requirements or industry-specific standards.
WHAT IS TESTING There are common objectives of testing. There are different objectives of testing in different phases of the software lifecycle. Testing is different from debugging.
TERMS Confirmation testing Debugging Requirement Re-testing Review Test case Test control Test design specification Testing Test objective
Debugging: The process of finding, analyzing, and removing the causes of failures in software. Requirement: A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. Review: An evaluation of a project or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough. Test case: A set of input values, execution preconditions, expected results, and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. Testing: The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation, and evaluation of software products and related work products, to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose, and to detect defects. Test objective: A reason or purpose for designing and executing a test.
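The test case definition above can be made concrete with a minimal sketch. The `Account` class and its `withdraw` method are hypothetical, invented here only to show each part of a test case:

```python
# A minimal illustration of a test case: input values, an execution
# precondition, an expected result, and an execution postcondition.
# Account and withdraw are hypothetical examples, not a real API.

class Account:
    def __init__(self, balance):
        self.balance = balance

    def withdraw(self, amount):
        if amount > self.balance:
            raise ValueError("insufficient funds")
        self.balance -= amount
        return self.balance

def test_withdraw_reduces_balance():
    account = Account(balance=100)   # execution precondition: funded account
    result = account.withdraw(30)    # input value
    assert result == 70              # expected result
    assert account.balance == 70     # execution postcondition
```

Each line of the test maps directly onto one element of the glossary definition, which is what makes a test case more than just "some code that runs the program".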
TEST PLANNING Test activities exist before and after test execution. These activities include planning and control, choosing test conditions, designing and executing test cases, checking results, evaluating exit criteria, reporting on the testing process and system under test, and finalizing or completing closure activities after a test phase has been completed. Testing also includes reviewing documents (including source code) and conducting static analysis. Testing can have the following objectives: Finding defects Gaining confidence about the level of quality Providing information for decision-making Preventing defects Debugging and testing are different. Dynamic testing can show failures that are caused by defects. Debugging is the development activity that finds, analyzes, and removes the cause of the failure. Subsequent re-testing by a tester ensures that the fix does indeed resolve the failure. Typically, testers test and developers debug.
TEST DESIGN TECHNIQUES Black-box test design technique: Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure. Experience-based test design technique: Procedure to derive and/or select test cases based on the tester's experience, knowledge, and intuition. Functional test design technique: Procedure to derive and/or select test cases based on an analysis of the specification of the functionality of a component or system without reference to its internal structure. Non-functional test design technique: Procedure to derive and/or select test cases for non-functional testing based on an analysis of the specification of a component or system without reference to its internal structure. White-box test design technique: Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.
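A short sketch of the black-box idea, using boundary value analysis (a common black-box technique). The `is_adult` function and its "age >= 18" specification are assumptions for illustration; the point is that the tests are derived from the specification alone, never from the code's internals:

```python
# Black-box sketch: boundary value analysis for a hypothetical
# is_adult(age) function specified as "age 18 or over is an adult".
# Test values come from the spec's partition boundary, not the code.

def is_adult(age):
    return age >= 18

# Boundary values around the 17/18 partition edge:
assert is_adult(17) is False   # just below the boundary
assert is_adult(18) is True    # on the boundary
assert is_adult(19) is True    # just above the boundary
```

A white-box technique, by contrast, would pick inputs to exercise each branch of the implementation itself (here, both outcomes of the `>=` comparison).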
SEVEN TESTING PRINCIPLES These principles, while not always understood or noticed, are in action on most if not all projects.
TESTING SHOWS THE PRESENCE OF DEFECTS Testing shows presence of defects: Testing can show that defects are present, but cannot prove there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.
EXHAUSTIVE TESTING IS IMPOSSIBLE Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts.
EARLY TESTING To find defects early, testing activities shall be started as early as possible in the software or system development lifecycle.
DEFECT CLUSTERING Testing effort shall be focused proportionally to the expected and later observed defect density of modules.
PESTICIDE PARADOX If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this pesticide paradox, test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software to find potentially more defects.
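One way to keep tests from going stale is to vary the test data on each run instead of replaying the same fixed inputs. A minimal sketch, assuming a hypothetical `normalize` function under test; the seed keeps any failure reproducible:

```python
import random

# Countering the pesticide paradox: generate fresh inputs each run
# and check properties that must hold for ANY input, rather than
# hard-coding one fixed example forever. normalize() is hypothetical.

def normalize(text):
    return text.strip().lower()

def test_normalize_with_varied_data(seed=42):
    rng = random.Random(seed)          # record the seed to reproduce failures
    alphabet = "ABCdef  _-"
    for _ in range(100):
        word = "".join(rng.choice(alphabet) for _ in range(10))
        result = normalize(word)
        assert result == result.lower()      # output is fully lowercased
        assert normalize(result) == result   # normalizing is idempotent
```

Property-style checks like these exercise different inputs every time the data changes, which is exactly the "new and different tests" the principle calls for.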
TESTING IS CONTEXT DEPENDENT Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.
ABSENCE OF ERRORS FALLACY Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations.
FUNDAMENTAL TEST PROCESSES Coverage Exit criteria Incident Regression testing Test approach Test basis Test condition Test data Test execution Test log Test monitoring Test plan Test procedure Test suite Test summary report Testware
THE FUNDAMENTAL TEST PROCESS The most visible part of testing is test execution, but to be effective and efficient, test plans should also include time to be spent planning the tests, designing test cases, preparing for execution, and evaluating results. The fundamental test process consists of the following main activities: Test planning and control Test analysis and design Test implementation and execution Evaluating exit criteria and reporting Test closure activities
TEST PLANNING AND CONTROL Test control: A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned. Test plan: A document describing the scope, approach, resources, and schedule of intended test activities. It identifies, amongst others, the test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.
ANALYSIS AND DESIGN Test analysis: The process of analyzing the test basis and defining test objectives. Test design: The process of transforming general test objectives into tangible test conditions and test cases.
IMPLEMENTATION AND EXECUTION Test implementation: The process of developing and prioritizing test procedures, creating test data, and, optionally, preparing test harnesses and writing automated test scripts. Test execution: The process of running a test on the component or system under test, producing actual results.
EVALUATING EXIT CRITERIA AND REPORTING Exit criteria: The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing. Test evaluation report: A document produced at the end of the test process summarizing all testing activities and results. It also contains an evaluation of the test process and lessons learned. Test reporting: Collecting and analyzing data from testing activities and subsequently consolidating the data in a report to inform stakeholders.
TEST CLOSURE ACTIVITIES During the test closure phase of a test process, data is collected from completed activities to consolidate experience, testware, facts, and numbers. The test closure phase consists of finalizing and archiving the testware and evaluating the test process, including preparation of a test evaluation report.
THE PSYCHOLOGY OF TESTING The mindset to be used while testing and reviewing is different from that used while developing software. With the right mindset developers are able to test their own code, but separation of this responsibility to a tester is typically done to help focus effort and provide additional benefits, such as an independent view by trained and professional testing resources. Independent testing may be carried out at any level of testing.
DEGREES OF INDEPENDENCE Tests designed by the person who wrote the software under test (low level of independence) Tests designed by another person (e.g., from the development team) Tests designed by a person from a different organizational group (e.g., an independent test team) or test specialists (e.g., usability or performance test specialists) Tests designed by a person from a different organization or company (i.e., outsourcing or certification by an external body)
TRAITS OF A GOOD TESTER Curiosity Professional pessimism A critical eye Attention to detail Experience Good communication skills
DON'T FORGET YOUR SESSION SURVEYS Sign in to the Online Session Guide (www.common.org/sessions) Go to your personal schedule Click on the session that you attended Click on the Feedback Survey button located above the abstract Completing session surveys helps us plan future programming and provides feedback used in speaker awards. Thank you for your participation.
Thank You!