Risk Based Testing Pragmatic Risk Analysis and Management


1 Risk Based Testing Pragmatic Risk Analysis and Management

2 Risk Based Testing Pragmatic Risk Analysis and Management What is Risk Based Testing?

3 What is Risk Based Testing?
- Risk: the possibility of a negative outcome
- Quality risk: the possibility that the system might not deliver key quality attributes, endangering quality outcomes
- Risk based testing uses an analysis of quality risks to prioritize tests and allocate testing effort
- Risk based testing involves key business and technical project stakeholders
- Risk based testing also means managing project risks
Copyright (c) RBCS Page 3

4 What Are the Benefits?
Risk based testing offers the following benefits:
- Running the tests in risk order gives the highest likelihood of discovering defects in severity order ("find the scary stuff first")
- Allocating test effort based on risk is the most efficient way to minimize the residual quality risk upon release ("pick the right tests out of the infinite cloud of possible tests")
- Measuring test results based on risk allows the organization to know the residual level of quality risk during test execution, and to make smart release decisions ("release when the risk of delay balances the risk of dissatisfaction")
- If the schedule requires, dropping tests in reverse risk order reduces the test execution period with the least possible increase in quality risk ("give up the tests you worry about the least")
All of these benefits allow the test team to operate more efficiently and in a targeted fashion, especially in time-constrained and/or resource-constrained situations

5 The Foundation of Risk Based Testing
Quality risk analysis is the foundation of risk based testing. It includes three major elements:
- Identifying the quality risks
- Assessing the level of risk associated with each quality risk
- Capturing the quality risk information
A thorough, accurate quality risk analysis leads to thorough, well-aligned, properly sequenced risk based testing. Let's look at these activities more closely.

6 Identification of Risk Items
Three main techniques:
- In Agile lifecycles, risk analysis happens during iteration planning, after story grooming and before estimation
- For traditional lifecycles such as waterfall or RUP, all stakeholder groups can be represented in a brainstorming session
- Alternatively, the lead analyst in charge of the risk analysis (usually a test lead or manager) interviews each stakeholder group representative
Any of these techniques can work, given stakeholder engagement

7 Generic Quality Risk Categories (Part 1)
- Competitive Inferiority: Failures to match competing systems in quality.
- Data Quality: Failures in processing, storing, or retrieving data.
- Date and Time Handling: Failures in date-based and/or time-based inputs/outputs, calculations, and event handling.
- Disaster Handling and Recovery: Failure to degrade gracefully in the face of catastrophic incidents and/or failure to recover properly from such incidents.
- Documentation: Failures in operating instructions for users or system administrators.
- Error Handling and Recovery: Failures due to beyond-peak or illegal conditions (i.e., knock-on effects of deliberately inflicted errors).
- Functionality: Failures that cause specific features not to work.

8 Generic Quality Risk Categories (Part 2)
- Installation, Setup, Upgrade, and Migration: Failures that prevent or impede deploying the system or migrating data to new versions, including unwanted side-effects (e.g., installation of additional, unwelcome, unintended software such as spyware or malware).
- Interoperability: Failures that occur when major components, subsystems, or related systems interact.
- Load, Capacity, and Volume: Failures in scaling of the system to expected peak concurrent usage levels.
- Localization: Failures in specific localities, including languages, messages, taxes and finances, operational issues, and time zones.
- Networked and Distributed: Failures to handle networked/distributed operation, including latency, delays, lost packets/connectivity, and unavailable resources.

9 Generic Quality Risk Categories (Part 3)
- Operations and Maintenance: Failures that endanger continuing operation, including backup/restore processes.
- Packaging/Fulfillment: Failures associated with the packaging and/or delivery of the system or product.
- Performance: Failures to perform as required under expected loads.
- Portability, Configuration, and Compatibility: Failures specific to different supported platforms, supported configurations, configuration problems, and/or cohabitation with other software/systems.
- Reliability, Availability, and Stability: Failures to meet reasonable expectations of availability and mean-time-between-failure.
- Security/Privacy: Failures to protect the system and secured data from fraudulent or malicious misuse.

10 Generic Quality Risk Categories (Part 4)
- Standards Compliance: Failure to conform to mandatory standards, company standards, and/or applicable voluntary standards.
- States and Transactions: Failure to properly respond to sequences of events or to particular transactions.
- User Interface and Usability: Failures in human factors, especially at the user interface.

11 Capturing Risk Items
- During brainstorming sessions or interviews, capture risk items on whiteboards, flipcharts, or notepads
- Afterwards, transfer them to a document
- As you capture risk items, deal with duplicate, overlapping, and miscategorized risk items
- If risk items arose from user stories or specifications, capture traceability information
- Capture the risks in a negative sense (i.e., phrase each as a potential failure)
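The captured document can be as simple as a spreadsheet with one row per risk item. As a minimal sketch, the following writes hypothetical risk items, worded in the negative sense and traced back to the user stories they arose from, to CSV; the field names and story IDs are illustrative, not a prescribed template:

```python
import csv
import io

# Hypothetical risk items captured after a brainstorming session.
# Each is phrased negatively (as a potential failure) and carries a
# traceability reference to the user story it arose from.
risk_items = [
    {"id": "R1", "category": "Performance",
     "risk": "Search responses exceed acceptable time under peak load",
     "trace": "US-104"},
    {"id": "R2", "category": "Security/Privacy",
     "risk": "Session data is visible to other logged-in users",
     "trace": "US-112"},
]

def write_risk_register(items, stream):
    """Write risk items to a CSV stream, one row per risk item."""
    writer = csv.DictWriter(stream,
                            fieldnames=["id", "category", "risk", "trace"])
    writer.writeheader()
    writer.writerows(items)

buf = io.StringIO()
write_risk_register(risk_items, buf)
print(buf.getvalue())
```

In practice the same sheet also gets columns for likelihood, impact, and RPN once assessment (covered below) has been done.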

12 By-Products
Quality risk analysis typically produces three valuable by-products, which you should capture:
- Project risks
- Input document defects
- Assumptions, issues, and questions
Capture these in the same spreadsheet as the quality risks for completeness

13 Risk Based Testing Pragmatic Risk Analysis and Management What Is Risk Based Testing? Exercise 1: Identifying quality risks

14 Exercise: Identifying Quality Risks
- For a product you are currently working on (or have recently worked on), identify three to five quality risks
- Use the quality risk categories or the quality characteristics as a checklist to help identify the risk items
- Discuss your risk items and their categories or characteristics with the class

15 Assessing Risk Items
For each risk item, assess two factors:
- Likelihood: upon delivery for testing, how likely is the system to contain one or more bugs related to the risk item?
- Impact: if such bugs were not detected in testing and were delivered into production, how bad would the impact be?
Likelihood arises primarily from technical considerations, such as the programming languages used, the bandwidth of connections, and so forth. Impact arises from business considerations, such as the financial loss the business will suffer, the number of users or customers affected, and so forth.

16 Likelihood Scale
- 1 Very likely: Almost certain to happen. (You'd be shocked if it didn't happen.)
- 2 Likely: More likely to happen than not to happen. (You'd be surprised if it didn't happen.)
- 3 Somewhat likely: About even odds of happening. (No surprise one way or the other.)
- 4 Unlikely: Less likely to happen than not to happen. (You'd be surprised if it did happen.)
- 5 Very unlikely: Almost certain not to happen. (You'd be shocked if it did happen.)
These ratings, while qualitative, are relatively objective. Technical stakeholders should have little difficulty reaching agreement on the likelihood for any given risk item when discussing an existing system. (New systems based on unproven technologies are harder to assess.)

17 Impact Scale
- 1 Very high: Total disaster, with financial, reputational, security, and/or legal implications
- 2 High: Major short-term and long-term loss
- 3 Medium: Significant loss and damage
- 4 Low: Some loss and damage
- 5 Very low: Little to no loss or damage
These ratings, while based on clear criteria, are somewhat subjective. Business stakeholders might have difficulty reaching agreement on the impact for a given risk item. It is also important to think of the typical bug related to a risk item that could occur, rather than the most catastrophic bug that could occur. You should create specific criteria for each impact rating, along with examples of failures that fall into each rating.

18 Risk Priority Number
- The risk priority number (RPN) is calculated by multiplying the likelihood and the impact
- Using the scales shown earlier, the RPN ranges from 1 (highest risk) to 25 (lowest risk)
- The RPN measures the aggregate level of risk associated with each risk item
- During test design and implementation, testers design test cases to cover risk items
- Test cases inherit the RPN of the risk item from which they descend
- Risk based test sequencing uses the RPN to determine the order of execution of the test cases
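The RPN calculation and the resulting run order can be sketched in a few lines. This is a minimal illustration assuming the 1-5 scales shown earlier (1 = most likely / highest impact); the risk items and test names are hypothetical:

```python
# Hypothetical risk items with likelihood and impact ratings on the
# 1-5 scales (1 = most likely / highest impact).
risk_items = {
    "data-loss-on-upgrade":   {"likelihood": 2, "impact": 1},
    "slow-report-generation": {"likelihood": 1, "impact": 3},
    "help-text-typos":        {"likelihood": 3, "impact": 5},
}

def rpn(item):
    """RPN = likelihood x impact; 1 is the highest risk, 25 the lowest."""
    return item["likelihood"] * item["impact"]

# Each test case inherits the RPN of the risk item it covers.
tests = [
    ("test_upgrade_preserves_data", "data-loss-on-upgrade"),
    ("test_monthly_report_time",    "slow-report-generation"),
    ("test_help_spelling",          "help-text-typos"),
]

# Risk based sequencing: ascending RPN, i.e. riskiest tests first.
ordered = sorted(tests, key=lambda t: rpn(risk_items[t[1]]))
for name, item in ordered:
    print(name, rpn(risk_items[item]))
```

Here `test_upgrade_preserves_data` (RPN 2) runs before `test_monthly_report_time` (RPN 3), which runs before `test_help_spelling` (RPN 15).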

19 Risk Based Test Effort Allocation
- The RPN determines the allocation of test effort
- Allocation can vary, depending on project parameters, constraints, and priorities
- The next two slides show two alternative examples: quality-driven and schedule-driven
- Other variations are possible; the project management team may adopt either model

20 Quality Driven Test Effort Allocation
- Extensive (RPN 1-10): Run a large number of tests that are both broad and deep, exercising combinations and variations of interesting conditions.
- Broad: Run a medium number of tests that exercise many different interesting conditions.
- Cursory: Run a small number of tests that sample the most interesting conditions.
Note: This has the effect that all risk items have some tests associated with them.

21 Schedule Driven Test Effort Allocation
- Extensive (RPN 1-2): Run a large number of tests that are both broad and deep, exercising combinations and variations of interesting conditions.
- Broad (RPN 3-5): Run a medium number of tests that exercise many different interesting conditions.
- Cursory: Run a small number of tests that sample the most interesting conditions.
- Opportunity: Leverage other tests or activities to run a test or two of an interesting condition, but invest very little time and effort.
- Report bugs only: Do not test at all, but if bugs related to this risk arise during other tests, report those bugs.
- None: Neither test for these risks nor report bugs.
Note: This has the effect that some risk items have no tests associated with them.
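Mapping an RPN to an extent of testing is a simple threshold lookup. The sketch below assumes a schedule-driven model; only the Extensive and Broad cut-offs come from the slide above, while the remaining thresholds are assumptions for illustration and would be set per project:

```python
# Schedule-driven allocation: (upper RPN bound, extent of testing).
# Only the 2 (Extensive) and 5 (Broad) bounds come from the slide;
# the 10/15/20 bounds below are ASSUMED for illustration only.
ALLOCATION = [
    (2,  "Extensive"),
    (5,  "Broad"),
    (10, "Cursory"),          # assumed upper bound
    (15, "Opportunity"),      # assumed upper bound
    (20, "Report bugs only"), # assumed upper bound
    (25, "None"),
]

def extent_of_testing(rpn):
    """Map an RPN (1-25, lower = riskier) to an extent of testing."""
    for upper, extent in ALLOCATION:
        if rpn <= upper:
            return extent
    raise ValueError("RPN must be between 1 and 25")

print(extent_of_testing(4))   # Broad
print(extent_of_testing(25))  # None
```

Keeping the thresholds in one table makes it easy to switch between the quality-driven and schedule-driven models, or to tune the cut-offs as the schedule changes.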

22 Risk Based Testing Pragmatic Risk Analysis and Management What Is Risk Based Testing? Exercise 2: Assessing quality risks

23 Exercise: Assessing Quality Risks
- For the three to five quality risks identified in the previous exercise, rate each risk's likelihood and impact, using the given scales
- Be ready to explain and justify the ratings
- Multiply the two ratings together to obtain a risk priority number
- Discuss your risk items' likelihood, impact, and RPN ratings with the class

24 Risk Based Testing Pragmatic Risk Analysis and Management Risk Based Testing In the Project

25 Sequential SDLCs
Risk based testing fits into sequential software development lifecycles:
- Perform the initial quality risk analysis during requirements
- Create test cases during the design phase
- Adjust the risk analysis periodically
- Sequence tests, report results, and, if necessary, triage test cases
Risk based testing is responsive to the schedule/quality trade-offs inherent in sequential SDLCs

26 Iterative SDLCs
In iterative SDLCs, quality risk analysis is more iterative:
- Sequential: one large quality risk analysis effort during requirements
- Iterative: a high-level quality risk analysis, followed by iteration-focused analyses
Risk based testing aligns with the philosophy behind iterative lifecycles

27 Agile SDLCs
Agile SDLCs have short iterations and lightweight documentation:
- Repeated, focused quality risk analysis sessions for each iteration
- Risk analysis results can guide unit testing
Risk based testing aligns with the agile philosophies of change and documentation

28 Agile Quality Risk Analysis
Agile quality risk analysis process (during iteration planning):
- Gather the agile team
- List the iteration backlog items
- Identify functional and non-functional quality risks for each item
- Assess the identified risks: categorize each risk and determine its risk level
- Build consensus and ensure a good distribution of risk ratings
- Use the level of risk to choose the extent of testing
- Select appropriate test techniques for each risk item
Adjustments may occur during an iteration. Risk analysis may also detect opportunities for early defect removal (e.g., problems in user stories).

29 Handling Project Risks
In addition to using testing to manage quality risks, remember that the testing effort is itself subject to risk. To discover risks to the testing effort, ask yourself and other stakeholders:
- What could go wrong on the project that would delay or invalidate your test plan and/or estimate?
- What kinds of unacceptable testing outcomes do you worry about?
For each project risk, you have four options:
- Mitigation: Reduce the likelihood or impact through preventive steps
- Contingency: Have a plan in place to reduce the impact
- Transfer: Get some other party to accept the consequences
- Accept (or ignore): Do nothing about it (best if both likelihood and impact are low!)
Insurance (a form of transference) is usually not available

30 Sample Test-related Project Risks
- Logistics or product quality problems block tests
- Test deliverables won't install
- Excessive change invalidates results and requires test updates
- Insufficient or unrealistic test environment(s)
- Test environment support unreliable
- Gaps in test coverage revealed during test execution
- Slips in test start dates and/or deliverables to test
- Budget and/or staffing cuts
- Debugging in the test environment

31 Risk Based Testing Pragmatic Risk Analysis and Management Risk Based Testing in the Project Exercise 3: Managing project risks

32 Exercise: Omninet Testing Risks
- For the same project as before, identify three to five project risks
- For each risk, select a method for managing the risk
- Discuss your risks and management methods with the class

33 Risk Based Testing Pragmatic Risk Analysis and Management For More Information

34 Contact RBCS
For over 20 years, RBCS has delivered consulting, training, and expert services to clients, helping them with software and hardware testing. Employing the industry's most experienced and recognized consultants, RBCS advises its clients, trains their employees, conducts product testing, builds and improves testing groups, and hires testing staff for hundreds of clients worldwide. Ranging from Fortune 20 companies to start-ups, RBCS clients save time and money through improved product development, decreased tech support calls, improved corporate reputation, and more.
Address: RBCS, Inc, Beck Road, Bulverde, TX, USA
Phone: +1 (830)
Email: info@rbcs-us.com
Twitter: @LaikaTestDog
LinkedIn:
YouTube: