INF 3121 Software Testing - Lecture 05. Test Management


INF 3121 Software Testing - Lecture 05: Test Management
INF3121 / 23.02.2016 / Raluca Florea

Agenda:
1. Test organization (20 min)
2. Test planning and estimation (25 min)
3. Test progress monitoring and control (15 min)
4. Configuration management (10 min)
5. Risk and testing (10 min)
6. Incident management (10 min)

1. Test organization (20 min)
LO: Describe the levels of independent testing
LO: Explain the benefits of having independent testing of software
LO: Enumerate which organization roles can be involved in independent testing. Explain the contribution that each role can make
LO: Enumerate the typical tasks of testers and of test leaders

Test organization and independence
The effectiveness of finding defects by testing and reviews can be improved by using independent testers. Options for independence are:
1. No independent testers. Developers test their own code.
2. Independent testers within the development teams.
3. An independent test team or group within the organization, reporting to project management or executive management.
4. Independent testers from the business organization or user community.
5. Independent test specialists for specific test targets, such as usability testers, security testers or certification testers (who certify a software product against standards and regulations).
6. Independent testers outsourced or external to the organization. (The highest level of independence, but less used in practice.)

Test organization and independence
For large, complex or safety-critical projects, it is usually best to have multiple levels of testing, with some or all of the levels done by independent testers. Development staff can participate in testing, especially at the lower levels.

Advantages & disadvantages of independent testing
+ Independent testers see other and different defects, and are unbiased.
+ An independent tester can verify assumptions people made during specification and implementation of the system.
- Isolation from the development team (if treated as totally independent).
- Independent testers may be the bottleneck as the last checkpoint.
- Developers may lose a sense of responsibility for quality.
Note: Testing tasks may be done by people in a specific testing role, or by someone in another role, such as: project manager, quality manager, developer, business and domain expert, infrastructure or IT operations (this can be both good and bad).

Tasks of the test leader
Test leader = test manager / test coordinator. The test leader plans, monitors and controls the testing activities and tasks.

Tasks of the test leader
- Plan the tests: coordinate the test strategy and plan with project managers; understand the test objectives and risks, including selecting test approaches, estimating the time, effort and cost of testing, acquiring resources, defining test levels and cycles, and planning incident management.
- Test specs, preparation and execution: initiate the specification, preparation, implementation and execution of tests; monitor the test results and check the exit criteria.
- Adapt planning: based on test results and progress, take any action needed to compensate for problems.
- Manage test configuration: set up adequate configuration management of testware for traceability.
- Introduce metrics: for measuring test progress and evaluating the quality of the testing and of the product.
- Automation of tests: decide what should be automated, to what degree, and how.
- Select test tools: select tools to support testing and organize training for tool users.
- Test environment: decide about the implementation of the test environment.
- Test summary reports: write test summary reports based on the information gathered during testing.

Tasks of the tester
- Test plans: review and contribute to test plans.
- Requirements and specs: analyze, review and assess user requirements, specifications and models for testability.
- Test specifications: create test specifications.
- Test environment: set up the test environment (often coordinating with system administration and network management).
- Test data: prepare and acquire test data.
- Testing process: implement tests on all test levels, execute and log the tests, evaluate the results and document the deviations from expected results.
- Test tools: use test administration or management tools and test monitoring tools as required.
- Test automation: automate tests (may be supported by a developer or a test automation expert).
- Other metrics: measure performance of components and systems (if applicable).
- Help the others: review tests developed by others.

2. Test planning and estimation (25 min)
LO: Enumerate the different levels and objectives of test planning
LO: Explain the purpose and the content of the test plan
LO: Differentiate between the different test approaches: analytical, model-based, methodical, process-compliant, heuristic, consultative, regression-averse
LO: Write a test execution schedule for a given set of test cases, considering prioritization and technical and logical dependencies
LO: Recall the typical factors that influence the effort put into testing
LO: Differentiate between the metrics-based approach and the expert-based approach
LO: Define entry criteria and exit criteria, together with their goals

Test planning
Test planning is a continuous activity and is performed in all life cycle processes and activities. Feedback from test activities is used to recognize changing risks so that planning can be adjusted.
Planning may be documented in a project or master test plan and in separate test plans for test levels, such as system testing and acceptance testing.
Planning is influenced by:
- the test policy of the organization
- the scope of testing
- objectives, risks, constraints
- criticality, testability and the availability of resources
* Outlines of test planning documents are covered by the Standard for Software Test Documentation (IEEE 829).

Test planning activities
- Scope and risk: determining the scope and risks of testing.
- Objectives: identifying the objectives of testing.
- Overall approach: defining the overall approach of testing, including the definition of the test levels and of entry and exit criteria.
- Test activities: integrating and coordinating the testing activities into the software life cycle activities: acquisition, supply, development, operation and maintenance.
- Strategy: making decisions about what to test, what roles will perform the test activities, how the test activities should be done, and how the test results will be evaluated.
- Schedule: scheduling test analysis and design activities, as well as test implementation, execution and evaluation (a small scheduling sketch follows below).
- Resources: assigning resources for the different activities defined.
- Metrics: selecting metrics for monitoring and controlling test preparation and execution, defect resolution and risk issues.
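The scheduling learning objective above (prioritization combined with technical and logical dependencies) amounts to a priority-ordered topological sort. A minimal sketch in Python; the test case names, priorities and dependencies are invented for illustration:

```python
import heapq

# Hypothetical test cases: name -> (priority, prerequisites).
# Lower number = higher priority; a test may only run after every
# test it depends on (e.g. create a user before deleting one).
TESTS = {
    "TC1_create_user":  (1, []),
    "TC2_login":        (2, ["TC1_create_user"]),
    "TC3_edit_profile": (3, ["TC2_login"]),
    "TC4_delete_user":  (1, ["TC1_create_user"]),
}

def schedule(tests):
    """Priority-ordered topological sort of the test cases."""
    waiting = {name: set(deps) for name, (_, deps) in tests.items()}
    ready = [(prio, name) for name, (prio, deps) in tests.items() if not deps]
    heapq.heapify(ready)
    order = []
    while ready:
        _, name = heapq.heappop(ready)
        order.append(name)
        for other, deps in waiting.items():
            if name in deps:                      # prerequisite satisfied
                deps.remove(name)
                if not deps:                      # all prerequisites done
                    heapq.heappush(ready, (tests[other][0], other))
    if len(order) != len(tests):
        raise ValueError("cyclic dependency between test cases")
    return order

print(schedule(TESTS))
# ['TC1_create_user', 'TC4_delete_user', 'TC2_login', 'TC3_edit_profile']
```

A test becomes eligible only when all its prerequisites have run; among eligible tests, the highest-priority one runs first.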

Entry criteria
Entry criteria define when to start testing. Typically, entry criteria may consist of:
- Test environment availability and readiness
- Test tool readiness in the test environment
- Testable code availability
- Test data availability

Exit criteria
The purpose of exit criteria is to define when to stop testing, such as at the end of a test level or when a set of tests has achieved a specific goal. Typically, exit criteria may consist of:
- Thoroughness measures: coverage of code, functionality coverage, risk coverage
- Estimates: defect density, reliability measures
- Cost
- Residual risks: defects not fixed, lack of test coverage in some areas
- Schedule: time to market
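In practice, exit criteria are checked by comparing measured values against thresholds agreed in the test plan. A minimal sketch; the metric names and threshold values are assumptions for illustration, not prescribed by the syllabus:

```python
# Thresholds as they might be agreed in a test plan (invented values).
EXIT_CRITERIA = {
    "code_coverage":      lambda v: v >= 0.80,  # thoroughness measure
    "defect_density":     lambda v: v <= 0.5,   # estimate, defects per KLOC
    "open_critical_bugs": lambda v: v == 0,     # residual risk
}

measured = {"code_coverage": 0.83,
            "defect_density": 0.4,
            "open_critical_bugs": 2}

unmet = [name for name, met in EXIT_CRITERIA.items() if not met(measured[name])]
print("Stop testing" if not unmet else f"Continue testing, unmet: {unmet}")
# Continue testing, unmet: ['open_critical_bugs']
```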

Test estimation
Two approaches for the estimation of test effort are covered in this syllabus:
- The metrics-based approach: estimating the testing effort based on metrics of former or similar projects, or based on typical values.
- The expert-based approach: estimating the tasks by the owner of these tasks or by experts.
Once the test effort is estimated, resources can be identified and a schedule can be drawn up. The testing effort may depend on a number of factors, including:
- Characteristics of the product: the quality of the specification, the size of the product, the complexity of the problem domain, the requirements for reliability and security.
- Characteristics of the development process: the skills of the people involved, time pressure.
- The outcome of testing: the number of defects, the amount of rework required.
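As an illustration of the metrics-based approach, historical productivity can be scaled to the new product's size and adjusted for the factors listed above. All figures below are invented:

```python
# Historical data from a former, similar project (invented figures).
past_test_cases   = 400
past_effort_hours = 600                      # design, execution, retesting
hours_per_case    = past_effort_hours / past_test_cases   # 1.5 h per case

# New project, adjusted for product and process characteristics.
new_test_cases      = 550
complexity_factor   = 1.2   # harder problem domain than the old project
spec_quality_factor = 1.1   # weaker specification -> more analysis work

estimate = (new_test_cases * hours_per_case
            * complexity_factor * spec_quality_factor)
print(f"Estimated test effort: {estimate:.0f} hours")   # ~1089 hours
```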

Test approaches (test strategies)
One way to classify test approaches or strategies is based on the point in time at which the bulk of the test design work is begun:
- Preventative approaches: tests are designed as early as possible.
- Reactive approaches: test design comes after the software or system has been produced.
* Different approaches may be combined.

Test approaches (test strategies)
Typical approaches or strategies include:
- Analytical approaches: e.g. risk-based testing, where testing is directed to the areas of greatest risk.
- Model-based approaches: e.g. stochastic testing using statistical information about failure rates (such as reliability growth models).
- Methodical approaches: failure-based (including error guessing and fault attacks), experience-based, checklist-based, and quality-characteristic-based.
- Process- or standard-compliant approaches: specified by industry-specific standards or the various agile methodologies.
- Dynamic and heuristic approaches: e.g. exploratory testing, where execution and evaluation are concurrent tasks.
- Consultative approaches: test coverage is evaluated by domain experts outside the test team.
- Regression-averse approaches: include reuse of existing test material and extensive automation of functional regression tests.
* Different approaches may be combined.

3. Test progress monitoring and control (15 min)
LO: Recall common metrics used for test preparation and execution
LO: Explain and compare metrics used for test reporting (e.g. defects found & fixed, tests passed & failed)
LO: Summarize the content of the test summary report, according to IEEE 829

Test progress monitoring
The purpose of test monitoring is to give feedback and visibility about test activities. Information to be monitored may be collected manually or automatically and may be used to measure exit criteria, such as coverage. Metrics may also be used to assess progress against the planned schedule and budget. Common test metrics include:
- Percentage of work done in test case preparation
- Percentage of work done in test environment preparation
- Dates of test milestones
- Test case execution (e.g. number of tests run/not run)
- Defect information (e.g. defect density, defects found and fixed)
- Testing costs, including the cost compared to the benefit of finding the next defect or running the next test
- Test coverage of requirements, risks or code
- Subjective confidence of testers in the product
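Several of these metrics are simple ratios over counts that a test management tool already records. A sketch with invented numbers:

```python
# Illustrative raw counts, as a test management tool might report them.
tests_total, tests_run, tests_passed = 200, 150, 135
defects_found, defects_fixed = 48, 40
kloc = 120                    # size of the code under test (thousand LOC)

execution_progress = tests_run / tests_total        # share of tests executed
pass_rate          = tests_passed / tests_run       # share of executed tests passing
fix_rate           = defects_fixed / defects_found  # share of found defects fixed
defect_density     = defects_found / kloc           # defects per KLOC

print(f"executed {execution_progress:.0%}, passing {pass_rate:.0%}, "
      f"fixed {fix_rate:.0%}, density {defect_density:.2f}/KLOC")
# executed 75%, passing 90%, fixed 83%, density 0.40/KLOC
```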

Test reporting
Test reporting is concerned with summarizing information about the testing endeavor, including:
- What happened during a period of testing (e.g. dates when exit criteria were met)
- Analyzed metrics to support decisions about future actions (e.g. the economic benefit of continued testing)
Metrics are collected at the end of a test level in order to assess:
- The adequacy of the test objectives for that test level
- The adequacy of the test approaches taken
- The effectiveness of the testing with respect to its objectives
* The outline of a test summary report is given in the Standard for Software Test Documentation (IEEE 829).

Test control
Test control describes any guiding or corrective actions taken as a result of information and metrics gathered and reported. Examples of test control actions are:
- Making decisions based on information from test monitoring
- Re-prioritizing tests when an identified risk occurs (e.g. software delivered late)
- Changing the test schedule due to the availability of a test environment
- Setting an entry criterion requiring fixes to have been retested (confirmation tested) by a developer before accepting them into a build

4. Configuration management (10 min)
LO: Explain why configuration management is necessary in software development and testing
LO: Enumerate software artifacts that need to be under configuration management

Configuration management
The purpose of configuration management is to establish and maintain the integrity of the products of the software (components, data, documentation) through the project and product life cycle.
For testing, configuration management may involve ensuring that:
- All items of testware are
  o identified
  o version controlled
  o tracked for changes
  so that traceability can be maintained throughout the test process.
- All identified documents and software items are referenced unambiguously in test documentation.
For the tester, configuration management helps to uniquely identify (and to reproduce):
- the tested item
- test documents
- the tests
- the test harness
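One common way to make tested items and testware uniquely identifiable and reproducible is to record a version label and a content hash for each item. A minimal sketch; the manifest format is an assumption for illustration, not part of the syllabus:

```python
import hashlib, json, pathlib

def testware_manifest(paths, version_label):
    """Record name, version and content hash for each testware item so a
    test run can later be traced back to the exact file contents used."""
    items = []
    for path in map(pathlib.Path, paths):
        digest = hashlib.sha256(path.read_bytes()).hexdigest()[:12]
        items.append({"item": path.name,
                      "version": version_label,
                      "sha256": digest})
    return json.dumps(items, indent=2)

# Usage (hypothetical file names):
# print(testware_manifest(["test_login.py", "accounts.csv"], "rel-2.3"))
```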

5. Risk and testing (10 min)
LO: Define and explain the concept of risk. Describe how risk is calculated
LO: Describe the differences between project risks and product risks

Risk and testing
Risk = (def.) the chance of an event, hazard, threat or situation occurring and of its undesirable consequences; a potential problem.
The level of risk is determined by:
- the likelihood of the adverse event happening
- the impact (the harm resulting from that event)
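This definition is commonly operationalized as risk level = likelihood × impact, with both factors scored on a small ordinal scale. A worked sketch; the 1-5 scales and the example risks are illustrative assumptions:

```python
# Likelihood and impact scored 1 (low) .. 5 (high); their product is a
# crude risk level used to decide where to test first and deepest.
risks = {
    "payment rounding error": (3, 5),   # (likelihood, impact)
    "data loss on crash":     (2, 5),
    "tooltip typo":           (4, 1),
}

ranked = sorted(risks.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (likelihood, impact) in ranked:
    print(f"{name}: risk level {likelihood * impact}")
# payment rounding error: risk level 15
# data loss on crash: risk level 10
# tooltip typo: risk level 4
```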

Project risks
Project risks = the risks that surround the project's capability to deliver its objectives, such as:
- Organizational factors: skill and staff shortages; personnel and training issues; political issues (e.g. problems with testers communicating their needs and test results); improper attitude toward testing (e.g. not appreciating the value of finding defects during testing).
- Technical issues: problems in defining the right requirements; the extent to which requirements can be met given existing constraints; the quality of the design, code and tests.
- Supplier issues: failure of a third party; contractual issues.
When analyzing, managing and mitigating these risks, the test manager is following well-established project management principles (see IEEE 829).

Product risks
Product risks = potential failure areas in the software. They are a risk to the quality of the product, e.g.:
- Failure-prone software delivered
- The potential that the software/hardware could cause harm to an individual or company
- Poor software characteristics (e.g. functionality, reliability, usability and performance)
- Software that does not perform its intended functions
Risks are used to decide where to start testing and where to test more.

Product risks
Testing is used to:
- reduce the risk of an adverse effect occurring
- reduce the impact of an adverse effect
In a risk-based approach, the risks identified may be used to:
- Determine the test techniques to be employed
- Determine the extent of testing to be carried out
- Prioritize testing in an attempt to find the critical defects as early as possible
- Determine whether any non-testing activities could be employed to reduce risk (e.g. providing training to inexperienced designers)

6. Incident management (10 min)
LO: Describe the content of a typical incident report
LO: Write an incident report for a bug you have discovered in a software product

Incident management
Incident = (def.) a discrepancy between actual and expected test outcomes.
When to raise incidents: during development, review, testing or use of a software product.
Incident reports pass through a set of statuses over their lifecycle as they are investigated and resolved.
Objectives of incident reports:
- Provide developers and other parties with feedback about the problem, to enable identification, isolation and correction as necessary
- Provide test leaders a means of tracking the quality of the system under test and the progress of the testing
- Provide ideas for test process improvement

Incident reports
Details of the incident report may include (cf. IEEE 829):
- Date, project, programmer, tester
- Program/module, build/revision/release
- Software environment, hardware environment
- Status of the incident, number of occurrences
- Severity, impact, priority
- Detailed description (logs, databases, screenshots)
- Expected result / actual result
- Change history
- References (including the identity of the test case specification that revealed the problem)
- Assigned to
- Incident resolution
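These fields map naturally onto a structured record. A minimal sketch of such an incident report as a Python dataclass; the field subset and all values are illustrative, not an IEEE 829 schema:

```python
from dataclasses import dataclass

@dataclass
class IncidentReport:
    # Identification
    date: str
    project: str
    tester: str
    module: str
    build: str
    # Classification
    severity: str              # harm caused, e.g. "major"
    priority: str              # urgency of fixing, e.g. "high"
    status: str = "new"        # lifecycle status of the incident
    occurrences: int = 1
    # Content
    description: str = ""      # logs, databases, screenshots referenced here
    expected: str = ""
    actual: str = ""
    test_case_ref: str = ""    # test case specification that revealed it
    assigned_to: str = ""

bug = IncidentReport(
    date="2016-02-23", project="demo", tester="J. Doe",
    module="login", build="rel-2.3.1", severity="major", priority="high",
    description="Login form accepts an empty password",
    expected="Validation error is shown", actual="User is logged in",
    test_case_ref="TC-LOGIN-007", assigned_to="dev-team")
print(bug.status, "-", bug.description)
# new - Login form accepts an empty password
```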