
8.0 Test Management

Outline
8.1 Test organisation
8.2 Test planning and estimation
8.3 Test progress monitoring and control
8.4 Configuration management
8.5 Risk and testing
8.6 Summary

Independent Testing
Test tasks can be carried out by people in a dedicated test role or by people in other roles, for example project managers, quality managers, developers, technical and domain experts, or staff from the infrastructure or IT department. Independent testers improve the effectiveness of defect finding.

Advantages and Disadvantages of Independent Testing
Advantages:
- Independent testers are impartial and therefore see other, different possibilities for defects to test for.
- Independent testers can test (possibly wrong) assumptions made by developers when specifying and implementing the system.
Disadvantages:
- Higher communication effort because of the separation of the testers from the development team (in complete independence).
- An independent test team can become a bottleneck when it is the final testing instance (in case of poor planning or insufficient equipment).
- There is a danger that developers no longer take sufficient responsibility for quality and instead hand it over to the independent testers.

Spectrum of Independence
From dependent to fully independent (a professional testing department):
- No independent testers; developers test their own code
- Independent testers within the development teams
- Independent test team or group within the organisation, reporting to project management or executive management
- Independent testers from the business organisation or user community
- Independent test specialists for specific test targets, such as usability testers, security testers or certification testers
- Independent testers outsourced or external to the organisation

Independence and Test Organisation

Testing Roles within the Team: Independent Testing, Who Does What
- Where possible, some or all of the testing should be carried out by independent testers.
- Developers may contribute at the lower levels (typically component testing and perhaps component integration testing).
- Specialist testers mainly play a role at the functional, non-functional and system integration test levels.
- The business and system administrators provide testing for acceptance testing (e.g. UAT, OAT).
- The test processes and rules are often defined by the independent testers but must be agreed with management.

Why Independence?
Given complex and/or critical applications, one needs:
- Multiple levels of testing
- Some or all of the levels done by independent testers
Development testing, while important, is typically much less effective at finding bugs. If independent testers want (or have) the authority to require and define processes and rules, they should be given clear direction.

The Test Leader
The test leader is also known as test manager or test coordinator. This role may also be filled by a:
- Project manager
- QA manager
- Development manager
In large projects the role may be split into a test manager and a test (team) leader. The key tasks of the leader are test planning, test monitoring and test control.

Strategy & Management: The Test Leader's Tasks
Strategy and planning:
- Write, review and coordinate the test strategy
- Plan the testing effort: context, risks and approach
- Represent testing proactively in other project activities to ensure testing has the correct focus
- Ensure proper configuration management of testware exists
- Determine what should be automated; select and implement the most appropriate testing tools, including any necessary training
- Manage and define the test environment requirements
- Define the test schedule based on the delivery of code into test
Monitor:
- Define, record and continually review the project's testing metrics
- Monitor test progress against the test schedule
- Write the test summary reports
Control:
- Adapt the testing effort based on results and progress

Test Leads / Test Managers
- Devise test strategies and plans
- Write or review the test policy
- Consult on testing for other project activities
- Test estimation
- Test resource acquisition
- Lead specification, preparation, implementation and execution of tests
- Monitor and control the test execution
- Adapt the test plan based on test results
- Ensure configuration management of testware
- Ensure traceability
- Measure test progress; evaluate the quality of the testing and of the product
- Plan any test automation
- Select tools and organise any tester training
- Ensure implementation of the test environment
- Schedule tests
- Write test summary reports

Testers' Roles
- Review and contribute to test plans
- Analyse, review and assess user requirements and specifications
- Create test suites, cases, data and procedures
- Set up the test environment
- Implement tests on all test levels
- Execute and log the tests, evaluate results and document problems found
- Monitor testing using the appropriate tools
- Automate tests
- Measure performance of components and systems
- Review each other's tests

Refining the Tester Position
Test engineers:
- Technical peers of programmers who chose testing as a specialty
- Write test cases and organise test suites
- Create, customise and use advanced test tools
- Have unique skills
Test technicians:
- Skilled and experienced testers, possibly aspiring test engineers
- Run tests, report bugs and update test status
- Assist the test engineers
Other test team members:
- System and database administrators
- Release and configuration engineers
- Test toolsmiths (automation test engineers)

8.2 Test Planning and Estimation
Topics: test planning, test planning activities, exit criteria, test estimation, test approaches

Test Planning: Test Strategies and Test Plans
All projects require a set of plans and strategies which define how the testing will be conducted. There are a number of levels at which these are defined:
- Test policy: defines how the organisation will conduct testing
- Master test plan / test strategy: defines how the project will conduct testing
- Level test plans (e.g. functional test plan, system integration test plan, UAT test plan): define how each level of testing will be conducted

Master Test Plan - Structure

Test Planning: Activities (1)
- Defining the project-specific test strategies, the test levels, and the entry and exit criteria
- Integrating and coordinating the test activities with the activities in the software lifecycle (from the analysis phase through to execution and maintenance)
- Making decisions: which parts are to be tested and how intensively; by whom, when and how the test activities are to be executed; how the test results are to be evaluated; and when the exit criteria are fulfilled
- Allocating the necessary resources

Test Planning: Activities (2)
- Defining the volume, degree of detail and structure of the test documentation (as well as preparing templates)
- Selecting metrics to monitor and control test preparation and execution, defect resolution and risk factors
- Defining the level of detail of the test specifications so as to ensure reproducible test execution

Test Planning: Points to be Considered
Development phase: The software available at the beginning of a test iteration may differ from the original plans and may have limitations or changes to its functionality. This may require adjustments to test specifications and test cases.
Test results: Problems detected in previous test iterations may result in changes to the test priorities. Corrected defects require re-tests, which need to be newly planned. Additional tests might also be needed if problems cannot be completely reproduced and analysed.
Resources: The planning of the current test iteration must be coordinated with the current project plan. Aspects to be considered include the effects of current staffing commitments, vacation planning, the current availability of the test environment, and any special testing tools needed.

Test Planning: Factors to be Considered (1)
Maturity of the software development process:
- Frequency of errors made by the developers
- Amount of software change
- Validity, consistency and informative value of plans
- Discipline of configuration and change management
Testability of the software:
- Informative value, quality and timeliness of the documentation used as the test basis
- Type of software (e.g. embedded) and system environment
- Complexity of the software
Test infrastructure:
- Availability of testing tools
- Availability of test harnesses and test infrastructure
- Availability and awareness of the test process, standards and procedures

Test Planning: Factors to be Considered (2)
Staff qualification and skills:
- Experience and know-how of the testers regarding testing, testing tools, the test context and the test object
- Cooperation between testers, developers, management and customers
Project- and product-specific requirements:
- Quality goals
- Risk classes and criticality
- Test objectives and targeted test coverage
- Target residual error rate or reliability after test completion
Test guidelines of the organisation and test strategy:
- Scope of the test levels (component, integration, system, acceptance testing, ...)
- Selection of test techniques (which black-box or white-box techniques)
- Schedule of tests (beginning and execution of test tasks in a project or in a software lifecycle)

Transitions: Entry Criteria
Entry criteria measure whether the system is ready for a particular test phase:
- Deliverables (test objects, test items): ready and testable?
- Lab (including test cases, test data, test environment and test tools): ready?
- Teams (developers, testers, others): ready?
These tend to become increasingly rigorous as the phases proceed ("System test can begin when ...").

Transitions: Exit Criteria
The purpose of exit criteria is to define when we STOP testing, either at the end of all testing (i.e. product go-live) or at the end of a phase of testing (e.g. hand-over from system test to UAT). Exit criteria typically measure:
- Thoroughness, such as coverage of requirements, code or risks
- Estimates of defect density or reliability (e.g. how many defects are open, by category)
- Cost
- Residual risks, such as defects not fixed or lack of test coverage in certain areas
- Schedules, such as those based on time to market
Remember that these are business decisions ("The system test will end when ...").
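A minimal Python sketch of how such exit criteria could be checked automatically from collected metrics; the metric names and thresholds are this example's own assumptions, not values prescribed by the syllabus:

# Sketch: evaluating example exit criteria from collected test metrics.
# All field names and thresholds below are illustrative assumptions.
def exit_criteria_met(metrics: dict) -> bool:
    """Return True when every example exit criterion holds."""
    checks = [
        # Thoroughness: at least 95% of requirements covered by passed tests
        metrics["requirements_covered_pct"] >= 95.0,
        # Residual risk: no open defects of the highest severity
        metrics["open_critical_defects"] == 0,
        # Quality estimate: defect density below an agreed ceiling (per KLOC)
        metrics["defect_density_per_kloc"] <= 0.5,
        # Schedule: the delivery milestone has not yet been overrun
        metrics["days_remaining_to_milestone"] >= 0,
    ]
    return all(checks)

status = {
    "requirements_covered_pct": 96.4,
    "open_critical_defects": 0,
    "defect_density_per_kloc": 0.3,
    "days_remaining_to_milestone": 2,
}
print("System test may end:", exit_criteria_met(status))

Such a check only summarises the evidence; as the slide notes, the decision to stop remains a business decision.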

How Do We Know When to Stop Testing?
- Run out of time?
- Run out of budget?
- The business tells you it went live last night!
- The boss says stop?
- All defects have been fixed?
- When our exit criteria have been met?

Test Strategy/Procedure and Test Approach
- Test strategy: the general way in which a test team goes about testing (the IEEE 829 test plan includes a test approach)
- Test approach: the implementation of the test strategy for a specific project; defined in the test plan and refined in the test design
The test approach includes the decisions made about testing; different situations call for different approaches.

Test Procedure (1)
Test approaches can be classified by when the test design is started:
- Preventive approach: design tests as early as possible
- Reactive approach: design tests after programming
Typical approaches or strategies include:
- Analytic approaches, e.g. risk-based testing with a focus on the areas with the highest risk of defects (analysis of the test object)
- Model-based approaches, e.g. stochastic testing using statistical information about failure rates (e.g. reliability growth models) or system usage (e.g. usage profiles)
- Methodical approaches, e.g. defect-based testing (error guessing), testing based on checklists (experience-based), and quality-characteristic testing

Test Approaches (timeline)
In the preventative approach, test design is done between requirements delivery and code delivery; in the reactive approach, test design is done after the code is delivered, before go-live.

Test Approaches
- Pre-emptive / preventive: early testing, before coding
- Reactive / dynamic: only after coding
- Analytical: e.g. risk-based and requirement-based testing, black-box and white-box techniques
- Heuristic: e.g. experience-based testing
A preventative, risk- and process-based approach would start testing early and use a standard industry approach such as the V-model, using the defined risks as a basis for testing. A reactive, heuristic approach would start after the code has been delivered and use unplanned testing techniques such as exploratory testing.

Selection of the Test Procedure
Selection is of the highest importance and should consider the relevant overall conditions, including:
- The risk of project failure
- The risk to the project and to people, the environment and the enterprise from product defects or failures
- The qualification and experience of the people in their respective fields, tools and methods
- The test objectives and the assignment of the test teams
- The formality of the development process
- The product objectives, type and business domain
Select a mixture of procedures which, under the overall conditions, represents an optimal relationship between test cost, available resources and expected defect costs. Test costs should remain visibly lower than the costs of unresolved defects and deficiencies in the end product. Metrics which allow this cost-benefit relation to be quantified are rarely available to companies developing software.

Not-to-Test Can Be Expensive
Testing can become very extensive and be an important cost factor in a development project. The test manager needs to find out:
- How much test expense is appropriate for a certain software project?
- When do test expenses exceed their potential benefit with respect to failure costs?
To answer this, the cost of failures must be considered when deciding which tests to execute or not to execute. A balance needs to be achieved between failure costs and test expenses.
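The trade-off can be illustrated with simple arithmetic. The following sketch compares the cost of one more test iteration with the failure cost it is expected to avoid; all figures are invented for illustration:

# Sketch: does the next test iteration pay for itself?
# All numbers below are invented, illustrative values.
cost_per_iteration = 20_000          # staff, environment and tooling per iteration
defects_found_per_iteration = 8      # defects the next iteration is expected to find
avg_cost_per_field_failure = 5_000   # average cost if a defect escapes to production

expected_savings = defects_found_per_iteration * avg_cost_per_field_failure
if expected_savings > cost_per_iteration:
    print(f"Keep testing: expected net saving of {expected_savings - cost_per_iteration}")
else:
    print("Stop: the next iteration is likely to cost more than it saves")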

Test Complexity
Test complexity depends on several factors, among others:
- Characteristics of the product: quality of the test basis (specifications and further documents which are referred to for testing), the size and testability of the product, the complexity and domain of definition, standards on reliability, security and performance, as well as the volume of documentation
- Characteristics of the development process: consistency of the organisation and maturity of the development and testing processes, continuity of the tools used, the adopted test infrastructure and the respective test strategy, the capabilities of the staff, and the existing time pressure
- Results of the test: the number of defects detected influences the extra effort needed for re-work (correction and confirmation testing)
Only once the test complexity has been estimated can test resources be calculated and a test schedule created.

Break Exercise 1, Question 1
Within your group, answer this question immediately. Consider the use of reactive testing as opposed to pre-designed tests. In the table below, put a + in the column where the factor or concern motivates towards using the approach, and a - in the column where it motivates away from using the approach.

Approaches to Estimate Testing Efforts
There are two approaches for estimating the testing effort:
- Expert estimation, by the people in charge of the tasks or by external experts
- Analogy estimation, based on metrics from previous or similar projects, or on typical values (the metric-based approach)

Expert Estimation (1)
Estimation by a single expert: experts (often the test manager or experienced testers) estimate the effort on the basis of the task description and their experience.
Advantages:
- Easiest procedure
- Fast; no computing, no parameter estimation
Disadvantages:
- High subjectivity
- Uncertainty in size estimation
- No transparency of the estimation process

Expert Estimation (2)
Estimation by many experts:
- Multiple query: several estimators, if possible from different fields of the organisation, are asked; an average value is taken or, in case of strong differences, the estimates are discussed and a consensus attempted
- Delphi estimation technique: a formal, written query of several experts, with two or more interview sessions and meetings to discuss the intermediate results of the previous round
- Estimation workshop: transferring the Delphi principle to a single group meeting
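A small sketch of how a multiple-query estimate might be aggregated, flagging strong disagreement as the trigger for a further Delphi round; the spread limit is an arbitrary assumption of this sketch:

# Sketch: aggregating several expert estimates (multiple query / Delphi style).
from statistics import mean, stdev

def consensus(estimates_days: list[float], spread_limit: float = 0.25) -> float:
    """Return the mean estimate; flag strong disagreement for discussion."""
    avg = mean(estimates_days)
    # Relative spread as a crude disagreement measure (assumption of this sketch)
    if stdev(estimates_days) / avg > spread_limit:
        raise ValueError("Estimates diverge: discuss and re-estimate (next round)")
    return avg

print(consensus([30, 34, 28, 32]))  # four experts agree closely -> 31.0 days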

Analogy Estimation
The test project to be estimated is compared with one or more similar, already finished projects:
- Same or similar field of application or assignment of tasks
- Same or similar product size
- Same or similar context conditions (e.g. project team, development environment and similar)
Differences in task assignment and implementation conditions are accounted for by the estimator based on their intuitive experience.
Advantages:
- Estimation is possible at very early project stages
- Effort is derived from actual project values
- Estimation of the complete project as well as at system level
Disadvantages:
- It is difficult to evaluate how representative the analogy project is
- High subjectivity of the applied discounts and/or surcharges
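A hedged sketch of the metric-based idea behind analogy estimation: scale a finished project's actual test effort by the size ratio, then apply the estimator's subjective adjustment. The linear scaling and all figures are assumptions of this example:

# Sketch: analogy estimation by scaling a reference project's effort.
def analogy_estimate(ref_effort_days: float, ref_size_kloc: float,
                     new_size_kloc: float, adjustment: float = 1.0) -> float:
    """Scale effort linearly by size; 'adjustment' encodes the estimator's
    intuitive discounts or surcharges (e.g. 1.1 = +10% for a harder context)."""
    return ref_effort_days * (new_size_kloc / ref_size_kloc) * adjustment

# Reference project: 120 test days for 40 KLOC; new project: 55 KLOC,
# judged slightly harder (+10%):
print(round(analogy_estimate(120, 40, 55, adjustment=1.1)))  # about 182 days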

Test Progress Monitoring (1)

Test Progress Monitoring (2)

Test Progress Monitoring (3)

Exit Criteria and Test Progress Monitoring
The objective of exit criteria is to determine when testing can be terminated, e.g. at the end of a test level, or when the testing has achieved a specified result. Typical exit criteria include:
- Coverage measurements, e.g. coverage of code, functionality or risk
- Estimates of (residual) defect density or reliability
- The amount of remaining risk, such as detected but unresolved defects, or lacking test coverage in certain software parts
- Costs
- Time, e.g. milestones for delivery or deployment to the market
Test progress monitoring delivers feedback on, and an overview of, the test activities. The information required to measure the exit criteria can be collected manually or automatically. Metrics can also be collected to monitor progress against the schedule and adherence to the budget.

Test Status Control
The measured data is used to analyse the testing status and to answer the following questions:
- How far has the test progressed?
- Can the test be terminated and the product be delivered?
The specific criteria considered useful and appropriate for determining testing status depend on the quality requirements to be fulfilled (the criticality of the software) and on the availability of test resources (time, staff, tools). The exit criteria defined for the project are documented in the test plan. Each test exit criterion must be chosen in such a way that it can be computed from the continuously monitored test metrics.

Test Status Report (1)
After each test iteration, the test manager creates a test status report containing the following information about the status of the testing:
- Test object(s), test level, test iteration dates (from ... until ...)
- Test progress: what happened during the test time frame (planned/executed/blocked tests)
- Defect status: new/open/corrected defects
- Risks: new/changed/known
- Outlook: planning of the next test iteration
- Overall evaluation: evaluation of remaining defects; estimate of whether continuing the test is economically reasonable; evaluation of the release maturity of the test object; degree of confidence in the test object; decisions about further activities
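The report fields named above map naturally onto a small data structure. A minimal sketch, with field names chosen for this example:

# Sketch: a per-iteration test status record mirroring the report fields above.
from dataclasses import dataclass

@dataclass
class TestStatusReport:
    test_object: str
    test_level: str
    planned: int            # tests planned for the iteration
    executed: int           # tests actually executed
    blocked: int            # tests that could not be run
    defects_new: int
    defects_open: int
    defects_corrected: int

    def progress_pct(self) -> float:
        """Share of planned tests that were actually executed."""
        return 100.0 * self.executed / self.planned if self.planned else 0.0

r = TestStatusReport("payments module", "system test",
                     planned=200, executed=160, blocked=12,
                     defects_new=9, defects_open=21, defects_corrected=14)
print(f"{r.test_object}: {r.progress_pct():.0f}% executed, "
      f"{r.defects_open} defects open")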

Test Status Report (2)
Metrics should be used for evaluation and decision making. The following aspects should be considered:
- Suitability of the test objectives for the test level
- Suitability of the selected test strategy
- Effectiveness of the tests in achieving the defined objectives

Test Control (1)
If there are delays compared to the project plan or test plan, adequate corrective measures are to be taken. For example:
- If newly recognised risks appear (e.g. delayed delivery of software parts), change the prioritisation of the tests
- Adjust the time planning if the availability of the test environment causes delays
- Corrected defects are to be re-tested by a developer before the software is taken further
- If needed, request and deploy additional test resources (staff, work space, tools) in order to catch up with the test plan in the remaining iterations

Test Control (2)
If no additional resources are available, the test plan must be adjusted:
- Low-priority test cases may be cancelled
- Test cases designed in several variants may be executed for only one variant (e.g. tests are executed on one operating system instead of on several, as originally planned)
As a result of these adjustments, some interesting test cases may not be executed, but the saved resources will enable at least the high-priority test cases to be executed.

Test Control (3)
Depending on how serious the detected defects and issues are, the test duration can be extended to allow for additional test iterations. After each correction phase, the changed software has to be re-tested. This can delay the deployment or delivery of the software product. It is important that the test manager documents and communicates all changes to the plan. Changes to the original test plan generally mean an increase in the release risk. It is the responsibility of the test manager to present this risk to the project manager openly, clearly and in a timely manner.

Test Control (Summary)
Test control describes any guiding or corrective action taken by the test manager as a result of the metrics and information gathered and reported. Measures may address any test activity and may affect any other software lifecycle activity or task. Examples of test control measures:
- Re-prioritisation of tests if new or changed risks occur
- Changes to the test schedule based on the availability of the test environment
- Setting an entry criterion that requires defect corrections to be re-tested before the software is passed to the next test level

Configuration Management

Requirements on Configuration Management (1)

Requirements on Configuration Management (2)

Configuration Management and Testing

Test Release Management
- Release schedule
- Update apply (process to install a new build)
- Update unapply (process to remove a bad build)
- Build naming (internal code revision level), e.g. 1.01.017
- Interrogation (process to determine the revision level)
- Synchronising with databases, other systems, etc.
- Roles and responsibilities for each step
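Interrogation and update decisions follow directly from comparable build names. A minimal sketch, assuming the dotted numeric format shown above (e.g. 1.01.017):

# Sketch: interrogating and comparing dotted build revision levels.
def revision_tuple(build: str) -> tuple[int, ...]:
    """Parse a dotted build name such as '1.01.017' into comparable integers."""
    return tuple(int(part) for part in build.split("."))

installed = revision_tuple("1.01.017")
candidate = revision_tuple("1.01.018")

if candidate > installed:    # tuples compare field by field, left to right
    print("Apply update: a newer build is available")
elif candidate < installed:
    print("Unapply: the installed build is ahead of the candidate")
else:
    print("Test environment is in sync")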

Risk-Based Testing

What is a Risk?

Project Risks
The risks that surround the project's capability to deliver its objectives, such as:
- Organisational factors: skill, training and staff shortages; personnel issues
- Technical issues: problems in defining the right requirements; low quality of design, code or test data; test environment not available when needed
- Supplier issues: failure of a third party; contractual issues

Handling Project Risks
For each project risk, you have four options:
- Mitigation: reduce the likelihood or impact through preventive steps
- Contingency: have a plan in place to reduce the impact
- Transfer: get some other party to accept the consequences
- Ignore: do nothing about it

Product Risks
Potential failure areas in the software or system: risks to the quality of the product. They include:
- Failure-prone software being delivered
- Poor data integrity and quality (e.g. data migration issues, data conversion problems, violation of data standards)
- Software that does not perform its intended functions

Break Exercise 2, Question 2
Within your group, answer this question immediately. You are working as a test manager on an online banking project. List five product risks and five project risks for your project.

Risk-Based Testing
The level of risk varies depending on:
- Likelihood (arises from technical risk): the chance that something might happen
- Impact (arises from business risk)
In risk-based testing, testing responds to risk through:
- Allocation of effort, test sequencing, and prioritisation of defect repair
- Providing mitigation and contingency responses
- Reporting test results and project status
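Risk-based prioritisation is often reduced to a score of likelihood times impact. A minimal sketch using invented online-banking risk items and a 1-5 scale for both factors (both the scale and the items are assumptions of this example):

# Sketch: ordering product risks by score = likelihood x impact (1-5 scales).
risks = [
    # (risk item, likelihood, impact)
    ("funds transfer posts the wrong amount", 2, 5),
    ("login page layout glitch",              4, 1),
    ("statement export times out",            3, 3),
]

# Higher score -> test earlier and more intensively
for item, likelihood, impact in sorted(risks, key=lambda r: r[1] * r[2],
                                       reverse=True):
    print(f"score {likelihood * impact:>2}: {item}")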

Risk Impact Scale Example

List of Risks

Examples of Typical Risks and Their Mitigation
- Logistics or product quality problems. Mitigation: careful planning and robust test design.
- Test items that won't install in the test environment. Mitigation: smoke testing prior to starting test phases. Contingency: having a defined uninstall process.

Risk Management