Contractual Aspects of Testing: Some Basic Guidelines


CONTENTS

1 Introduction
  1.1 Background
  1.2 Structure
  1.3 Some Conventions
  1.4 Feedback
2 Test Schedule List of Contents
3 Testing Deliverables
4 Coverage Guidance
  4.1 Definition
  4.2 Usage
  4.3 Custom Coverage
  4.4 Selecting a Coverage Target
  4.5 Coverage in Contract Schedules
5 Contractor Negotiations - Risks
6 Contractor Evaluation Criteria

1 INTRODUCTION

1.1 Background

This document sets out some tables, checklists and guidance for the negotiation and documentation of agreements between customers and suppliers of software systems and services. Although these guidelines are primarily intended to assist with contract negotiations, some aspects may also be of use for audit purposes in existing projects.

1.2 Structure

There are five sections aimed at providing assistance:

- Test Schedule List of Contents - a suggested set of contents for a testing schedule.
- Testing Deliverables - an inventory of potential deliverables for a project, together with a brief description of each.
- Coverage Guidance - guidance on the more formal definition of coverage and how it can be included in a contract schedule.
- Contractor Negotiations - Risks - the main risks associated with contractor negotiations.
- Contractor Evaluation Criteria - some basic criteria, questions to ask and a scoring table for use with contractor evaluations.

1.3 Some Conventions

In the pages that follow, the following conventions are used:

- AUTHORITY refers to the party more commonly known as the customer or purchasing party.
- CONTRACTOR refers to the supplier contracted to deliver the products or services in the contract.

1.4 Feedback

If you find these guidelines useful, or you have any suggestions for improvements or additions, please contact Paul Gerrard by email at paul AT gerrardconsulting DOT com. The address has been obfuscated to confuse spammers' harvesting bots, as this document is available on the gerrardconsulting.com website.

2 TEST SCHEDULE LIST OF CONTENTS

1. Scope of testing
   - Products in scope
   - Products out of scope
2. Goals of testing
3. Approach to testing
   - Design documentation
   - Functional requirements
   - Non-functional requirements
   - COTS components
   - Interfaces to other systems
   - Migrated data
   - Security
   - Usability
   - Use and validation of simulators, stubs and drivers
4. Product risk management
5. Management of sub-contractors' testing
6. Test Phases
   - Objectives
   - Scope
   - Goals and targets
   - Acceptance
   - Environment(s)
     i. Supply
     ii. Timing of supply
     iii. Provision of technical support
   - Roles and responsibilities
   - Entry criteria
   - Exit criteria
   - Test plans
   - Reporting arrangements
7. Change Management
8. Incident management
9. Regression Test Policy
10. Test Tools
11. Documentation standards
12. Deliverables
13. Other (more commercial considerations)
    - Service Levels
    - Charges
    - Contract Management
    - Key subcontractors
    - Continuity of service
    - Insurance
    - Guarantees
    - Intellectual property
    - Contract termination
    - Non-disclosure agreement
    - Locations

3 TESTING DELIVERABLES

The list below presents a potential set of testing-related deliverables. Many are derived directly from the IEEE 829 Standard for Test Documentation, supplemented with additional deliverables deemed necessary to provide a fuller set.

Test Strategy: The CONTRACTOR's overall approach to testing and assurance of ALL software, hardware, infrastructure and documentation deliverables throughout the project. Includes goals, targets, objectives, entry/exit criteria, techniques, tools, deliverables, policies for regression testing, data, checking and analysis, coverage, the incident classification scheme and the process for incident management.

Phase Test Plans: Details the scope, approach, resources and schedule of testing activities for each test phase, including sub-contractor test phases and the means by which those phases will be managed and controlled. A phase test plan refines the statements of approach made in the strategy.

Test Design Specifications: Specifies refinements of the test approach and identifies the artefacts and requirements to be tested by the design and its associated test cases.

Test Case Specification: Defines a test case identified by a test design specification. A test case is a collection of conditions that will be exercised in a single test.

Test Scripts (aka Procedures): Specifies the steps for executing a set of test cases, or for evaluating the functionality or other attributes of any software or hardware component.

Test Readiness Review Report: An assessment of the readiness to commence a test, covering the status of entry criteria, documentation, environments etc.

Test Item Transmittal Report: Identifies the specific items to be released into a test environment for testing in a test phase. It includes details of the item's status, the persons responsible for that status, version, configuration, source and description.

Test Phase Log: A chronological record of all planned and unplanned events. These include, but are not limited to, test passes and failures, environmental outages and other events affecting test planning, preparation or execution.

Incident Reports: Document any anomalies or events occurring during test planning, preparation or execution that require investigation and/or further action.

Incident Analyses (various): Throughout the test phases, various analyses of incidents and their status should be provided. Particular attributes upon which reporting would be done include incident status, associated risk, date raised, date reviewed, date assigned, date corrected, date tested and date signed off.

Inspection/Review Anomaly Logs: Similar to incident reports, these describe anomalies in any component or artefact identified during an inspection or review. The anomalies should be classified and their status recorded.

Test Phase Summary Report: Summarised results of a test phase and evaluations of the items under test. To include a definition of the items under test, the activities undertaken, the status of all incidents relating to that test and a conclusive assessment of the item(s) under test.

Requirements Verification Matrix: A cross reference of the AUTHORITY's requirements and how they are covered by the various test phases (an illustrative sketch of such a matrix follows this list).

Product Risk Register: An inventory of all product risks identified by the project team. Each risk description should include an analysis of impact and probability, the status of the risk, the test objective for the risk, the test phase(s) to which those test objectives have been allocated and the technique(s) to be used to address the risk.

Defect Management Strategy: The CONTRACTOR's approach to defect management. In this context, defect refers to any fault or bug associated with software, hardware or documentation components.

Test Environment Requirement: A specification of the test environment required to support a phase or phases of testing.

Test Environment Specification: A specification of the technical components that comprise a test environment, their configurations, version numbers and status. Will include hardware, software and networking components. May include other environmental aspects, such as desks, phones, terminals, printers, interfaces, access codes and passes, and so on.

Test Data: Test data is a key testware deliverable required to implement a test.

Test Configuration: A definition of particular configurations of infrastructure components in the test environment.

Automated Test Scripts: Specific to a test execution tool or test harness, these scripts may be derived from a Test Design Specification.

Test Results: Actual test outputs in the form of files, hardcopy reports or instrumentation logs that provide evidence of the behaviour of the components or systems under test.

Build Strategy or Sequence: A document specifying the particular construction sequence of the sub-system or systems, and identifying the interfaces that are activated and ready for testing in an integration test.
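As an illustration of how a Requirements Verification Matrix might be held and checked, the sketch below cross-references requirements to the test phases and test cases intended to cover them, so uncovered requirements are easy to spot. It is a minimal sketch only: the requirement identifiers, phase names, case identifiers and the helper function are hypothetical and are not part of the deliverables set above.

```python
# Illustrative sketch of a Requirements Verification Matrix (hypothetical data).
# Each requirement is cross-referenced to the test phase(s) and test case(s)
# intended to cover it, so gaps in coverage are reported explicitly.

from collections import defaultdict

# Hypothetical entries: (requirement id, test phase, test case id)
matrix_entries = [
    ("REQ-001", "System Test", "ST-014"),
    ("REQ-001", "Acceptance Test", "AT-003"),
    ("REQ-002", "Integration Test", "IT-021"),
    # REQ-003 has no entry, so it will be reported as not covered.
]

all_requirements = ["REQ-001", "REQ-002", "REQ-003"]

def coverage_by_requirement(entries, requirements):
    """Group test phases/cases by requirement and include uncovered requirements."""
    covered = defaultdict(list)
    for req, phase, case in entries:
        covered[req].append((phase, case))
    return {req: covered.get(req, []) for req in requirements}

if __name__ == "__main__":
    for req, tests in coverage_by_requirement(matrix_entries, all_requirements).items():
        status = ", ".join(f"{phase}: {case}" for phase, case in tests) or "NOT COVERED"
        print(f"{req}: {status}")
```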

4 COVERAGE GUIDANCE

4.1 Definition

The BS definition is: "The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test case suite."

A Coverage Item represents the fundamental element or attribute of software, specification or other basis for testing that will be used:

- to drive the development of test procedures,
- as a target for the amount of testing to be done, and
- as a metric to calculate progress towards that target.

4.2 Usage

Coverage items can be derived most formally using test design techniques (see BS for more examples) such as:

- equivalence partitions and boundary values (black box or functional)
- statements and branches in code (white box or structural).

Note that all the formal test design techniques use a model (e.g. a control flowgraph, a state transition diagram etc.), and particular attributes of that model (e.g. branches, states, transitions etc.) are identified and used as the coverage items. The requirements, other specification or code is analysed using the selected model to identify coverage items. The conditions generated could be logical (e.g. X>10) or actual values (e.g. X=14) required to cover a coverage item (a partition, boundary, statement or branch). Conditions are then packaged up into test cases and/or test procedures. An illustrative sketch of this derivation appears after section 4.3 below.

4.3 Custom Coverage

Coverage items need not be defined in a standard or textbook. Modern requirements and design modelling techniques, such as the UML models, do not have formally defined test design techniques. However, if these models are objective and systematic, it is reasonable to define one's own test models and coverage items. Any model that lists or diagrams elements that can be defined and counted can be used as the basis for the definition of coverage items and a coverage target. For example, a graphical model that includes blobs and arrows could be used to define two basic coverage targets:

- all-blobs coverage
- all-arrows coverage

and so on. Any modelling technique that tabulates or itemises requirements elements in narrative text could have an associated coverage technique. For example, textual use cases normally define a main success scenario and a series of exceptions where the state or input conditions of the use case vary and the behaviour of the system differs accordingly. Simple use-case coverage could represent coverage of the main success scenario plus every exception identified in the use case description. Extensions to that coverage target could add all equivalence partitions or boundary values included within those exceptions.

Other models represent the flow of control in business processes. In the same way that software can be modelled using flowchart or control-flow diagrams, and coverage items derived from them, business process-flow diagrams can be used to derive coverage measures. Processes, branches and paths can all be used as coverage items.
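As a concrete illustration of deriving coverage items with the formal techniques mentioned in section 4.2, the sketch below applies equivalence partitioning and boundary value analysis to a single numeric input and reports coverage as a percentage of items exercised, in line with the definition above. The input range, the test values and the helper functions are assumptions made purely for illustration.

```python
# Illustrative sketch: deriving coverage items for a hypothetical numeric input
# with a valid range of 1..100, using equivalence partitioning (EP) and
# boundary value analysis (BV). Each partition or boundary value is one coverage item.

LOWER, UPPER = 1, 100   # hypothetical valid range for the input

# Equivalence partitions: each partition is a coverage item, exercised if any
# test value falls inside it.
partitions = {
    "below range (invalid)": lambda x: x < LOWER,
    "within range (valid)":  lambda x: LOWER <= x <= UPPER,
    "above range (invalid)": lambda x: x > UPPER,
}

# Boundary values: each specific value is a coverage item.
boundary_values = {LOWER - 1, LOWER, LOWER + 1, UPPER - 1, UPPER, UPPER + 1}

def ep_coverage(test_values):
    """Percentage of equivalence partitions exercised by the test values."""
    hit = [name for name, member in partitions.items() if any(member(v) for v in test_values)]
    return 100.0 * len(hit) / len(partitions)

def bv_coverage(test_values):
    """Percentage of boundary values exercised by the test values."""
    return 100.0 * len(boundary_values & set(test_values)) / len(boundary_values)

if __name__ == "__main__":
    executed = [0, 1, 50, 100]   # hypothetical values used by the test cases
    print(f"EP coverage: {ep_coverage(executed):.0f}%")   # 2 of 3 partitions exercised
    print(f"BV coverage: {bv_coverage(executed):.0f}%")   # 3 of 6 boundary values exercised
```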

4.4 Selecting a Coverage Target

Test models and coverage targets provide an objective, measurable method of generating a set of evenly distributed test conditions that cover the requirements, specification or code. Different techniques generate different numbers of tests from the same baseline. In some cases, test techniques subsume others; for example, in the case of numeric ranges, boundary value analysis subsumes equivalence partitioning, implying that 100% boundary value coverage guarantees 100% equivalence partition coverage, and so on.

There is very little evidence to demonstrate the comparative effectiveness or value for money of the established techniques. Objective comparison of effectiveness or value for money is not currently possible, so the selection of one technique over another is subjective, though influenced by cost: some techniques obviously generate more tests than others from the same specification or code sample, and are more expensive as a consequence. Since the selection of a coverage target is somewhat subjective, custom coverage targets are no more or less acceptable than the established ones.

4.5 Coverage in Contract Schedules

In documenting the approach to coverage, the CONTRACTOR should aim to provide evidence of understanding and obligations along the following lines:

1. The CONTRACTOR should understand the purpose, value and usage of a coverage target to specify a quantifiable amount of testing, and the ability to measure progress towards that target.
2. The CONTRACTOR should propose that all testing based on processes, documentation, models, code or other project artefacts should have associated coverage targets.
3. The achievement of coverage targets forms a key element of the exit criteria of test phases.
4. The CONTRACTOR should propose a mechanism for deciding what coverage target is appropriate for all tests.
5. The coverage target selection process will be influenced by the type of baseline (process, document, code etc.), the availability/quality of that baseline, the associated product risk, the need to provide evidence (and confidence) to the AUTHORITY, the capability of the CONTRACTOR and/or AUTHORITY staff, test environments and so on.
6. The selection of coverage targets should be agreed with the AUTHORITY.
7. 100% coverage targets are not always achievable or economic, but sub-100% targets would need to be justified by the CONTRACTOR.
8. Selected coverage items would form the basis of progress measurement and a significant amount of test reporting.

An illustrative sketch of checking agreed coverage targets as part of exit criteria follows this list.
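The sketch below is a minimal illustration of how agreed coverage targets might be checked as part of a test phase's exit criteria. The coverage item types, target figures and achieved figures are hypothetical; in practice they would come from the contract schedule and the CONTRACTOR's test reporting.

```python
# Illustrative sketch: checking agreed coverage targets as part of a test phase's
# exit criteria. All figures below are hypothetical.

# Agreed targets per coverage item type (per the guidance above, sub-100% targets
# would need to be justified by the CONTRACTOR).
agreed_targets = {
    "use-case scenario coverage": 100.0,
    "equivalence partition coverage": 100.0,
    "branch coverage": 85.0,
}

# Coverage actually achieved at the end of the phase (hypothetical figures).
achieved = {
    "use-case scenario coverage": 100.0,
    "equivalence partition coverage": 96.0,
    "branch coverage": 88.0,
}

def exit_criteria_met(targets, results):
    """Return (met, shortfalls): the coverage element of the exit criteria passes
    only if every agreed target has been reached."""
    shortfalls = {item: (results.get(item, 0.0), target)
                  for item, target in targets.items()
                  if results.get(item, 0.0) < target}
    return not shortfalls, shortfalls

if __name__ == "__main__":
    met, shortfalls = exit_criteria_met(agreed_targets, achieved)
    if met:
        print("Coverage exit criteria met")
    else:
        for item, (got, want) in shortfalls.items():
            print(f"Shortfall: {item} achieved {got:.0f}% against target {want:.0f}%")
```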

5 CONTRACTOR NEGOTIATIONS - RISKS

Risk: CONTRACTOR fails to understand the approach and pays it lip service, without understanding the implications.
Impact:
- CONTRACTOR proposes a test approach that is unacceptable
- CONTRACTOR testing is poor and/or may overrun and cause delivery slippage
- CONTRACTOR resists pressure to change their plan and commit more resource to testing
- CONTRACTOR subsequently argues for a change of scope and additional budget to proceed
Likelihood: Med
Mitigating Action(s):
- … the Test Approach
- AUTHORITY to gain verbal assurances that the CONTRACTOR has understood the approach

Risk: CONTRACTOR fails to take the approach seriously, assumes it will not be contractual and that it will not be held to it.
Impact:
- CONTRACTOR proposes a test approach that is unacceptable
Likelihood: Med
Mitigating Action(s):
- … the Test Approach
- AUTHORITY to gain verbal assurances that the CONTRACTOR has understood the approach

Risk: CONTRACTOR feels the approach is too strict in terms of the gates it will have to go through and the penalties it will face for non-compliance.
Impact:
- CONTRACTOR cannot agree to the AUTHORITY test approach
- CONTRACTOR proposes a test approach that is unacceptable
Likelihood: Low
Mitigating Action(s):
- AUTHORITY to explain the reasoning/justification of the approach

Risk: CONTRACTOR fails to buy in to the approach and tries to dilute it on the grounds that its existing methods are more appropriate, etc.
Impact:
- Time is lost in the negotiation
- CONTRACTOR cannot agree to the AUTHORITY test approach
Likelihood: Med
Mitigating Action(s):
- … the Test Approach
- AUTHORITY to explain the reasoning/justification of the approach
- AUTHORITY to gain verbal assurances that the CONTRACTOR has understood the approach

Risk: CONTRACTOR signs up to the approach but fails to document the detail in a way that can be enforced under contract.
Impact:
- CONTRACTOR proposes a test approach that is unacceptable
- Time is lost in the negotiation
Likelihood: Med
Mitigating Action(s):
- … the Test Approach
- AUTHORITY to explain the reasoning/justification of the approach
- AUTHORITY to incorporate the AUTHORITY Test Approach in the contract and to define performance indicators to be used to enforce it

Risk: CONTRACTOR fails to demonstrate how it will apply the approach consistently across its entire team, which must include its sub-contractors.
Impact:
- CONTRACTOR BAFO (Best and Final Offer) loses credibility and may prove unacceptable
Likelihood: Low
Mitigating Action(s):
- … the Test Approach

Risk: CONTRACTOR underestimates the effort required to implement the approach.
Impact:
- CONTRACTOR testing is poor and/or may overrun and cause delivery slippage
- CONTRACTOR resists pressure to change their plan and commit more resource to testing
- CONTRACTOR subsequently argues for a change of scope and additional budget to proceed
Likelihood: Med
Mitigating Action(s):
- … the Test Approach
- AUTHORITY to explain the reasoning/justification of the approach
- AUTHORITY to incorporate the AUTHORITY Test Approach in the contract and to define performance indicators to be used to enforce it

Risk: CONTRACTOR overestimates its ability to deliver against the approach.
Impact:
- CONTRACTOR testing is poor and/or may overrun and cause delivery slippage
- CONTRACTOR resists pressure to change their plan and commit more resource to testing
- CONTRACTOR subsequently argues for a change of scope and additional budget to proceed
Likelihood: Med
Mitigating Action(s):
- … the Test Approach
- AUTHORITY to explain the reasoning/justification of the approach
- AUTHORITY to incorporate the AUTHORITY Test Approach in the contract and to define performance indicators to be used to enforce it

Risk: CONTRACTOR is unable to commit to activities covered by the approach (and their timing) in an informed rather than a generic way; e.g. discussions on migration could impact on testing and vice versa.
Impact:
- Proposed CONTRACTOR approach may not take account of other factors in the project
- CONTRACTOR testing may be under-specified and/or underestimated
Mitigating Action(s):
- … the Test Approach
- AUTHORITY to explain the reasoning/justification of the approach
- AUTHORITY to incorporate the AUTHORITY Test Approach in the contract and to define performance indicators to be used to enforce it
- AUTHORITY to monitor other factors in the project and their potential impact on the test approach

Risk: CONTRACTOR testing experts do not attend the discussions.
Impact:
- CONTRACTOR proposes a test approach that is unacceptable
- CONTRACTOR testing is poor and/or may overrun and cause delivery slippage
- CONTRACTOR resists pressure to change their plan and commit more resource to testing
- CONTRACTOR subsequently argues for a change of scope and additional budget to proceed
Likelihood: Med
Mitigating Action(s):
- … the Test Approach BEFORE negotiations begin

6 CONTRACTOR EVALUATION CRITERIA

Each evaluation criterion below is accompanied by questions to ask, and is rated on a four-point scale: Intolerable, Tolerable, Good or Very Good. A simple illustrative way of aggregating the ratings is sketched at the end of this section.

Suitability of the proposed test process
- Do the test phases align with sensible stages of development and integration of the system?
- Are the objectives of each test phase appropriate?

Appropriate Testing Philosophy
- Has the CONTRACTOR set out goals of testing that give the AUTHORITY confidence in their overall approach?
- Do their goals align with the AUTHORITY's overall testing assurance objectives?

Auditability of the proposed process
- Are the outputs from each activity traceable to the inputs to that phase?
- Has the CONTRACTOR allowed for specific reviews of outputs in their planning?
- Has the CONTRACTOR allowed for audits of test activities?

Confidence in the technical approach to test environments
- Will the technical test environment specified for each test phase support the test objectives for that phase?

The CONTRACTOR's test phases are well defined
- Do the proposed phases of testing have appropriate objectives and goals/targets?
- Are the entry and exit criteria for each test phase objective, unambiguous and workable?
- Do the arrangements defined in each testing phase give sufficient confidence that the phase objectives will be met?

Management of change
- Is the CONTRACTOR's proposed test process flexible enough to allow for change?
- Will the CONTRACTOR's proposed arrangements for change management provide for sufficient control?
- Will the objectives of each test phase, or the test programme overall, be compromised by the CONTRACTOR's change management arrangements?

Incident Management
- Will the CONTRACTOR's proposed incident management process give the AUTHORITY confidence that anomalies, faults etc. will be rectified, re-tested and regression tested?
- Do the CONTRACTOR's proposed arrangements for incident management provide for sufficient control?
- Will the objectives of each test phase, or the test programme overall, be compromised by the CONTRACTOR's incident management arrangements?

Regression Testing
- Does the CONTRACTOR's regression test policy set out their approach to selecting tests for inclusion in a regression test pack?
- Does the CONTRACTOR incorporate an appropriate impact analysis activity when changes due to rectification or requirements changes occur?
- Does the CONTRACTOR's regression test policy set out their approach to selecting tests for execution in a test phase when change has occurred?

Deliverables
- Will the proposed set of deliverables provide confidence that the scope of testing to be performed in each test phase is appropriate?
- Will the proposed deliverables provide sufficient evidence for an assurance decision to be made?

Adequacy of the depth of testing proposed
- Are the target levels of testing for a phase defined?
- Is the process for arriving at a planned level of testing appropriate and visible to the AUTHORITY?
- Are targets measurable, objective and realistic?

Ability of the CONTRACTOR to manage the scope of testing
- Is the proposed scope of testing accurate and appropriate?
- Has the CONTRACTOR a mechanism/process for defining and controlling the scope of testing?
- Is the scope of each test phase sufficiently well defined for it to be understood by the AUTHORITY?
- Has the CONTRACTOR described the scope of testing in an objective, unambiguous way?

Ability of the CONTRACTOR to manage the testing done by subcontractors
- Has the CONTRACTOR a mechanism/process for defining and controlling the testing done by subcontractors?

Level of involvement of the AUTHORITY in the management of the scope of testing, and in the definition of tests
- … involvement of the AUTHORITY in the scoping of tests?
- … involvement of the AUTHORITY in the witnessing of tests?
- … involvement of the AUTHORITY in the definition of tests?
- … involvement of the AUTHORITY in the allocation of suitable tests to product risks?

Level of involvement of the AUTHORITY in the execution/witnessing and sign-off of tests
- … involvement of the AUTHORITY in the execution/witnessing of tests?
- … involvement of the AUTHORITY in the review and sign-off of tests?

Level of involvement of the AUTHORITY in the management of product risks
- … involvement of the AUTHORITY in the identification of risks?
- … involvement of the AUTHORITY in the allocation of risks to test phases?
- … involvement of the AUTHORITY in the assessment of risks after tests are complete?

Confidence in the CONTRACTOR's commitment to testing
- Has the CONTRACTOR demonstrated their commitment to thorough testing during the planning phase?
- Has the CONTRACTOR demonstrated their commitment to thorough testing during the execution phase (when there may be pressure to cut testing)?

CONTRACTOR's commitment to testing in as realistic an environment as possible
- Do the CONTRACTOR's arrangements for testing in realistic environments give confidence that the objectives for each test phase will be met?

Technical competence of the staff involved in testing
- Do the proposed staff demonstrate a level of competence that gives confidence that the goals of the CONTRACTOR's test phases will be met?

Credibility of the timescales estimated for the phases of testing
- Are the CONTRACTOR's estimates of timescales credible?

Credibility of the resources estimated for the phases of testing
- Are the CONTRACTOR's estimates of resources credible?

Further questions to ask:
- Is the testing approach consistent with the migration approach?
- Do the arrangements for testing fit with the CONTRACTOR's proposed rate of deployment of the system as a whole?
- Is the testing approach flexible enough to allow for rescheduling of system deployment, should delays occur?
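One simple way the ratings might be aggregated is sketched below. The numeric mapping of the four-point scale and the rule that any Intolerable rating fails the evaluation are assumptions made for illustration; they are not prescribed by these guidelines.

```python
# Illustrative sketch: aggregating evaluation ratings on the
# Intolerable / Tolerable / Good / Very Good scale into a single score.
# The numeric mapping and the "any Intolerable fails" rule are assumptions.

RATING_SCORES = {"Intolerable": 0, "Tolerable": 1, "Good": 2, "Very Good": 3}

def evaluate(ratings):
    """Return (total, failed): the total score, plus a flag set if any
    criterion is rated Intolerable."""
    total = sum(RATING_SCORES[r] for r in ratings.values())
    failed = any(r == "Intolerable" for r in ratings.values())
    return total, failed

if __name__ == "__main__":
    # Hypothetical ratings for a subset of the criteria above.
    ratings = {
        "Suitability of the proposed test process": "Good",
        "Incident Management": "Very Good",
        "Regression Testing": "Tolerable",
    }
    total, failed = evaluate(ratings)
    note = " (contains an Intolerable rating)" if failed else ""
    print(f"Score: {total} / {3 * len(ratings)}{note}")
```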