SENG 521 Software Reliability & Software Quality, Chapter 14: SRE Deployment


SENG 521 Software Reliability & Software Quality
Chapter 14: SRE Deployment
Department of Electrical & Computer Engineering, University of Calgary
B.H. Far (far@ucalgary.ca)
http://www.enel.ucalgary.ca/people/far/lectures/seng521

Contents
- Quality in the software development process
- Software Quality System (SQS), Software Quality Assurance (SQA) and Software Reliability Engineering (SRE)
- Quality, test and data plans
- Roles and responsibilities
- Sample quality and test plan
- Best practices of SRE

Quality in Software Development Process
Q. How to include quality concerns in the process?
[Diagram: across the life-cycle phases Requirement & Architecture, Design & Implementation, and Maintenance & Release, quality is addressed by architectural analysis of quality attributes (methods: ATAM, CBAM, etc.), Software Quality Assurance (SQA), Software Reliability Engineering (SRE), and software quality assessment (method: RAM, etc.).]

Section 1: Software Quality System (SQS) and Software Quality Assurance (SQA) programs

What is Reliable Software?
- Reliable software products are those that run correctly and consistently, have fewer remaining defects, handle abnormal situations properly, and need less installation effort.
- The remaining defects should not affect the normal behaviour and use of the software; they will not do any destructive damage to the system or its hardware or software environment; and they are rarely evident to the users.
- Developing reliable software requires:
  - Establishing Software Quality System (SQS) and Software Quality Assurance (SQA) programs
  - Establishing a Software Reliability Engineering (SRE) process

Software Quality System (SQS)
Goals:
- Building quality into the software from the beginning
- Keeping and tracking quality in the software throughout the software life cycle
(Reference: John W. Horch, Practical Guide to Software Quality Management)

SQS Concerns
- Software quality management is the discipline that maximizes the probability that a software system will conform to its requirements, as those requirements are perceived by the user on an ongoing basis.

Software Quality Assurance (SQA)
- Software Quality Assurance (SQA) is a planned and systematic approach to ensure that both the software process and the software product conform to the established standards, processes, and procedures.
- The goals of SQA are to improve software quality by monitoring both the software and the development process to ensure full compliance with the established standards and procedures.
- Steps to establish an SQA program:
  - Get top management's agreement on its goal and support.
  - Identify SQA issues, write the SQA plan, establish standards and SQA functions, implement the SQA plan, and evaluate the SQA program.

SRE: Process & Plans
[Diagram: the SRE process steps Define Necessary Reliability, Develop Operational Profile, Prepare for Test, Execute Test, and Apply Failure Data run alongside the Requirement & Architecture and Design & Implementation phases; a quality plan with its test plans and data plans accompanies them over time.]
- There may be many test and data (measurement) plans for various parts of the same project.

Defect Handling: Without & With SQS
[Diagram: a defect reporting, tracking, and closure procedure built around a defect reports database; SCN: software change notice, STR: software trouble report.]
(Reference: John W. Horch, Practical Guide to Software Quality Management)

SRE: Who is Involved?
Typical roles:
- Senior management
- Test coordinator (manager)
- Data coordinator (manager)
- Customer or user

SRE: Management Concerns
- Perception and specification of a customer's real needs.
- Translation of the specification into a conforming design.
- Maintaining conformity throughout the development processes.
- Product and sub-product demonstrations which provide convincing indications that the product and project meet requirements.
- Ensuring that the tests and demonstrations are designed and controlled so as to be both achievable and manageable.

Roles & Responsibilities /1
- Test Coordinator (Manager): The test coordinator is expected to ensure that every specific statement of intent in the product requirement, specification and design is matched by a well designed (cost-effective, convincing, self-reporting, etc.) test, measurement or demonstration.
- Data Coordinator (Manager): The data coordinator ensures that the physical and administrative structures for data collection exist and are documented in the quality plan, receives and validates the data during development, and through analysis and communication ensures that the meaning of the information is known to all, in time, for effective application.

Roles & Responsibilities /2
- Customer or User:
  - Actively encouraging the making and following of detailed quality plans for the products and projects.
  - Requiring access to previous quality plans and their recorded outcomes before accepting the figures and methods quoted in the new plan.
  - Enquiring into the sources and validity of synthetics and formulae used in estimating and planning.
  - Appointing appropriate personnel to provide authoritative responses to queries from the developer and a managed interface to the developer.
  - Receiving and reviewing reports of significant audits, reviews, tests and demonstrations.
  - Making any queries and objections in detail and in writing, at the earliest possible time.

Quality Plans /1
- The most promising mechanisms for gaining and improving predictability and controllability of software qualities are the quality plan and its subsidiary documents, including test plans and data (measurement) plans.
- The creation of the quality plan can be instrumental in raising project effectiveness and in preventing expensive and time-consuming misunderstandings during the project, and at release/acceptance time.
[Diagram: Quality Plan with subsidiary Test Plan and Data Plan.]

Quality Plan /2
The quality plan and quality record provide guidelines for carrying out and controlling the following:
- Requirement and specification management
- Development processes
- Documentation management
- Design evaluation
- Product testing (SRE related)
- Data collection and interpretation activities
- Acceptance and release processes

Quality Plan /3
- Quality planning should be done at the very earliest point in a project, preferably before a final decision is made on feasibility, and before a software development contract is signed.
- The quality plan should be devised and agreed between all the concerned parties: senior management, software development management (both administrative and technical), the software development team, customers, and any involved general support functions such as resource management and company-wide quality management.

Data (Measurement) Plan
The data (measurement) plan prescribes:
- What should be measured and recorded during a project;
- How it should be checked and collated;
- How it should be interpreted and applied.
Data may be collected in several ways, within the specific project and beyond it. Ideally, there should be a higher level of data collection and application into which project data is fed. (A minimal record sketch appears after the coordination slides below.)

Test Plan /1
- The purpose of the test plan is to ensure that all testing activities (including those used for controlling the process of development, and in indicating the progress of the project) are expected, are manageable and are managed.
- Test plans are created as a subsection or as an associated document of the quality plan.
- Test plans become progressively more detailed and expanded during a project.
- Each test plan defines its own objectives and scope, and the means and methods by which the objectives are expected to be met.

Test Plan /2
- For the software product, the test plan is usually restricted by the scope of the test: certification, feature and load test.
- The test plan predicts the resources and means required to reach the required levels of assurance about the end products, and the scheduling of all testing, measuring and demonstration activities.
- Tests, measurements and demonstrations are used to establish that the software product satisfies the requirements document, and that each process during development is carried out correctly and results in acceptable outcomes.

Effective Coordination
- Coordination among the quality plan, test plans and data plans is necessary.
- Effective coordination can only be introduced and practiced if the environment and supporting structures exist.
- To make the coordination work, all those involved must be prepared to question and evaluate every aspect of what they are doing, and must be ready both to give and to accept suggestions and information outside their normal field of interest and authority.

Effective Coordination /2
- Serial coordination: the application of information from one phase or process in a later and different phase or process.
- Parallel coordination: the application of information from one instance of an activity or process to other instances of the same process, whether in the same project or in others in progress.

Coordination of Quality Plans
The coordination of quality plans includes:
- Selective reuse of methods and procedures (to reduce reinvention).
- Harmonization of goals and measurements.
- Provision of support tools and services.
- Extraction from project and product records of indications of what works and what should be avoided.
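As a toy illustration of what a data (measurement) plan might prescribe for failure data, here is a minimal sketch of a failure record carrying the fields the later SRE slides rely on (severity class, execution time at failure, operation in use), plus the kind of validation a data coordinator would apply before the record enters analysis. The field names and class scheme are hypothetical, not taken from the course material.

    from dataclasses import dataclass

    SEVERITY_CLASSES = (1, 2, 3, 4)  # assumption: 1 = most severe; 3-4 classes usually suffice

    @dataclass
    class FailureRecord:
        """One entry in the project's failure log, as a data plan might prescribe."""
        failure_id: str
        severity: int          # severity class, defined with the customer
        exec_hours: float      # cumulative execution time at failure
        operation: str         # operation (from the operational profile) in use

        def validate(self):
            # The data coordinator's checks before the record is collated.
            assert self.severity in SEVERITY_CLASSES, "unknown severity class"
            assert self.exec_hours >= 0, "execution time cannot be negative"

    r = FailureRecord("F-0042", severity=2, exec_hours=132.5, operation="generate_report")
    r.validate()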

Coordination of Data Plans /1
Coordinating (or sharing) data plans between projects: a collection of data which covers more than one project and several different development routes provides opportunities for:
- Comparing the means of production (and thus supporting rational choices between them), as well as allowing
- Selection of standard expectations for performance which can be used in project planning and project control.

Coordination of Data Plans /2
Coordinating (or sharing) data between organizations:
- Providing a wider base for evaluation.
- Leading to a more general view of what is comprised in good practice.
- Leading to a more general view of the connections between working methods and their results.

Coordination of Data Plans /3
Coordination of data plans improves the quantity and quality of data in the sense of:
- Estimation and re-estimation of projects, in both administrative and technical terms;
- Management of the project, its products, processes and resources;
- Selective re-use of methods and procedures to reduce reinvention, and to benefit from experience;
- Harmonization of goals and measurements across projects;
- Rationalization of the provision of support tools and services.

Coordination of Test Plans /1
- Uses in the management and planning of resources and environments.
- Role of test plans in ensuring the applicability and testability of the design and the code.
- Test plans used as a guide for those managing testing.
- Test plans used as an input to quality assurance and quality control processes.
- Use of test results to decide on an appropriate course of action following a testing activity.

Coordination of Test Plans /2
- Test plans and test results used as an input to project management.
- Reuse of the format of the test plan from one project to another.
- Use of test results to identify unusual modules.
- Use of test results to assess the effectiveness of testing procedures.

Section 2: Elements of Quality & Test Plan

Sample SQS Plan /1
1 Purpose
2 Reference Documents
3 Management
3.1 Organization
3.2 Tasks
3.3 Responsibilities

Sample SQS Plan (cont'd) /2
4 Documentation
4.1 Purpose
4.2 Minimum Documentation
4.2.1 Software Requirements Specification
4.2.2 Software Design Description
4.2.3 Software Verification and Validation Plan
4.2.4 Software Verification and Validation Report
4.2.5 User Documentation
4.2.6 Configuration Management Plan
4.3 Other Documentation

Sample SQS Plan (cont'd) /3
5 Standards, Practices, Conventions, and Metrics
5.1 Purpose
5.2 Documentation, Logic, Coding, and Commentary Standards and Conventions
5.3 Testing Standards, Conventions, and Practices
5.4 Metrics

Sample SQS Plan (cont'd) /4
6 Reviews and Audits
6.1 Purpose
6.2 Minimum Requirements
6.2.1 Software Requirements Review
6.2.2 Preliminary Design Review
6.2.3 Critical Design Review
6.2.4 Software Verification and Validation Review
6.2.5 Functional Audit
6.2.6 Physical Audit
6.2.7 In-process Reviews
6.2.8 Managerial Reviews
6.2.9 Configuration Management Plan Review
6.2.10 Postmortem Review
6.3 Other Reviews and Audits

Sample SQS Plan (cont'd) /5
7 Test
8 Problem Reporting and Corrective Action
8.1 Practices and Procedures
8.2 Organizational Responsibilities
9 Tools, Techniques, and Methodologies
10 Code Control
11 Media Control
12 Supplier Control
13 Records Collection, Maintenance, and Retention
14 Training
15 Risk Management
(Based on IEEE Standard 730.1-1989)

Sample Test Plan /1
1 Test Plan Identifier
2 Introduction
2.1 Objectives
2.2 Background
2.3 Scope
2.4 References

Sample Test Plan (cont'd) /2
3 Test Items
3.1 Program Modules
3.2 Job Control Procedures
3.3 User Procedures
3.4 Operator Procedures
4 Features To Be Tested
5 Features Not To Be Tested

Sample Test Plan (cont'd) /3
6 Approach
6.1 Conversion Testing
6.2 Job Stream Testing
6.3 Interface Testing
6.4 Security Testing
6.5 Recovery Testing
6.6 Performance Testing
6.7 Regression Testing
6.8 Comprehensiveness
6.9 Constraints

Sample Test Plan (cont'd) /4
7 Item Pass/Fail Criteria
8 Suspension Criteria and Resumption Requirements
8.1 Suspension Criteria
8.2 Resumption Requirements
9 Test Deliverables
10 Testing Tasks

Sample Test Plan (cont'd) /5
11 Environmental Needs
11.1 Hardware
11.2 Software
11.3 Security
11.4 Tools
11.5 Publications
12 Responsibilities
12.1 Test Group
12.2 User Department
12.3 Development Project Group

Sample Test Plan (cont'd) /6
13 Staffing and Training Needs
13.1 Staffing
13.2 Training
14 Schedule
15 Risks and Contingencies
16 Approvals
(Based on IEEE Standard 829-1983)

Section 3: Best Practices of SRE

Practice of SRE /1
- The practice of SRE provides the software engineer or manager the means to predict, estimate, and measure the rate of failure occurrences in software.
- Using SRE in the context of software engineering, one can:
  - Analyze, manage, and improve the reliability of software products.
  - Balance customer needs for competitive price, timely delivery, and a reliable product.
  - Determine when the software is good enough to release to customers, minimizing the risks of releasing software with serious problems.
  - Avoid excessive time to market due to over-testing.

Practice of SRE /2
The practice of SRE may be summarized in six steps:
1) Quantify product usage by specifying how frequently customers will use various features and how frequently various environmental conditions that influence processing will occur.
2) Define quality quantitatively with the customers by defining failures and failure severities and by specifying the balance among the key quality objectives of reliability, delivery date, and cost.
3) Employ product usage data and quality objectives to guide design and implementation of the product and to manage resources to maximize productivity (i.e., customer satisfaction per unit cost).
4) Measure reliability of reused software and acquired software components as an acceptance requirement.
5) Track reliability and use this information to guide product release.
6) Monitor reliability in field operation and use the results to guide new feature introduction, as well as product and process improvement.

Incremental Implementation
Most projects implement the SRE activities incrementally, following a typical implementation sequence.

Implementing SRE /1
Feasibility and requirements phase:
- Define and classify failures, i.e., failure severity classes
- Identify customer reliability needs
- Determine operational profile
- Conduct trade-off studies (among reliability, time, cost, people, technology)
- Set reliability objectives

Implementing SRE /2
Design and implementation phase:
- Allocate reliability among components, acquired software, hardware and other systems
- Engineer to meet reliability objectives
- Focus resources based on operational profile
- Measure reliability of acquired software, hardware and other systems, i.e., certification test
- Manage fault introduction and propagation

Implementing SRE /3
System test and field trial phase:
- Determine the operational profile used for testing, i.e., the test profile (a sketch of profile-driven test selection follows)
- Conduct reliability growth testing
- Track testing progress
- Project additional testing needed
- Certify that reliability objectives and release criteria are met
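To make the operational profile concrete: the sketch below draws test cases in proportion to operation occurrence probabilities, so the most heavily used operations are exercised most, as the test profile intends. The operation names and probabilities are hypothetical; this is a minimal sketch, not the course's prescribed tooling.

    import random

    # Hypothetical operational profile: operation -> occurrence probability (sums to 1).
    operational_profile = {
        "process_transaction": 0.60,
        "generate_report":     0.25,
        "administer_accounts": 0.10,
        "recover_from_crash":  0.05,
    }

    def select_operations(profile, n_tests, seed=1):
        """Draw n_tests operations with frequencies matching the profile."""
        rng = random.Random(seed)
        ops = list(profile)
        weights = [profile[op] for op in ops]
        return rng.choices(ops, weights=weights, k=n_tests)

    # Allocate 200 test runs; heavily used operations get tested most often.
    runs = select_operations(operational_profile, 200)
    print(runs[:5])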

Implementing SRE /4
Post delivery and maintenance:
- Project post-release staff needs
- Monitor field reliability vs. objectives
- Track customer satisfaction with reliability
- Time new feature introduction by monitoring reliability
- Guide product and process improvement with reliability measures

Feasibility Phase
- Activity 1: Define and classify failures
  - Define failure from the customer's perspective
  - Group identified failures into severity classes from the customer's perspective
  - Usually 3-4 classes are sufficient
- Activity 2: Identify customer reliability needs
  - What is the level of reliability that the customer needs?
  - Who are the rival companies, what are the rival products, and what is their reliability?
- Activity 3: Determine operational profile
  - Based on the tasks performed and the environmental factors

Requirements Phase
- Activity 4: Conduct trade-off studies
  - Reliability and functionality
  - Reliability, cost, delivery date, technology, team
- Activity 5: Set reliability objectives based on
  - Explicit requirement statements from a request for proposal or standard document
  - Customer satisfaction with a previous release or similar product
  - Capabilities of competition
  - Trade-offs with performance, delivery date and cost
  - Warranty, technology capabilities

Design Phase
- Activity 6: Allocate reliability among acquired software, components, hardware and other systems
  - Determine which systems and components are involved and how they affect the overall system reliability
- Activity 7: Engineer to meet reliability objectives
  - Plan using fault tolerance, fault removal and fault avoidance
- Activity 8: Focus resources based on operational profile
  - The operational profile guides the designer to focus on features that are supposed to be more critical
  - Develop the more critical functions first and in more detail

Implementation Phase
- Activity 9: Measure reliability of acquired software, hardware and other systems
  - Certification test using a reliability demonstration chart (a demonstration-chart sketch appears after the benefits slides)
- Activity 10: Manage fault introduction and propagation
  - Practicing a development methodology; constructing a modular system; employing reuse; conducting inspection and review; controlling change

System Test Phase
- Activity 11: Determine the operational profile used for testing
  - Decide upon critical operations
  - Decide upon the need for multiple operational profiles
- Activity 12: Conduct reliability growth testing
- Activity 13: Track testing progress and certify that reliability objectives are met
  - Conduct feature test, regression test, and performance and load test
  - Conduct reliability growth test
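As a rough illustration of Activities 12 and 13, the sketch below fits Musa's basic execution time model, in which failure intensity declines linearly with the cumulative number of failures experienced, lambda(mu) = lambda0 * (1 - mu/nu0). The failure counts, the simple least-squares fit, and the objective value are illustrative assumptions, not the course's prescribed estimator.

    import numpy as np

    # Assumed reliability growth data: failures observed in successive
    # 10-hour test intervals under operational-profile-driven testing.
    interval_hours = 10.0
    failures_per_interval = np.array([25, 21, 18, 14, 12, 9, 7, 6, 4, 3])

    # Basic execution time model: lambda(mu) = lambda0 * (1 - mu/nu0),
    # i.e., failure intensity falls linearly with cumulative failures mu.
    mu = np.cumsum(failures_per_interval) - failures_per_interval / 2.0  # interval midpoints
    lam = failures_per_interval / interval_hours                          # observed intensity

    slope, intercept = np.polyfit(mu, lam, 1)  # lam ~ intercept + slope * mu
    lambda0 = intercept                         # estimated initial failure intensity
    nu0 = -intercept / slope                    # estimated total expected failures

    objective = 0.05  # failure intensity objective (failures/hour), an assumption
    mu_needed = nu0 * (1 - objective / lambda0)  # cumulative failures at which objective is met
    print(f"lambda0 ~ {lambda0:.2f}/h, nu0 ~ {nu0:.0f} failures")
    print(f"objective met after ~{mu_needed:.0f} failures are experienced and resolved")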

Field Trial Phase
- Activity 14: Project additional testing needed
  - Check accuracy of test: time and coverage
  - Plan for changes in test strategies and methods
- Activity 15: Certify that reliability objectives and release criteria are met (see the demonstration-chart sketch below)
  - Check accuracy of data collection
  - Check whether the test operational profile reflects the field operational profile
  - Check that the customer's definition of failure matches what was defined for testing the product

Post Delivery Phase /1
- Activity 16: Project post-release staff needs
  - Customer's staff for system recovery; supplier's staff to handle customer-reported failures and to remove faults
- Activity 17: Monitor field reliability vs. objectives
  - Collect post-release failure data systematically
- Activity 18: Track customer satisfaction with reliability
  - Survey product features with a sample customer set

Post Delivery Phase /2
- Activity 19: Time new feature introduction by monitoring reliability
  - New features bring new defects. Add new features desired by the customers if they can be managed without sacrificing the reliability of the whole system
- Activity 20: Guide product and process improvement with reliability measures
  - Root-cause analysis for the faults
  - Why the fault was not detected earlier in the development phase, and what should be done to reduce the probability of introducing similar faults

Feasibility Phase: Benefits
- Activities 1 and 2: Define and classify failures; identify customer reliability needs
  - Benefits: Release software at a time that meets customer reliability needs but is as early and inexpensive as possible
- Activity 3: Determine operational profiles
  - Benefits: Speed up time to market by saving test time, reduce test cost, have a quantitative measure for reliability

Requirements Phase: Benefits
- Activity 4: Conduct trade-off studies
  - Benefits: Increase market share by providing a software product that better matches customer needs
- Activity 5: Set reliability objectives
  - Benefits: Release software at a time that meets customer reliability needs but is as early and inexpensive as possible

Design Phase: Benefits
- Activity 6: Allocate reliability among acquired software, components, hardware and other systems
  - Benefits: Reduce development time and cost by striking a better balance among components
- Activity 7: Engineer to meet reliability objectives
  - Benefits: Reduce development time and cost with better design
- Activity 8: Focus resources based on operational profile
  - Benefits: Speed up time to market by guiding development priorities, reduce development cost
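Certification (Activities 9 and 15) is commonly visualized with a reliability demonstration chart, whose accept/continue/reject boundaries come from Wald's sequential probability ratio test on the failure intensity objective. The sketch below computes those boundaries; the discrimination ratio, risk levels, objective, and sample failure point are all assumptions for illustration.

    import math

    alpha, beta = 0.10, 0.10   # supplier and consumer risk (assumed)
    gamma = 2.0                # discrimination ratio (assumed)
    lambda_obj = 0.05          # failure intensity objective, failures/hour (assumed)

    ln_A = math.log((1 - beta) / alpha)   # reject-boundary constant
    ln_B = math.log(beta / (1 - alpha))   # accept-boundary constant

    def verdict(n, t_hours):
        """Sequential decision after failure n at cumulative test time t_hours."""
        tau = lambda_obj * t_hours                              # normalized failure time
        accept_at = (n * math.log(gamma) - ln_B) / (gamma - 1)  # accept if tau at/above this
        reject_at = (n * math.log(gamma) - ln_A) / (gamma - 1)  # reject if tau at/below this
        if tau >= accept_at:
            return "accept"
        if tau <= reject_at:
            return "reject"
        return "continue testing"

    # Example: failure 3 observed after 180 hours of certification test.
    print(verdict(3, 180.0))  # tau = 9.0 falls in the accept region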

Implementation Phase: Benefits
- Activity 9: Measure reliability of acquired software, hardware and other systems
  - Benefits: Reduce risks to reliability, schedule, and cost from unknown software and systems
- Activity 10: Manage fault introduction and propagation
  - Benefits: Maximize cost-effectiveness of reliability improvement

System Test Phase: Benefits
- Activity 11: Determine the operational profile used for testing
  - Benefits: Reduce the chance of critical operations going unattended, speed up time to market by saving test time, reduce test cost
- Activity 12: Conduct reliability growth testing
  - Benefits: Determine how the product reliability is improving
- Activity 13: Track testing progress
  - Benefits: Know exactly what reliability the customer would experience at different points in time if the software were released at those points

Field Trial Phase: Benefits
- Activity 14: Project additional testing needed
  - Benefits: Planning tests ahead of time, when the reliability measure is not satisfactory, will reduce the time for integration and release
- Activity 15: Certify that reliability objectives are met
  - Benefits: Release software at a time that meets customer reliability needs but is as early and inexpensive as possible; verify that the customer reliability needs are actually met

Post Delivery Phase: Benefits
- Activity 16: Project post-release staff needs
  - Benefits: Reduce post-release costs with better planning
- Activities 17-18: Monitor field reliability vs. objectives; track customer satisfaction with reliability
  - Benefits: Maximize the likelihood of pleasing the customer with reliability
- Activity 19: Time new feature introduction by monitoring reliability
  - Benefits: Ensure that software continues to meet customer reliability needs in the field
- Activity 20: Guide product and process improvement with reliability measures
  - Benefits: Maximize cost-effectiveness of the product and process improvements selected

Example: Project Additional Testing Needed
A test team runs tests for a new software project. There are 12 planned tests per day. Thirteen days into the testing, progress lagged what had been projected. The following table depicts the data:

Date      Planned  Completed
Dec. 1       12       13
Dec. 2       12       11
Dec. 3       12       11
Dec. 4       12       12
Dec. 5       12        8
Dec. 6       12       11
Dec. 7       12       10
Dec. 8       12       11
Dec. 9       12       11
Dec. 10      12       16
Dec. 11      12       10
Dec. 12      12        3
Dec. 13      12        7

Example (cont'd)
There were 5 testers (A through E) assigned to this project part-time. The table below shows, for each day, how many testers were available to do testing and the number of tests they completed.

Date      Tester-days  Completed tests
Dec. 1         2            13
Dec. 2         2            11
Dec. 3         2            11
Dec. 4         2            12
Dec. 5         2             8
Dec. 6         3            11
Dec. 7         3            10
Dec. 8         3            11
Dec. 9         3            11
Dec. 10        4            16
Dec. 11        3            10
Dec. 12        3             3
Dec. 13        1             7
Total         33           134

Example (cont'd)
Calculate the average number of tests that a tester completes per day:
- Total tests executed: 134
- Total tester-days: 33
- Average tests completed per tester-day: 134 / 33 = 4.06

Calculate test efficiency:
- Total tests planned: 12 × 13 = 156
- Total tests executed: 134
- Test efficiency: 134 / 156 = 0.858, or about 86%

Example (cont'd)
Assume that the current date is Dec. 13th and currently one tester is assigned to this project. We want to bring the test execution back on plan in the next 10 working days. How many testers do we need to hire for this project, assuming that the plan for the next 10 days is the execution of 12 tests per day?
- In 10 working days, the team needs to complete 10 × 12 = 120 tests to match the planned rate.
- Test execution is currently 156 − 134 = 22 tests behind the goal. This means 120 + 22 = 142 tests to accomplish in 10 days.
- Using the average rate of about 4 tests per day calculated above, 3 testers would complete only 120 tests in that time (3 testers × 4 tests/day × 10 days = 120), which is less than what is needed. However, 4 testers can complete 160 tests (4 testers × 4 tests/day × 10 days), which is a bit above the need.
- Therefore 4 − 1 = 3 more testers are to be hired for this project.

Existing vs. New Projects
- There is no essential difference between new and existing projects in applying SRE for the first time. However, determining the failure intensity objective and the operational profile is easier for existing projects.
- Most of the SRE activities will require only small updates after they have been completed once; e.g., the operational profile should only be updated for the new operations added (remember the interaction factor).
- After SRE has been applied to one release, less effort is needed for succeeding releases; e.g., new test cases should be added to the existing ones.
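The slide's arithmetic is easy to script. Below is a minimal sketch reproducing the numbers above (134 tests completed over 33 tester-days against 156 planned, then the staffing projection); the variable names are ours.

    total_planned = 12 * 13      # 12 tests/day planned over 13 days = 156
    total_completed = 134        # from the daily table
    tester_days = 33             # from the tester availability table

    avg_per_tester_day = total_completed / tester_days  # ~4.06 tests per tester-day
    efficiency = total_completed / total_planned         # ~0.858, about 86%

    # Catch-up plan: next 10 working days at 12 planned tests/day, plus the backlog.
    backlog = total_planned - total_completed            # 22 tests behind
    needed = 10 * 12 + backlog                           # 142 tests in 10 days

    # Smallest whole number of testers covering the need at ~4 tests/tester-day.
    rate = 4                                             # rounded rate used on the slide
    testers_required = -(-needed // (rate * 10))         # ceiling division -> 4
    testers_to_hire = testers_required - 1               # one tester is already assigned
    print(avg_per_tester_day, efficiency, testers_required, testers_to_hire)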

Practice Variation Defining an operational profile based on customer modeling. Automatic test cases generation based on frequency of use reflected in operational profile. Employing cleanroom development techniques together with feature and certification i testing. Automatic tracking of reliability ygrowth. SRE for Agile software development. Conclusions Practical implementation of an effective SRE program is a non-trivial task. Mechanisms for collection and analysis of data on software product and process quality must be in place. Fault identification and elimination techniques must be in place. Other organizational i abilities i such as the use of reviews and inspections, reliability based testing, and software process improvement are also necessary for effective SRE. Quality oriented mindset and training are necessary! far@ucalgary.ca 73 far@ucalgary.ca 74 far@ucalgary.ca 75