1 SENG 521 Software Reliability & Software Quality Chapter 14: SRE Deployment Department of Electrical & Computer Engineering, University of Calgary B.H. Far (far@ucalgary.ca)

2 Contents
Quality in the software development process
Software Quality System (SQS); Software Quality Assurance (SQA) and Software Reliability Engineering (SRE)
Quality, test and data plans
Roles and responsibilities
Sample quality and test plan
Best practices of SRE

3 Quality in Software Development Process Q. How do we include quality concerns in the process? Quality activities map onto the life cycle phases (Requirement & Architecture, Design & Implementation, Test & Release, Maintenance): architectural analysis of quality attributes (methods: ATAM, CBAM, etc.) in the requirement and architecture phase; Software Reliability Engineering (SRE) and Software Quality Assurance (SQA) through design, implementation, test and release; and software quality assessment (methods: RAM, etc.) during maintenance.

4 Section 1 Software Quality System (SQS) and Software Quality Assurance (SQA) programs

5 What is Reliable Software? Reliable software products are those that run correctly and consistently, have fewer remaining defects, handle abnormal situations properly, and need less installation effort. The remaining defects should not affect the normal behaviour and use of the software; they will not do any destructive damage to the system and its hardware or software environment; and they are rarely evident to the users. Developing reliable software requires: establishing Software Quality System (SQS) and Software Quality Assurance (SQA) programs; establishing a Software Reliability Engineering (SRE) process.

6 Software Quality System (SQS) Goals: building quality into the software from the beginning; keeping and tracking quality in the software throughout the software life cycle. John W. Horch: Practical Guide to Software Quality Management

7 SQS Concerns Software quality management is the discipline that maximizes the probability that a software system will conform to its requirements, as those requirements are perceived by the user on an ongoing basis. John W. Horch: Practical Guide to Software Quality Management

8 Software Quality Assurance (SQA) Software Quality Assurance (SQA) is a planned and systematic approach to ensure that both the software process and the software product conform to the established standards, processes, and procedures. The goal of SQA is to improve software quality by monitoring both the software and the development process to ensure full compliance with the established standards and procedures. Steps to establish an SQA program: get top management's agreement on its goals and support; identify SQA issues, write the SQA plan, establish standards and SQA functions, implement the SQA plan and evaluate the SQA program.

9 SRE: Process The SRE process activities span the Requirement & Architecture, Design & Implementation, and Test phases: Define Necessary Reliability, Develop Operational Profile, Prepare for Test, Execute Test, Apply Failure Data.

10 SRE: Process & Plans The same SRE activities (Define Necessary Reliability, Develop Operational Profile, Prepare for Test, Execute Test, Apply Failure Data) are accompanied over time by the Quality Plan, Test Plan and Data Plan. There may be many test and data (measurement) plans for various parts of the same project.

11 Defect Handling: Without & With SQS Defect reporting, tracking, and closure procedure; defect reports are kept in a database. SCN: software change notice; STR: software trouble report. John W. Horch: Practical Guide to Software Quality Management

12 SRE: Who is Involved? Typical roles: senior management; test coordinator (manager); data coordinator (manager); customer or user.

13 SRE: Management Concerns Perception and specification of a customer's real needs. Translation of the specification into a conforming design. Maintaining conformity throughout the development processes. Product and sub-product demonstrations which provide convincing indications that the product and project meet requirements. Ensuring that the tests and demonstrations are designed and controlled so as to be both achievable and manageable.

14 Roles & Responsibilities /1 Test Coordinator (Manager): The test coordinator is expected to ensure that every specific statement of intent in the product requirement, specification and design is matched by a well-designed (cost-effective, convincing, self-reporting, etc.) test, measurement or demonstration. Data Coordinator (Manager): The data coordinator ensures that the physical and administrative structures for data collection exist and are documented in the quality plan, receives and validates the data during development, and through analysis and communication ensures that the meaning of the information is known to all, in time, for effective application.

15 Roles & Responsibilities /2 Customer or User: Actively encouraging the making and following of detailed quality plans for the products and projects. Requiring access to previous quality plans and their recorded outcomes before accepting the figures and methods quoted in the new plan. Enquiring into the sources and validity of synthetics and formulae used in estimating and planning. Appointing appropriate personnel to provide authoritative responses to queries from the developer and a managed interface to the developer. Receiving and reviewing reports of significant audits, reviews, tests and demonstrations. Making any queries and objections in detail and in writing, at the earliest possible time.

16 Quality Plans /1 The most promising mechanisms for gaining and improving predictability and controllability of software qualities are the quality plan and its subsidiary documents, including test plans and data (measurement) plans. The creation of the quality plan can be instrumental in raising project effectiveness and in preventing expensive and time-consuming misunderstandings during the project and at release/acceptance time.

17 Quality Plan /2 The quality plan and quality record provide guidelines for carrying out and controlling the following: requirement and specification management; development processes; documentation management; design evaluation; product testing; data collection and interpretation; acceptance and release processes; SRE related activities.

18 Quality Plan /3 Quality planning should be done at the very earliest point in a project, preferably before a final decision is made on feasibility and before a software development contract is signed. The quality plan should be devised and agreed between all the concerned parties: senior management, software development management (both administrative and technical), the software development team, customers, and any involved general support functions such as resource management and company-wide quality management.

19 Data (Measurement) Plan The data (measurement) plan prescribes: what should be measured and recorded during a project; how it should be checked and collated; how it should be interpreted and applied. Data may be collected in several ways, within the specific project and beyond it. Ideally, there should be a higher level of data collection and application into which project data is fed.

20 Test Plan /1 The purpose of the test plan is to ensure that all testing activities (including those used for controlling the process of development and for indicating the progress of the project) are expected, are manageable and are managed. Test plans are created as a subsection or as an associated document of the quality plan. Test plans become progressively more detailed and expanded during a project. Each test plan defines its own objectives and scope, and the means and methods by which the objectives are expected to be met.

21 Test Plan /2 For the software product, the test plan is usually restricted by the scope of the test: certification, feature and load test. The plan predicts the resources and means required to reach the required levels of assurance about the end products, and the scheduling of all testing, measuring and demonstration activities. Tests, measurements and demonstrations are used to establish that the software product satisfies the requirements document, and that each process during development is carried out correctly and results in acceptable outcomes.

22 Effective Coordination Coordination among the quality plan, test plans and data plans is necessary. Effective coordination can only be introduced and practiced if the environment and supporting structures exist. To make the coordination work, all those involved must be prepared to question and evaluate every aspect of what they are doing, and must be ready both to give and to accept suggestions and information outside their normal field of interest and authority.

23 Effective Coordination /2 Serial coordination: serial coordination means the application of information from one phase or process in a later and different phase or process. Parallel coordination: parallel coordination is the application of information from one instance of an activity or process to other instances of the same process, whether in the same project or in others in progress.

24 Coordination of Quality Plans The coordination of quality plans includes: selective reuse of methods and procedures (to reduce reinvention); harmonization of goals and measurements; provision of support tools and services; extraction from project and product records of indications of what works and what should be avoided.

25 Coordination of Data Plans /1 Coordinating (or sharing) data plans between projects: a collection of data which covers more than one project and several different development routes provides opportunities to compare the means of production (and thus supports rational choices between them), as well as allowing selection of standard expectations for performance which can be used in project planning and project control.

26 Coordination of Data Plans /2 Coordinating (or sharing) data between organizations: providing a wider base for evaluation; leading to a more general view of what constitutes good practice; leading to a more general view of connections between working methods and their results.

27 Coordination of Data Plans /3 Coordination of data plans improves the quantity and quality of data, in the sense of: estimation and re-estimation of projects, both in administrative and technical terms; management of the project, its products, processes and resources; selective re-use of methods and procedures to reduce reinvention and to benefit from experience; harmonization of goals and measurements across projects; rationalization of the provision of support tools and services.

28 Coordination of Test Plans /1 Uses in the management and planning of resources and environments. Role of test plans in ensuring the applicability and testability of the design and the code. Test plans used as a guide for those managing testing. Test plans used as an input to quality assurance and quality control processes. Use of test results to decide on an appropriate course of action following a testing activity.

29 Coordination of Test Plans /2 Test plans and test results used as an input to project management. Reuse of the format of the test plan from one project to another. Use of test results to identify unusual modules. Use of test results to assess the effectiveness of testing procedures.

30 Section 2 Elements of Quality & Test Plan

31 Sample SQS Plan /1
1 Purpose
2 Reference Documents
3 Management
3.1 Organization
3.2 Tasks
3.3 Responsibilities
Based on IEEE Standard

32 Sample SQS Plan (cont'd) /2
4 Documentation
4.1 Purpose
4.2 Minimum Documentation: Software Requirements Specification; Software Design Description; Software Verification and Validation Plan; Software Verification and Validation Report; User Documentation; Configuration Management Plan
4.3 Other Documentation
Based on IEEE Standard

33 Sample SQS Plan (cont'd) /3
5 Standards, Practices, Conventions, and Metrics
5.1 Purpose
5.2 Documentation, Logic, Coding, and Commentary Standards and Conventions
5.3 Testing Standards, Conventions, and Practices
5.4 Metrics
Based on IEEE Standard

34 Sample SQS Plan (cont'd) /4
6 Reviews and Audits
6.1 Purpose
6.2 Minimum Requirements: Software Requirements Review; Preliminary Design Review; Critical Design Review; Software Verification and Validation Review; Functional Audit; Physical Audit; In-process Reviews; Managerial Reviews; Configuration Management Plan Review; Postmortem Review
6.3 Other Reviews and Audits
Based on IEEE Standard

35 Sample SQS Plan (cont'd) /5
7 Test
8 Problem Reporting and Corrective Action
8.1 Practices and Procedures
8.2 Organizational Responsibilities
9 Tools, Techniques, and Methodologies
10 Code Control
11 Media Control
12 Supplier Control
13 Records Collection, Maintenance, and Retention
14 Training
15 Risk Management
Based on IEEE Standard

36 Sample Test Plan /1
1 Test Plan Identifier
2 Introduction
2.1 Objectives
2.2 Background
2.3 Scope
2.4 References
Based on IEEE Standard

37 Sample Test Plan (cont'd) /2
3 Test Items
3.1 Program Modules
3.2 Job Control Procedures
3.3 User Procedures
3.4 Operator Procedures
4 Features To Be Tested
5 Features Not To Be Tested
Based on IEEE Standard

38 Sample Test Plan (cont'd) /3
6 Approach
6.1 Conversion Testing
6.2 Job Stream Testing
6.3 Interface Testing
6.4 Security Testing
6.5 Recovery Testing
6.6 Performance Testing
6.7 Regression
6.8 Comprehensiveness
6.9 Constraints
Based on IEEE Standard

39 Sample Test Plan (cont'd) /4
7 Item Pass/Fail Criteria
8 Suspension Criteria and Resumption Requirements
8.1 Suspension Criteria
8.2 Resumption Requirements
9 Test Deliverables
10 Testing Tasks
Based on IEEE Standard

40 Sample Test Plan (cont'd) /5
11 Environmental Needs
11.1 Hardware
11.2 Software
11.3 Security
11.4 Tools
11.5 Publications
12 Responsibilities
12.1 Test Group
12.2 User Department
12.3 Development Project Group
Based on IEEE Standard

41 Sample Test Plan (cont'd) /6
13 Staffing and Training Needs
13.1 Staffing
13.2 Training
14 Schedule
15 Risks and Contingencies
16 Approvals
Based on IEEE Standard

42 Section 3 Best Practices of SRE

43 Practice of SRE /1 The practice of SRE provides the software engineer or manager the means to predict, estimate, and measure the rate of failure occurrences in software. Using SRE in the context of software engineering, one can: analyze, manage, and improve the reliability of software products; balance customer needs for competitive price, timely delivery, and a reliable product; determine when the software is good enough to release to customers, minimizing the risks of releasing software with serious problems; avoid excessive time to market due to overtesting.

44 Practice of SRE /2 The practice of SRE may be summarized in six steps:
1) Quantify product usage by specifying how frequently customers will use various features and how frequently various environmental conditions that influence processing will occur.
2) Define quality quantitatively with the customers by defining failures and failure severities and by specifying the balance among the key quality objectives of reliability, delivery date, and cost.
3) Employ product usage data and quality objectives to guide design and implementation of the product and to manage resources to maximize productivity (i.e., customer satisfaction per unit cost).
4) Measure reliability of reused software and acquired software components as an acceptance requirement.
5) Track reliability and use this information to guide product release.
6) Monitor reliability in field operation and use the results to guide new feature introduction, as well as product and process improvement.

45 Incremental Implementation Most projects implement the SRE activities incrementally. A typical implementation sequence, phase by phase, is outlined in the following slides.

46 Implementing SRE /1 Feasibility and requirements phase: define and classify failures, i.e., failure severity classes; identify customer reliability needs; determine the operational profile; conduct trade-off studies (among reliability, time, cost, people, technology); set reliability objectives.

47 Implementing SRE /2 Design and implementation phase: allocate reliability among components, acquired software, hardware and other systems; engineer to meet reliability objectives; focus resources based on the operational profile; measure reliability of acquired software, hardware and other systems, i.e., certification test; manage fault introduction and propagation.

48 Implementing SRE /3 System test and field trial phase: determine the operational profile used for testing, i.e., the test profile; conduct reliability growth testing; track testing progress; project additional testing needed; certify that reliability objectives and release criteria are met.

49 Implementing SRE /4 Post delivery and maintenance: project post-release staff needs; monitor field reliability vs. objectives; track customer satisfaction with reliability; time new feature introduction by monitoring reliability; guide product and process improvement with reliability measures.

50 Feasibility Phase Activity 1: Define and classify failures. Define failure from the customer's perspective. Group identified failures into severity classes from the customer's perspective; usually 3-4 classes are sufficient. Activity 2: Identify customer reliability needs. What is the level of reliability that the customer needs? Who are the rival companies, what are the rival products, and what is their reliability? Activity 3: Determine the operational profile, based on the tasks performed and the environmental factors (a small sketch of building an operational profile follows).
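As an illustration of Activity 3, here is a minimal sketch, not from the slides, of tabulating an operational profile from usage counts and allocating a test budget in proportion to it; the operation names and all numbers are hypothetical.

```python
# Minimal operational-profile sketch: occurrence probabilities from usage counts.
# Operation names and counts are hypothetical examples.
usage_counts = {
    "process order": 9000,
    "query order status": 4500,
    "update customer record": 1200,
    "generate monthly report": 300,
}

total = sum(usage_counts.values())

# Occurrence probability of each operation = its count / total count.
profile = {op: count / total for op, count in usage_counts.items()}

# Allocate a test budget in proportion to the profile.
test_budget = 500
allocation = {op: round(p * test_budget) for op, p in profile.items()}

for op, p in profile.items():
    print(f"{op:28s} p={p:.3f} tests={allocation[op]}")
```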

51 Requirements Phase Activity 4: Conduct trade-off studies: reliability and functionality; reliability, cost, delivery date, technology, team. Activity 5: Set reliability objectives based on: explicit requirement statements from a request for proposal or a standards document; customer satisfaction with a previous release or a similar product; capabilities of the competition; trade-offs with performance, delivery date and cost; warranty and technology capabilities. (A common rule for deriving a failure intensity objective is sketched below.)
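One common rule from the SRE literature (my addition, not stated on the slide) derives a failure intensity objective from a required availability A and the mean time to restore service t_m:

\lambda_F = \frac{1 - A}{A \, t_m}

For example, a required availability of A = 0.999 with t_m = 0.5 hours gives \lambda_F = 0.001 / (0.999 × 0.5) ≈ 0.002 failures per hour.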

52 Design Phase Activity 6: Allocate reliability among acquired software, components, hardware and other systems. Determine which systems and components are involved and how they affect the overall system reliability (a numeric sketch follows). Activity 7: Engineer to meet reliability objectives. Plan using fault tolerance, fault removal and fault avoidance. Activity 8: Focus resources based on the operational profile. The operational profile guides the designer to focus on features that are supposed to be more critical; develop more critical functions first and in more detail.
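A minimal numeric sketch of Activity 6, with hypothetical numbers: for components that must all work and that fail independently, failure intensities add, so a system objective can be budgeted across components:

\lambda_{sys} = \sum_i \lambda_i

For example, a system objective of 10 failures per 1000 hours might be split as 4 for acquired software, 5 for newly developed components, and 1 for hardware and other systems; each share then becomes that component's own objective.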

53 Implementation Phase Activity 9: Measure reliability of acquired software, hardware and other systems: certification test using a reliability demonstration chart (a sketch of the chart's decision rule follows). Activity 10: Manage fault introduction and propagation: practicing a development methodology; constructing a modular system; employing reuse; conducting inspection and review; controlling change.
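The slides only name the reliability demonstration chart; below is a sketch of the sequential-test decision rule that underlies it, as commonly described in the SRE literature (e.g., Musa). The function and parameter names are mine, and the example numbers are hypothetical.

```python
import math

def demo_chart_decision(n_failures, t, fio, alpha=0.1, beta=0.1, gamma=2.0):
    """Accept / reject / continue decision on a reliability demonstration chart.

    n_failures: failures observed so far
    t: test time, in the same units as the failure intensity objective
    fio: failure intensity objective (failures per unit time)
    alpha: supplier risk, beta: consumer risk, gamma: discrimination ratio
    """
    tau = fio * t  # normalized test measure (horizontal axis of the chart)
    a = math.log((1 - beta) / alpha)  # reject-boundary intercept
    b = math.log(beta / (1 - alpha))  # accept-boundary intercept
    accept_at = (n_failures * math.log(gamma) - b) / (gamma - 1)
    reject_at = (n_failures * math.log(gamma) - a) / (gamma - 1)
    if tau >= accept_at:
        return "accept"
    if tau <= reject_at:
        return "reject"
    return "continue"

# Example: objective of 0.004 failures/h, no failures after 600 h of test.
print(demo_chart_decision(0, 600, 0.004))  # tau = 2.4 >= ~2.2 -> "accept"
```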

54 System Test Phase Activity 11: Determine the operational profile used for testing. Decide upon critical operations; decide whether multiple operational profiles are needed. Activity 12: Conduct reliability growth testing. Activity 13: Track testing progress and certify that reliability objectives are met. Conduct feature test, regression test, and performance and load test; conduct reliability growth test (a sketch of tracking growth with a reliability model follows).
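As one way to carry out Activities 12-13, here is a sketch, my illustration rather than the course's prescribed method, of fitting the basic execution-time model mu(tau) = nu0 * (1 - exp(-lambda0 * tau / nu0)) to cumulative failure data and reading off the current failure intensity; the failure data are made up.

```python
import numpy as np
from scipy.optimize import curve_fit

def mu(tau, lam0, nu0):
    """Basic execution-time model: expected cumulative failures by time tau."""
    return nu0 * (1.0 - np.exp(-lam0 * tau / nu0))

# Hypothetical test data: cumulative CPU hours and cumulative failures observed.
tau = np.array([10, 20, 40, 60, 90, 120, 160, 200], dtype=float)
failures = np.array([8, 14, 23, 29, 35, 39, 43, 45], dtype=float)

(lam0, nu0), _ = curve_fit(mu, tau, failures, p0=[1.0, 50.0])

# Present failure intensity under the fitted model: lambda(tau) = lam0 * exp(-lam0*tau/nu0).
lam_now = lam0 * np.exp(-lam0 * tau[-1] / nu0)
print(f"lambda0 = {lam0:.3f} failures/h, nu0 = {nu0:.1f} total expected failures")
print(f"current failure intensity ~ {lam_now:.4f} failures/h")
```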

55 Field Trial Phase Activity 14: Project additional testing needed. Check the accuracy of the test: time and coverage; plan for changes in test strategies and methods. Activity 15: Certify that reliability objectives and release criteria are met. Check the accuracy of data collection; check whether the test operational profile reflects the field operational profile; check that the customer's definition of failure matches what was defined for testing the product.

56 Post Delivery Phase /1 Activity 16: Project post-release staff needs: the customer's staff for system recovery; the supplier's staff to handle customer-reported failures and to remove faults. Activity 17: Monitor field reliability vs. objectives. Collect post-release failure data systematically. Activity 18: Track customer satisfaction with reliability. Survey product features with a sample customer set.

57 Post Delivery Phase /2 Activity 19: Time new feature introduction by monitoring reliability. New features bring new defects; add new features desired by the customers if they can be managed without sacrificing reliability of the whole system. Activity 20: Guide product and process improvement with reliability measures. Perform root-cause analysis for the faults: why the fault was not detected earlier in the development phase, and what should be done to reduce the probability of introducing similar faults.

58 Feasibility Phase: Benefits Activities 1 and 2: Define and classify failures; identify customer reliability needs. Benefits: release software at a time that meets customer reliability needs but is as early and inexpensive as possible. Activity 3: Determine operational profiles. Benefits: speed up time to market by saving test time, reduce test cost, and have a quantitative measure for reliability.

59 Requirements Phase: Benefits Activity 4: Conduct trade-off studies. Benefits: increase market share by providing a software product that better matches customer needs. Activity 5: Set reliability objectives. Benefits: release software at a time that meets customer reliability needs but is as early and inexpensive as possible.

60 Design Phase: Benefits Activity 6: Allocate reliability among acquired software, components, hardware and other systems. Benefits: reduce development time and cost by striking a better balance among components. Activity 7: Engineer to meet reliability objectives. Benefits: reduce development time and cost with better design. Activity 8: Focus resources based on the operational profile. Benefits: speed up time to market by guiding development priorities, reduce development cost.

61 Implementation Phase: Benefits Activity 9: Measure reliability of acquired software, hardware and other systems. Benefits: reduce risks to reliability, schedule and cost from unknown software and systems. Activity 10: Manage fault introduction and propagation. Benefits: maximize cost-effectiveness of reliability improvement.

62 System Test Phase: Benefits Activity 11: Determine the operational profile used for testing. Benefits: reduce the chance of critical operations going unattended, speed up time to market by saving test time, reduce test cost. Activity 12: Conduct reliability growth testing. Benefits: determine how the product reliability is improving. Activity 13: Track testing progress. Benefits: know exactly what reliability the customer would experience at different points in time if the software were released at those points.

63 Field Trial Phase: Benefits Activity 14: Project additional testing needed. Benefits: planning tests ahead of time when the reliability measure is not satisfactory will reduce the time for integration and release. Activity 15: Certify that reliability objectives are met. Benefits: release software at a time that meets customer reliability needs but is as early and inexpensive as possible; verify that the customer reliability needs are actually met.

64 Post Delivery Phase: Benefits Activity 16: Project post-release staff needs. Benefits: reduce post-release costs with better planning. Activities 17-18: Monitor field reliability vs. objectives; track customer satisfaction with reliability. Benefits: maximize the likelihood of pleasing the customer with reliability.

65 Post Delivery Phase: Benefits Activity 19: Time new feature introduction by monitoring reliability. Benefits: ensure that software continues to meet customer reliability needs in the field. Activity 20: Guide product and process improvement with reliability measures. Benefits: maximize the cost-effectiveness of the product and process improvements selected.

66 Example: Project Additional Testing Needed A test team runs tests for a new software project. There are 12 planned tests per day. Thirteen days into the testing, progress lagged what had been projected. (Table: planned vs. completed test executions for each day, Dec 1-13.)

67 Example (cont'd) Five testers were assigned to this project part-time. (Table: for each day, Dec 1-13, which of testers A-E were available and the number of tests completed; totals: 33 tester-days, 134 tests completed.)

68 Example (cont'd) Calculate the average number of tests that a tester completes per day. Total tests executed: 134. Total tester-days: 33. Average tests completed per tester-day: 134 / 33 = 4.06. Calculate the efficiency of testing. Total tests planned: 12 × 13 = 156. Total tests executed: 134. Efficiency: 134 / 156 = 0.859, or about 86%.

69 Example (cont'd) Assume that the current date is Dec 13th and currently one tester is assigned to this project. We want to bring the test execution back on plan in the next 10 working days. How many testers do we need to hire for this project, assuming that the plan for the next 10 days is the execution of 12 tests per day? In 10 working days the team needs to complete 12 × 10 = 120 tests to match the planned rate. Test execution is currently 156 - 134 = 22 tests behind the goal. This means 120 + 22 = 142 tests to accomplish in 10 days. Using the average rate of about 4 tests per tester per day calculated above, 3 testers would only complete 120 tests (3 testers × 4 tests/day × 10 days = 120), which is less than what is needed. However, 4 testers can complete 160 tests (4 × 4 × 10), which is a bit above the need. Therefore 4 - 1 = 3 more testers are to be hired for this project. (A short script reproducing this calculation follows.)
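The same staffing calculation as a small script, using only the numbers given in the example:

```python
import math

planned_per_day = 12
days_elapsed = 13
tests_executed = 134
tester_days = 33

rate = tests_executed / tester_days  # ~4.06 tests per tester-day

# Tests still owed from the first 13 days, plus the next 10 days of plan.
backlog = planned_per_day * days_elapsed - tests_executed  # 22 tests behind
horizon = 10  # working days available to get back on plan
needed = planned_per_day * horizon + backlog  # 142 tests

testers_needed = math.ceil(needed / (rate * horizon))  # 4 testers in total
already_assigned = 1
print(f"hire {testers_needed - already_assigned} more testers")  # hire 3
```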

70 Existing vs. New Projects There is no essential difference between new and existing projects in applying SRE for the first time; however, determining the failure intensity objective and the operational profile is easier for existing projects. Most of the SRE activities will require only small updates after they have been completed once; e.g., the operational profile should only be updated for the new operations added (remember the interaction factor). After SRE has been applied to one release, less effort is needed for succeeding releases; e.g., new test cases need only be added to the existing ones.

71 Short-Cycle Projects Small projects or releases, or those with short development cycles, may require a modified set of SRE activities to keep costs low or activity durations short. Reduction in cost and time can be obtained by limiting the number of elements in the operational profile and by accepting less precision. Example: setting a single operational mode and performing certification testing rather than reliability growth testing.

72 Cost Concerns There may be a training cost when starting to apply SRE. The principal cost in applying SRE is determining the operational profile. Another cost is associated with processing and analyzing failure data during reliability growth test. As most projects have multiple releases, the SRE cost drops sharply after the initial release.

73 Practice Variation Defining an operational profile based on customer modeling. Automatic test case generation based on the frequency of use reflected in the operational profile. Employing cleanroom development techniques together with feature and certification testing. Automatic tracking of reliability growth. SRE for Agile software development.

74 Conclusions Practical implementation of an effective SRE program is a non-trivial task. Mechanisms for collection and analysis of data on software product and process quality must be in place. Fault identification and elimination techniques must be in place. Other organizational abilities, such as the use of reviews and inspections, reliability-based testing, and software process improvement, are also necessary for effective SRE. A quality-oriented mindset and training are necessary!
