Vidyalankar T.Y. Diploma : Sem. V [CO/CM/IF] Software Engineering Prelim Question Paper Solution


1 1. (a) T.Y. Diploma : Sem. V [CO/CM/IF] Software Engineering Prelim Question Paper Solution (i) Rapid Application Development Rapid application development (RAD) is an incremental software development process model that emphasizes an extremely short development cycle. The RAD model is a high-speed adaptation of the linear sequential model in which rapid development is achieved by using component-based construction. If requirements are well understood and project scope is constrained, the RAD process enables a development team to create a fully functional system within very short time periods (e.g., 60 to 90 days). Used primarily for information systems applications, the RAD approach encompasses the following phases. Business modeling : The information flow among business functions is modeled in a way that answers the following questions: What information drives the business process? What information is generated? Who generates it? Where does the information go? Who processes it? Data modeling : The information flow defined as part of the business modeling phase is refined into a set of data objects that are needed to support the business. Characteristics (called attributes) of each object are identified and the relationships between these objects defined. Process modeling : The data objects defined in the data modeling phase are transformed to achieve the information flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting, or retrieving a data object. Application generation : RAD assumes the use of fourth generation techniques. Rather than creating software using conventional third generation programming languages the RAD process works to reuse existing program components (when possible) or create reusable components (when necessary). In all cases, automated tools are used to facilitate construction of the software. 
Testing and turnover : Since the RAD process emphasizes reuse, many of the program components have already been tested. This reduces overall testing time. However, new components must be tested and all interfaces must be fully exercised. Drawbacks of RAD Model For large but scalable projects, RAD requires sufficient human resources to create the right number of RAD teams. RAD requires developers and customers who are committed to the rapid-fire activities necessary to get a system complete in a much abbreviated time frame. If commitment is lacking from either constituency, RAD projects will fail. 1013/TY/Pre_Pap/Comp/SE_Soln 15

Not all types of applications are appropriate for RAD. If a system cannot be properly modularized, building the components necessary for RAD will be problematic. If high performance is an issue and performance is to be achieved through tuning the interfaces to system components, the RAD approach may not work. RAD is not appropriate when technical risks are high. This occurs when a new application makes heavy use of new technology or when the new software requires a high degree of interoperability with existing computer programs. (ii) "An important underlying law or assumption required in a system of thought." The following are the core principles that give software engineering practice more value. i) The reason it all exists : The software system exists to provide value to its users. ii) Keep it simple : The software must be simple to use and maintain. iii) Maintain the vision : A clear vision is essential for a successful project. iv) What you produce, others will consume : The software you develop will be used and maintained by others, so make it easy for them. v) Be open to the future : Changes to the system must be easy. vi) Plan ahead for reuse : Since reusability provides cost and time benefits, it must be well planned into the development. vii) Think : Think before taking action. (iii) The ISO 9000 quality standards A quality assurance system may be defined as the organizational structure, responsibilities, procedures, processes, and resources for implementing quality management. The ISO 9000 standards have been adopted by many countries, including all members of the European Community, Canada, Mexico, the United States, Australia, and New Zealand. To become registered to one of the quality assurance system models contained in ISO 9000, a company's quality system and operations are scrutinized by third-party auditors for compliance to the standard and for effective operation.
Upon successful registration, a company is issued a certificate from a registration body represented by the auditors. Semi-annual surveillance audits ensure continued compliance to the standard. The ISO 9000 quality assurance models treat an enterprise as a network of interconnected processes. ISO 9000 describes the elements of a quality assurance system in general terms. These elements include the organizational structure, procedures, processes, and resources needed to implement quality planning, quality control, quality assurance, and quality improvement. However, ISO 9000 does not describe how an organization should implement these quality system elements. Consequently, the challenge lies in designing and implementing a quality assurance system that meets the standard and fits the company's products, services, and culture.

1. (a) (iv) COCOMO II model COCOMO II (COnstructive COst Model) is actually a hierarchy of estimation models that address the following areas : Application composition model. Used during the early stages of software engineering. Early design stage model. Used once requirements have been stabilized and basic software architecture has been established. Post-architecture-stage model. Used during the construction of the software. Like function points, the object point is an indirect software measure that is computed using counts of the number of : (1) screens (at the user interface), (2) reports, and (3) components likely to be required to build the application. Each object instance (e.g., a screen or report) is classified into one of three complexity levels (i.e., simple, medium, or difficult).

                        Complexity weight
Object type        Simple   Medium   Difficult
Screen                1        2         3
Report                2        5         8
3GL component         -        -        10
Fig.: Complexity weighting for object types.

Once complexity is determined, the number of screens, reports, and components are weighted. The object point count is then determined by multiplying the original number of object instances by the weighting factor. When component-based development or general software reuse is to be applied, the percent of reuse (%reuse) is estimated and the object point count is adjusted: NOP = (object points) x [(100 - %reuse) / 100] where NOP is defined as new object points. To derive an estimate of effort based on the computed NOP value, a productivity rate must be derived.

Developer's experience/capability   Very low   Low   Nominal   High   Very high
Environment maturity/capability     Very low   Low   Nominal   High   Very high
PROD                                    4        7      13       25       50
Fig.: Productivity rate for object points.

Here PROD = NOP / person-month. Once the productivity rate has been determined, an estimate of project effort can be derived as estimated effort = NOP / PROD
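The object point calculation above can be sketched as a short program. This is a minimal illustration, not part of the original solution; the project counts, reuse percentage, and the "nominal" PROD value of 13 are hypothetical figures chosen for the example.

```python
# Sketch of the COCOMO II application composition (object point) estimate.
# All project numbers below are hypothetical.

def new_object_points(object_points: float, percent_reuse: float) -> float:
    """NOP = (object points) x [(100 - %reuse) / 100]"""
    return object_points * (100 - percent_reuse) / 100

def estimated_effort(nop: float, prod: float) -> float:
    """Estimated effort (person-months) = NOP / PROD"""
    return nop / prod

# Hypothetical project: 3 medium screens (weight 2), 2 medium reports
# (weight 5), and one 3GL component (weight 10).
object_points = 3 * 2 + 2 * 5 + 1 * 10   # = 26
nop = new_object_points(object_points, percent_reuse=0)
effort = estimated_effort(nop, prod=13)  # PROD = 13 (nominal team/environment)
print(f"NOP = {nop:.1f}, effort = {effort:.1f} person-months")
```

With 23% reuse the same 26 object points would shrink to 26 x 0.77 = 20.02 NOP before dividing by PROD.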

1. (b) (i) Integration Testing Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit tested components and build a program structure that has been dictated by design. Regression Testing Each time a new module is added as part of integration testing, the software changes. New data flow paths are established, new I/O may occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly. In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects. Regression testing may be conducted manually, by re-executing a subset of all test cases. The regression test suite (the subset of tests to be executed) contains three different classes of test cases : A representative sample of tests that will exercise all software functions. Additional tests that focus on software functions that are likely to be affected by the change. Tests that focus on the software components that have been changed. The regression test suite should be designed to include only those tests that address one or more classes of errors in each of the major program functions. It is impractical and inefficient to re-execute every test for every program function once a change has occurred. (ii) Requirements Engineering Tasks Requirements engineering provides the framework for understanding user requirements in a better manner. The following tasks form requirements engineering : Inception Elicitation Elaboration Negotiation Specification Validation Requirements Management Inception : All software projects start as a need of the organization, of some individual in the organization, or of a person. It may be a requirement of the changing nature of business. It starts with formal communication.
Elicitation : Try to find out the requirements of the users. What are the customer's expectations from the product? The following problems make requirements elicitation difficult : Problems of scope. Problems of understanding. Problems of volatility (requirements change over time).

Elaboration : Elaboration is to understand the requirements gathered during inception and elicitation in detail. Models are generated during elaboration. The result of elaboration is an analysis model that defines the informational, functional, and behavioral domain of the problem. Negotiation : Conflicts regarding requirements are resolved during negotiation. All the stakeholders should be satisfied. Specification : Specification is the final work product produced by the requirements engineering activity. It specifies all the requirements and constraints on the system. Requirements Management : All the identified requirements are assigned an identifier, and a traceability table is developed. Each requirement is traced to some aspect of the system or its environment. The following types of traceability tables are developed : Features traceability table Source traceability table Dependency traceability table Subsystem traceability table Interface traceability table 2. (a) The Design Model The design model can be viewed in two different dimensions. The process dimension indicates the evolution of the design model as design tasks are executed as part of the software process. The abstraction dimension represents the level of detail as each element of the analysis model is transformed into a design equivalent and then refined iteratively. Elements of design model i) Data Design Elements : Like other software engineering activities, data design (sometimes referred to as data architecting) creates a model of data and/or information that is represented at a high level of abstraction. The structure of data has always been an important part of software design. At the program component level, the design of data structures and the associated algorithms required to manipulate them is essential to the creation of high-quality applications.
ii) Architectural Design Elements : The architectural design for software is the equivalent of the floor plan of a house. The floor plan depicts the overall layout of the rooms; their size, shape, and relationship to one another; and the doors and windows that allow movement into and out of the rooms. The floor plan gives us an overall view of the house. Architectural design elements give us an overall view of the software. The architectural model is derived from three sources : (a) information about the application domain for the software to be built; (b) specific analysis model elements such as data flow diagrams or analysis classes, their relationships and collaborations for the problem at hand; and (c) the availability of architectural patterns.

iii) Interface Design Elements : The interface design for software is the equivalent of a set of detailed drawings (and specifications) for the doors, windows, and external utilities of a house. These drawings depict the size and shape of doors and windows, the manner in which they operate, and the way in which utility connections (e.g., water, electrical, gas, telephone) come into the house and are distributed among the rooms depicted in the floor plan. There are three important elements of interface design : (a) the user interface (UI); (b) external interfaces to other systems, devices, networks, or other producers or consumers of information; and (c) internal interfaces between various design components. These interface design elements allow the software to communicate externally and enable internal communication and collaboration among the components that populate the software architecture. iv) Component-Level Design Elements : The component-level design for software is equivalent to a set of detailed drawings (and specifications) for each room in a house. These drawings depict wiring and plumbing within each room, the location of electrical receptacles and switches, faucets, sinks, showers, tubs, drains, cabinets, and closets. They also describe the flooring to be used, the moldings to be applied, and every other detail associated with a room. The component-level design for software fully describes the internal detail of each software component. To accomplish this, the component-level design defines data structures for all local data objects, algorithmic detail for all processing that occurs within a component, and an interface that allows access to all component operations (behaviors), e.g. a UML component diagram for sensor management. v) Deployment-Level Design Elements : Deployment-level design elements indicate how software functionality and subsystems will be allocated within the physical computing environment that will support the software. 2. (b) Six Sigma Strategy Six Sigma is the most widely used strategy for statistical quality assurance in industry today. The Six Sigma strategy is a rigorous and disciplined methodology that uses data and statistical analysis to measure and improve a company's operational performance by identifying and eliminating defects in manufacturing. The Six Sigma strategy defines 3 core steps : Define customer requirements and project goals via well defined methods of customer communication. Measure the existing process and its output to determine current quality performance. Analyze defect metrics and determine the vital few causes. If an existing software process is in place, but improvement is required, Six Sigma suggests 2 additional steps : Improve the process by eliminating the root causes of defects. Control the process to ensure that future work does not reintroduce the causes of defects. 2. (c) Once the software is developed it is deployed. The deployment activity consists of the following actions: delivery, support, and feedback. In evolutionary development, deployment is done many times. The following principles must be followed when software is deployed. i) Customer expectations for the software must be managed: Every delivery must contain what the customer wants. False promises should be avoided. ii) A complete delivery package should be assembled and tested: Test programs, documents and supporting data. Conduct a beta test.
iii) A support regime must be established before the software is delivered: The support team must be kept ready before delivery to solve any problem after delivery. iv) Appropriate instructional material must be provided to end-users. v) Buggy software should be fixed first, delivered later.

2. (d) Sometimes called the classic life cycle or the waterfall model, the linear sequential model suggests a systematic, sequential approach to software development that begins at the system level and progresses through analysis, design, coding, testing, and support. System/information engineering and modeling : Because software is always part of a larger system (or business), work begins by establishing requirements for all system elements and then allocating some subset of these requirements to software. Software requirements analysis : The requirements gathering process is intensified and focused specifically on software. Design : Software design is actually a multistep process that focuses on four distinct attributes of a program: data structure, software architecture, interface representations, and procedural (algorithmic) detail. Code generation : The design must be translated into a machine-readable form. The code generation step performs this task. Testing : Once code has been generated, program testing begins. The testing process focuses on the logical internals of the software, ensuring that all statements have been tested, and on the functional externals. Support : Software will undoubtedly undergo change after it is delivered to the customer (a possible exception is embedded software). 2. (e) Throughout the design process, a software engineer should look for every opportunity to reuse existing design patterns (when they meet the needs of the design) rather than creating new ones. Describing a Design Pattern Mature engineering disciplines make use of thousands of design patterns. For example, a mechanical engineer uses a two-step, keyed shaft as a design pattern. Inherent in the pattern are attributes (the diameters of the shaft, the dimensions of the keyway, etc.) and operations (e.g., shaft rotation, shaft connection).
An electrical engineer uses an integrated circuit (an extremely complex design pattern) to solve a specific element of a new problem. A description of the design pattern may also consider a set of design forces. Design forces describe nonfunctional requirements (e.g., ease of maintainability, portability) associated with the software for which the pattern is to be applied. The names of design patterns should be chosen with care. One of the key technical problems in software reuse is the inability to find existing reusable patterns when hundreds or thousands of candidate patterns exist. The search for the "right" pattern is aided immeasurably by a meaningful pattern name. Using Patterns in Design Design patterns can be used throughout software design. The problem description is examined at various levels of abstraction to determine if it is amenable to one or more of the following types of design patterns : Architectural patterns : These patterns define the overall structure of the software, indicate the relationships among subsystems and software components, and define the rules for specifying relationships among the elements (classes, packages, components, subsystems) of the architecture.

Design patterns : These patterns address a specific element of the design such as an aggregation of components to solve some design problem, relationships among components, or the mechanisms for effecting component-to-component communication. Idioms : Sometimes called coding patterns, these language-specific patterns generally implement an algorithmic element of a component, a specific interface protocol, or a mechanism for communication among components. Frameworks In some cases it may be necessary to provide an implementation-specific skeletal infrastructure, called a framework, for design work. That is, the designer may select a "reusable mini-architecture that provides the generic structure and behavior for a family of software abstractions, along with a context... which specifies their collaboration and use within a given domain". A framework is not an architectural pattern, but rather a skeleton with a collection of "plug points" (also called hooks and slots) that enable it to be adapted to a specific problem domain. The plug points enable a designer to integrate problem-specific classes or functionality within the skeleton. In an object-oriented context, a framework is a collection of cooperating classes. 2. (f) Risk Mitigation, Monitoring and Management All of the risk analysis activities presented to this point have a single goal: to assist the project team in developing a strategy for dealing with risk. An effective strategy must consider three issues : risk avoidance risk monitoring risk management and contingency planning Let us consider the example of staff turnover. To mitigate this risk, project management must develop a strategy for reducing turnover. Among the possible steps to be taken are : Meet with current staff to determine causes for turnover (e.g., poor working conditions, low pay, competitive job market). Mitigate those causes that are under our control before the project starts.
Once the project commences, assume turnover will occur and develop techniques to ensure continuity when people leave. Organize project teams so that information about each development activity is widely dispersed. Define documentation standards. Assign a backup staff member for every critical technologist. As the project proceeds, risk monitoring activities commence. The project manager monitors factors that may provide an indication of whether the risk is becoming more or less likely. In the case of high staff turnover, the following factors can be monitored : General attitude of team members based on project pressures. The degree to which the team has jelled. Interpersonal relationships among team members. Potential problems with compensation and benefits. The availability of jobs within the company and outside it.

Risk management and contingency planning assumes that mitigation efforts have failed. The project is well underway and a number of people announce that they will be leaving. If the mitigation strategy has been followed, backup is available, information is documented, and knowledge has been dispersed across the team. In addition, the project manager may temporarily refocus resources to those functions that are fully staffed, enabling newcomers who must be added to the team to get up to speed. Those individuals who are leaving are asked to stop all work and spend their last weeks in knowledge transfer mode. It is important to note that RMMM steps incur additional project cost. For example, spending the time to "backup" every critical technologist costs money. For a large project, 30 or 40 risks may be identified. If between three and seven risk management steps are identified for each, risk management may become a project in itself. 3. (a) McCall's quality factors McCall, Richards, and Walters propose a useful categorization of factors that affect software quality. These software quality factors, shown in the figure, focus on three important aspects of a software product: its operational characteristics (product operation: correctness, reliability, efficiency, integrity, usability), its ability to undergo change (product revision: maintainability, flexibility, testability), and its adaptability to new environments (product transition: portability, reusability, interoperability). Fig.: McCall's software quality factors. Correctness : The extent to which a program satisfies its specification and fulfills the customer's mission objectives. Reliability : The extent to which a program can be expected to perform its intended function with required precision. Efficiency : The amount of computing resources and code required by a program to perform its function. Integrity : Extent to which access to software or data by unauthorized persons can be controlled. Usability : Effort required to learn, operate, prepare input, and interpret output of a program.
Maintainability : Effort required to locate and fix an error in a program. Flexibility : Effort required to modify an operational program.

Testability : Effort required to test a program to ensure that it performs its intended function. Portability : Effort required to transfer the program from one hardware and/or software system environment to another. Reusability : Extent to which a program [or parts of a program] can be reused in other applications related to the packaging and scope of the functions that the program performs. Interoperability : Effort required to couple one system to another. Auditability : The ease with which conformance to standards can be checked. Accuracy : The precision of computations and control. Communication commonality : The degree to which standard interfaces, protocols, and bandwidth are used. Completeness : The degree to which full implementation of required function has been achieved. Conciseness : The compactness of the program in terms of lines of code. Consistency : The use of uniform design and documentation techniques throughout the software development project. Data commonality : The use of standard data structures and types throughout the program. Error tolerance : The damage that occurs when the program encounters an error. Execution efficiency : The run-time performance of a program. Expandability : The degree to which architectural, data, or procedural design can be extended. Generality : The breadth of potential application of program components. Hardware independence : The degree to which the software is decoupled from the hardware on which it operates. Instrumentation : The degree to which the program monitors its own operation and identifies errors that do occur. Modularity : The functional independence of program components. Operability : The ease of operation of a program. Security : The availability of mechanisms that control or protect programs and data. Self-documentation : The degree to which the source code provides meaningful documentation. Simplicity : The degree to which a program can be understood without difficulty. Software system independence : The degree to which the program is independent of nonstandard programming language features, operating system characteristics, and other environmental constraints.

Traceability : The ability to trace a design representation or actual program component back to requirements. Training : The degree to which the software assists in enabling new users to apply the system. 3. (b) System software System software is a collection of programs written to service other programs. Some system software (e.g., compilers, editors, and file management utilities) processes complex, but determinate, information structures. Other systems applications (e.g., operating system components, drivers, telecommunications processors) process largely indeterminate data. In either case, the system software area is characterized by heavy interaction with computer hardware; heavy usage by multiple users; concurrent operation that requires scheduling, resource sharing, and sophisticated process management; complex data structures; and multiple external interfaces. Business software Business information processing is the largest single software application area. Discrete "systems" (e.g., payroll, accounts receivable/payable, inventory) have evolved into management information system (MIS) software that accesses one or more large databases containing business information. Applications in this area restructure existing data in a way that facilitates business operations or management decision making. In addition to conventional data processing applications, business software applications also encompass interactive computing (e.g., point-of-sale transaction processing). 3. (c) The Capability Maturity Model Integration (CMMI) CMMI is a meta process model developed by the SEI (Software Engineering Institute). As an organization reaches a certain level of maturity, it must have developed a process model which fulfils the guidelines suggested by the SEI.
CMMI represents the process meta model in two ways : i) As a continuous model (defines capability levels 0 through 5) ii) As a staged model (defines 5 maturity levels) The different levels are : Level 0 : Incomplete The process area is not performed. Level 1 : Performed All the areas defined by CMMI are satisfied. All the tasks required to produce the work product are performed. Level 2 : Managed All level 1 criteria are achieved. All activities are carried out as per the organization's policy. All people have access to the resources that are required to complete the task. All activities are monitored to satisfy the standards.

Level 3 : Defined All level 2 criteria are achieved. The process is derived from the organization's standard process, which is derived from the organization's guidelines. Level 4 : Quantitatively managed All level 3 criteria are achieved. The process is improved using the statistical data collected. Level 5 : Optimized All level 4 criteria are achieved. The process always tries to satisfy the changing requirements of the customer and to improve the process to produce an optimized solution. CMMI defines each process area in terms of "specific goals" and "specific practices". Process areas defined for different CMMI levels : Performed No process area is defined. Managed (focus is basic project management) Requirements Management Project Planning Project Monitoring and Control Supplier Agreement Management Measurement and Analysis Process and Product Quality Assurance Configuration Management Defined (focus is process standardization) Requirements Development Technical Solution Product Integration Verification Validation Organizational Process Focus Organizational Process Definition Organizational Training Integrated Project Management Integrated Supplier Management Risk Management Decision Analysis and Resolution Organizational Environment for Integration Integrated Teaming 3. (d) Reactive Vs Proactive Risk Strategies Reactive risk strategy The majority of software teams rely solely on reactive risk strategies. At best, a reactive strategy monitors the project for likely risks. The software team does nothing about risks until something goes wrong. Then, the team flies into action in an attempt to correct the problem rapidly. This is often called a fire-fighting mode. Proactive risk strategy A considerably more intelligent strategy for risk management is to be proactive. A proactive strategy begins long before technical work is initiated.
Potential risks are identified, their probability and impact are assessed, and they are ranked by importance. Then, the software team establishes a plan for managing risk.
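The proactive steps above (identify, assess probability and impact, rank) can be sketched in a few lines using risk exposure, RE = probability x cost of the loss. The specific risk items and numbers here are hypothetical, chosen only to illustrate the ranking.

```python
# A minimal sketch of proactive risk ranking. The risks, probabilities,
# and loss costs are hypothetical example values.

risks = [
    {"risk": "Staff turnover",       "probability": 0.70, "loss_cost": 30000},
    {"risk": "New technology fails", "probability": 0.30, "loss_cost": 80000},
    {"risk": "Requirements change",  "probability": 0.50, "loss_cost": 20000},
]

# Risk exposure RE = probability x cost of the loss should the risk occur.
for r in risks:
    r["exposure"] = r["probability"] * r["loss_cost"]

# Rank by exposure so the management plan addresses the biggest risks first.
for r in sorted(risks, key=lambda r: r["exposure"], reverse=True):
    print(f'{r["risk"]:22} RE = {r["exposure"]:>8.0f}')
```

Note that the highest-exposure risk here is not the most probable one; ranking by exposure rather than probability is the point of the calculation.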

Software Risks A risk has two characteristics : Uncertainty : the risk may or may not happen; that is, there are no 100% probable risks. Loss : if the risk becomes a reality, unwanted consequences or losses will occur. Different categories of risks Project risks : threaten the project plan. That is, if project risks become real, it is likely that the project schedule will slip and that costs will increase. Technical risks : threaten the quality and timeliness of the software to be produced. If a technical risk becomes a reality, implementation may become difficult or impossible. Business risks : threaten the viability of the software to be built. Known risks : those that can be uncovered after careful evaluation of the project plan, the business, and the technical environment in which the project is being developed. Predictable risks : extrapolated from past project experience. Unpredictable risks : extremely difficult to identify in advance. Risk Identification Risk identification is a systematic attempt to specify threats to the project plan. One method for identifying risks is to create a risk item checklist. Product size : risks associated with the overall size of the software to be built or modified. Business impact : risks associated with constraints imposed by management or the marketplace. Customer characteristics : risks associated with the sophistication of the customer and the developer's ability to communicate with the customer in a timely manner. Process definition : risks associated with the degree to which the software process has been defined. Development environment : risks associated with the availability and quality of the tools to be used to build the product. Technology to be built : risks associated with the complexity of the system to be built and the "newness" of the technology that is packaged by the system. Staff size and experience : risks associated with the overall technical and project experience of the software engineers who will do the work.
Risk Components and Drivers
Risk components are defined in the following manner:
Performance risk : the degree of uncertainty that the product will meet its requirements and be fit for its intended use.
Cost risk : the degree of uncertainty that the project budget will be maintained.
Support risk : the degree of uncertainty that the resultant software will be easy to correct, adapt, and enhance.
Schedule risk : the degree of uncertainty that the project schedule will be maintained and that the product will be delivered on time.
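Since each component is defined as a "degree of uncertainty", a planner can record a rating for each and rank them. The sketch below assumes a 0.0 to 1.0 uncertainty scale, which is not from the text; the four component names are.

```python
# The four risk components from the text, each rated by degree of uncertainty.
# The 0.0 (certain) to 1.0 (completely uncertain) scale is an assumed convention.
RISK_COMPONENTS = ("performance", "cost", "support", "schedule")

def rank_components(ratings):
    """Validate that every component has a rating in [0.0, 1.0] and
    return the components ordered from most to least uncertain."""
    for name in RISK_COMPONENTS:
        value = ratings[name]
        if not 0.0 <= value <= 1.0:
            raise ValueError(f"{name} rating out of range: {value}")
    return sorted(ratings.items(), key=lambda kv: kv[1], reverse=True)

# Illustrative ratings: schedule is the dominant uncertainty here.
ranked = rank_components(
    {"performance": 0.3, "cost": 0.7, "support": 0.2, "schedule": 0.9}
)
```

Ranking the components this way makes the dominant risk driver explicit before any mitigation planning starts.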

3. (e) Risk Projection
Risk projection, also called risk estimation, attempts to rate each risk in two ways: the likelihood or probability that the risk is real, and the consequences of the problems associated with the risk. The project planner, along with other managers and technical staff, performs four risk projection activities:
i) Establish a scale that reflects the perceived likelihood of a risk.
ii) Delineate the consequences of the risk.
iii) Estimate the impact of the risk on the project and the product.
iv) Note the overall accuracy of the risk projection so that there will be no misunderstandings.
Unit Testing
Unit testing focuses verification effort on the smallest unit of software design : the software component or module.
Unit Test Considerations
The module interface is tested to ensure that information properly flows into and out of the program unit under test.
The local data structure is examined to ensure that data stored temporarily maintains its integrity during all steps in an algorithm's execution.
Boundary conditions are tested to ensure that the module operates properly at boundaries established to limit or restrict processing.
All independent paths (basis paths) through the control structure are exercised to ensure that all statements in a module have been executed at least once.
Finally, all error handling paths are tested.
Unit Test Procedures
After source level code has been developed, reviewed, and verified for correspondence to component level design, unit test case design begins. A review of design information provides guidance for establishing test cases that are likely to uncover errors in each of the categories discussed earlier. Each test case should be coupled with a set of expected results. Because a component is not a stand-alone program, driver and/or stub software must be developed for each unit test.
In most applications a driver is nothing more than a "main program" that accepts test case data, passes such data to the component to be tested, and prints relevant results. Stubs serve to replace modules that are subordinate to (called by) the component to be tested. A stub or "dummy subprogram" uses the subordinate module's interface, may do minimal data manipulation, prints verification of entry, and returns control to the module undergoing testing.
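The driver and stub roles described above can be sketched in a few lines. All names here (`compute_discount`, the tier lookup, the prices) are illustrative assumptions, not anything from the text; the sketch assumes the subordinate module is passed in as a parameter so the stub can stand in for it.

```python
def lookup_tier_stub(customer_id):
    """Stub: replaces the real subordinate lookup module. It honours the
    same interface, does minimal data manipulation, prints verification
    of entry, and returns control with a canned answer."""
    print(f"stub entered for customer {customer_id}")
    return "gold"  # canned result instead of a real database lookup

def compute_discount(price, customer_id, lookup_tier):
    """Component under test; gold-tier customers get 10% off."""
    tier = lookup_tier(customer_id)
    return price * (0.9 if tier == "gold" else 1.0)

def driver():
    """Driver: a 'main program' that feeds test case data to the
    component and prints the relevant result."""
    result = compute_discount(100.0, 42, lookup_tier_stub)
    print(f"discounted price: {result}")
    return result
```

Neither `driver` nor `lookup_tier_stub` ships with the product; they exist only so the component can be exercised before its real caller and callee are available.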

Drivers and stubs represent overhead. That is, both are software that must be written but that is not delivered with the final software product. If drivers and stubs are kept simple, actual overhead is relatively low.
4. (a) (i) Black-Box Testing
Black-box testing, also called behavioral testing, focuses on the functional requirements of the software. That is, black-box testing enables the software engineer to derive sets of input conditions that will fully exercise all functional requirements for a program. Black-box testing attempts to find errors in the following categories:
i) incorrect or missing functions,
ii) interface errors,
iii) errors in data structures or external data base access,
iv) behavior or performance errors, and
v) initialization and termination errors.
Unlike white-box testing, which is performed early in the testing process, black-box testing tends to be applied during later stages of testing.
White-Box Testing
White-box testing, sometimes called glass-box testing, is a test case design method that uses the control structure of the procedural design to derive test cases. Using white-box testing methods, the software engineer can derive test cases that:
i) guarantee that all independent paths within a module have been exercised at least once,
ii) exercise all logical decisions on their true and false sides,
iii) execute all loops at their boundaries and within their operational bounds, and
iv) exercise internal data structures to ensure their validity.
(ii) Alpha and Beta Testing
It is virtually impossible for a software developer to foresee how the customer will really use a program. Instructions for use may be misinterpreted; strange combinations of data may be regularly used; output that seemed clear to the tester may be unintelligible to a user in the field. When custom software is built for one customer, a series of acceptance tests are conducted to enable the customer to validate all requirements.
If software is developed as a product to be used by many customers, it is impractical to perform formal acceptance tests with each one. Most software product builders use a process called alpha and beta testing to uncover errors that only the end-user seems able to find. The alpha test is conducted at the developer's site by a customer. Alpha tests are conducted in a controlled environment.

The beta test is conducted at one or more customer sites by the end-user of the software. Unlike alpha testing, the developer is generally not present. Therefore, the beta test is a "live" application of the software in an environment that cannot be controlled by the developer. The customer records all problems (real or imagined) that are encountered during beta testing and reports these to the developer at regular intervals. As a result of problems reported during beta tests, software engineers make modifications and then prepare for release of the software product to the entire customer base.
(iii) Different categories of risks
Project risks : threaten the project plan. That is, if project risks become real, it is likely that the project schedule will slip and that costs will increase.
Technical risks : threaten the quality and timeliness of the software to be produced. If a technical risk becomes a reality, implementation may become difficult or impossible.
Business risks : threaten the viability of the software to be built.
Known risks : those that can be uncovered after careful evaluation of the project plan and of the business and technical environment in which the project is being developed.
Predictable risks : extrapolated from past project experience.
Unpredictable risks : extremely difficult to identify in advance.
(iv) Cyclomatic Complexity
Cyclomatic complexity is a software metric that provides a quantitative measure of the logical complexity of a program. When used in the context of the basis path testing method, the value computed for cyclomatic complexity defines the number of independent paths in the basis set of a program and provides us with an upper bound for the number of tests that must be conducted to ensure that all statements have been executed at least once. Complexity is computed in one of three ways:
i) The number of regions of the flow graph corresponds to the cyclomatic complexity.
ii) Cyclomatic complexity, V(G), for a flow graph G, is defined as V(G) = E - N + 2, where E is the number of flow graph edges and N is the number of flow graph nodes.
iii) Cyclomatic complexity, V(G), for a flow graph G, is also defined as V(G) = P + 1, where P is the number of predicate nodes contained in the flow graph G.
4. (b) (i) Verification and Validation
Verification refers to the set of activities that ensure that software correctly implements a specific function. Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.
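The cyclomatic complexity formula V(G) = E - N + 2 from 4(a)(iv) can be checked on a small flow graph. This sketch assumes the graph is given as a list of directed edges; the example graph (an if-else followed by one loop, i.e. two predicate nodes) is illustrative.

```python
def cyclomatic_complexity(edges):
    """Compute V(G) = E - N + 2 from a flow graph's edge list."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

# Flow graph for a module with one if-else decision (node 1) and one loop
# test (node 4): P = 2 predicate nodes, so V(G) should equal P + 1 = 3.
edges = [
    (1, 2), (1, 3),   # if-else branches out of node 1
    (2, 4), (3, 4),   # the branches rejoin at node 4
    (4, 5), (5, 4),   # loop: node 4 is the loop test, node 5 the body
    (4, 6),           # loop exit to node 6
]
```

Here E = 7 and N = 6, so V(G) = 7 - 6 + 2 = 3, agreeing with V(G) = P + 1 = 3: at most three basis-path test cases are needed to execute every statement at least once.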

Unit Testing
Unit testing focuses verification effort on the smallest unit of software design : the software component or module.
Integration Testing
Integration testing is a systematic technique for constructing the program structure while at the same time conducting tests to uncover errors associated with interfacing. The objective is to take unit tested components and build a program structure that has been dictated by design.
Top-down Integration
Top-down integration testing is an incremental approach to construction of program structure. Modules are integrated by moving downward through the control hierarchy, beginning with the main control module (main program).
Bottom-up Integration
Bottom-up integration testing, as its name implies, begins construction and testing with atomic modules (i.e., components at the lowest levels in the program structure). Because components are integrated from the bottom up, processing required for components subordinate to a given level is always available and the need for stubs is eliminated.
Regression Testing
Each time a new module is added as part of integration testing, the software changes. New data flow paths are established, new I/O may occur, and new control logic is invoked. These changes may cause problems with functions that previously worked flawlessly. In the context of an integration test strategy, regression testing is the re-execution of some subset of tests that have already been conducted to ensure that changes have not propagated unintended side effects. Regression testing may be conducted manually, by re-executing a subset of all test cases.
Smoke Testing
Smoke testing is an integration testing approach that is commonly used when "shrink wrapped" software products are being developed. The smoke testing approach encompasses the following activities:
i) Software components that have been translated into code are integrated into a build.
ii) A series of tests is designed to expose errors that will keep the build from properly performing its function.
iii) The build is integrated with other builds and the entire product (in its current form) is smoke tested daily. The integration approach may be top down or bottom up.

Smoke testing provides a number of benefits when it is applied on complex, time critical software engineering projects:
Integration risk is minimized.
The quality of the end-product is improved.
Error diagnosis and correction are simplified.
Progress is easier to assess.
Validation Testing
Validation can be defined in many ways, but a simple definition is that validation succeeds when software functions in a manner that can be reasonably expected by the customer. Reasonable expectations are defined in the Software Requirements Specification, a document that describes all user-visible attributes of the software. The specification contains a section called Validation Criteria. Information contained in that section forms the basis for a validation testing approach. After each validation test case has been conducted, one of two possible conditions exists:
i) The function or performance characteristics conform to specification and are accepted, or
ii) A deviation from specification is uncovered and a deficiency list is created.
Alpha and Beta Testing
It is virtually impossible for a software developer to foresee how the customer will really use a program. Instructions for use may be misinterpreted; strange combinations of data may be regularly used; output that seemed clear to the tester may be unintelligible to a user in the field. When custom software is built for one customer, a series of acceptance tests are conducted to enable the customer to validate all requirements. If software is developed as a product to be used by many customers, it is impractical to perform formal acceptance tests with each one. Most software product builders use a process called alpha and beta testing to uncover errors that only the end-user seems able to find. The alpha test is conducted at the developer's site by a customer. Alpha tests are conducted in a controlled environment.
The beta test is conducted at one or more customer sites by the end-user of the software. Unlike alpha testing, the developer is generally not present. Therefore, the beta test is a "live" application of the software in an environment that cannot be controlled by the developer. The customer records all problems (real or imagined) that are encountered during beta testing and reports these to the developer at regular intervals.

4. (b) (ii) Debugging
Debugging occurs as a consequence of successful testing. That is, when a test case uncovers an error, debugging is the process that results in the removal of the error. Although debugging can and should be an orderly process, it is still very much an art. A software engineer, evaluating the results of a test, is often confronted with a "symptomatic" indication of a software problem. That is, the external manifestation of the error and the internal cause of the error may have no obvious relationship to one another. The poorly understood mental process that connects a symptom to a cause is debugging.
The Debugging Process
Debugging is not testing but always occurs as a consequence of testing. The debugging process begins with the execution of a test case. Results are assessed and a lack of correspondence between expected and actual performance is encountered. The debugging process will always have one of two outcomes:
i) the cause will be found and corrected, or
ii) the cause will not be found.
In the latter case, the person performing debugging may suspect a cause, design a test case to help validate that suspicion, and work toward error correction in an iterative fashion. During debugging, we encounter errors that range from mildly annoying (e.g., an incorrect output format) to catastrophic (e.g., the system fails, causing serious economic or physical damage). As the consequences of an error increase, the amount of pressure to find the cause also increases. That pressure sometimes forces a software developer to fix one error and at the same time introduce two more.
5. (a) PSP and TSP : Personal and Team Process Models
The best software process is one that is close to the people who will be doing the work.
If a software process model has been developed at a corporate or organizational level, it can be effective only if it is amenable to significant adaptation to meet the needs of the project team that is actually doing software engineering work.
Personal Software Process (PSP)
Every developer uses some process to build computer software. The process may be haphazard or ad hoc, may change on a daily basis, and may not be efficient, effective, or even successful, but a process does exist. The PSP process model defines five framework activities: planning, high-level design, high-level design review, development, and postmortem.
Planning : This activity isolates requirements and, based on these, develops both size and resource estimates; in addition, a defect estimate (the number of defects projected for the work) is made. All metrics are recorded on worksheets or templates. Finally, development tasks are identified and a project schedule is created.

High-level design : External specifications for each component to be constructed are developed and a component design is created. Prototypes are built when uncertainty exists. All issues are recorded and tracked.
High-level design review : Formal verification methods are applied to uncover errors in the design. Metrics are maintained for all important tasks and work results.
Development : The component level design is refined and reviewed. Code is generated, reviewed, compiled, and tested. Metrics are maintained for all important tasks and work results.
Postmortem : Using the measures and metrics collected (a substantial amount of data that should be analyzed statistically), the effectiveness of the process is determined. Measures and metrics should provide guidance for modifying the process to improve its effectiveness.
PSP represents a disciplined, metrics-based approach to software engineering that may lead to culture shock for many practitioners. However, when PSP is properly introduced to software engineers, the resulting improvements in software engineering productivity and software quality are significant.
Team Software Process (TSP)
Because many industry-grade software projects are addressed by a team of practitioners, Watts Humphrey extended the lessons learned from the introduction of PSP and proposed a Team Software Process (TSP). The goal of TSP is to build a "self-directed" project team that organizes itself to produce high-quality software.
Objectives for TSP:
Build self-directed teams that plan and track their work, establish goals, and own their processes and plans. These can be pure software teams or integrated product teams (IPT) of 3 to about 20 engineers.
Show managers how to coach and motivate their teams and how to help them sustain peak performance.
Accelerate software process improvement by making CMM level 5 behavior normal and expected.
Provide improvement guidance to high-maturity organizations.
Facilitate university teaching of industrial-grade team skills.
5. (b) The SCM Process
SCM introduces a set of complex questions:
How does an organization identify and manage the many existing versions of a program in a manner that will enable change to be accommodated efficiently?
How does an organization control changes before and after software is released to a customer?
Who has responsibility for approving and ranking changes?
How can we ensure that changes have been made properly?
What mechanism is used to apprise others of changes that are made?