Software Engineering


This book is a part of the course by Jaipur National University, Jaipur. It contains the course content for Software Engineering.

JNU, Jaipur
First Edition 2013

The content in the book is the copyright of JNU. All rights reserved. No part of the content may, in any form or by any electronic, mechanical, photocopying, recording, or other means, be reproduced, stored in a retrieval system, or be broadcast or transmitted without the prior permission of the publisher.

JNU makes reasonable endeavours to ensure the content is current and accurate. JNU reserves the right to alter the content whenever the need arises, and to vary it at any time without prior notice.

Index
I. Content
II. List of Figures
III. List of Tables
IV. Abbreviations
V. Application
VI. Bibliography
VII. Self Assessment Answers

Book at a Glance

Contents

Chapter I: Introduction to Software Engineering
Aim; Objectives; Learning outcome
1.1 The Problem Domain
1.1.1 Industrial Strength Software
1.1.2 Software: Late and Unreliable
1.1.3 Software: Maintenance and Rework
1.2 Software Engineering Challenges
1.2.1 Scale
1.2.2 Quality and Productivity
1.2.3 Consistency and Repeatability
1.2.4 Change
1.3 Software Engineering Approach
1.3.1 Phased Development Process
1.3.2 Managing the Process
1.3.3 Requirement Analysis
1.3.4 Software Design
1.3.5 Coding
1.3.6 Testing
Summary; References; Recommended Reading; Self Assessment

Chapter II: Software Process
Aim; Objectives; Learning outcome
2.1 Introduction to Software Process
2.2 Software Process Model
2.2.1 Linear Sequential Model
2.2.2 Prototyping Model
2.3 RAD Model
2.4 Evolutionary Software Process Model
2.4.1 The Incremental Model
2.4.2 The Spiral Model
2.4.3 The Concurrent Development Model
2.5 Component Based Model
2.6 Process Technology
Summary; References; Recommended Reading; Self Assessment

Chapter III: Software Development Life Cycle
Aim; Objectives; Learning outcome
3.1 Introduction to Software Development Life Cycle

3.2 Requirement Analysis
Feasibility Study
Coding
Testing
Integration and Testing
Maintenance
Systems Analysis and Design
Summary; References; Recommended Reading; Self Assessment

Chapter IV: Software Requirement Specification
Aim; Objectives; Learning outcome
Waterfall Model
Prototyping Model
Iterative Model
Spiral Model
Role of Management in Software Development
Problem Analysis
Requirement Specification
Summary; References; Recommended Reading; Self Assessment

Chapter V: System Design
Aim; Objectives; Learning outcome
Problem Partitioning
Abstraction
Top-Down and Bottom-Up Design
Structured Approach
Function v/s Object Oriented Approach
Design Specification and Verification
Summary; References; Recommended Reading; Self Assessment

Chapter VI: Coding
Aim; Objectives; Learning outcome
Top-Down and Bottom-Up Approach
Structured Programming
Information Hiding
Programming Style
Internal Documentation

Summary; References; Recommended Reading; Self Assessment

Chapter VII: Testing
Aim; Objectives; Learning outcome
Levels of Testing
Functional Testing
Structural Testing
Test Plan
Test Cases Specifications
Reliability Assessment
Summary; References; Recommended Reading; Self Assessment

Chapter VIII: Software Project Management
Aim; Objectives; Learning outcome
Cost Estimation
Project Scheduling
Staffing
Benefits and Drawbacks of IT Staffing
Software Configuration Management
Quality Assurance
Project Monitoring
Risk Management
Summary; References; Recommended Reading; Self Assessment

List of Figures
Fig. 1.1 Hardware-software cost trend
Fig. 1.2 Basic problem
Fig. 1.3 The problem of scale
Fig. 1.4 Software quality attributes
Fig. 1.5 The iron triangle
Fig. 2.1 Software process
Fig. 2.2 Phases of a problem solving loop [RAC95]
Fig. 2.3 Phases within phases of the problem solving loop [RAC95]
Fig. 2.4 Linear sequential model
Fig. 2.5 RAD model
Fig. 2.6 Incremental model
Fig. 2.7 A typical spiral model
Fig. 2.8 One element of the concurrent process model
Fig. 2.9 Component based development
Fig. 3.1 Analysis as a bridge between system engineering and software design
Fig. 3.2 Integration and test stage
Fig. 4.1 Waterfall model
Fig. 4.2 Prototyping model
Fig. 4.3 The iterative enhancement model
Fig. 4.4 Spiral life cycle model
Fig. 4.5 Factors of management dependency
Fig. 4.6 Context diagram for the restaurant
Fig. 7.1 Software verification vs software validation testing
Fig. 7.2 Configuration for a structural unit test
Fig. 7.3 Incremental integration structural testing
Fig. 7.4 Test plan
Fig. 7.5 Effective testing window
Fig. 7.6 Reliability measurement process
Fig. 8.1 Filled-in estimation form
Fig. 8.2 Initial estimates
Fig. 8.3 Identify dependencies
Fig. 8.4 Create the schedule
Fig. 8.5 Software engineering institute
Fig. 8.6 Project monitoring
Fig. 8.7 Risk management process

List of Tables
Table 8.1 Possible software risks
Table 8.2 Types of risk
Table 8.3 Risk analysis
Table 8.4 Risk management strategies
Table 8.5 Risk factors

Abbreviations
CA - Configuration Authentication
CBD - Component-Based Development
CI - Configurable Item
CMM - Capability Maturity Model
COIs/CTIs - Critical Operational and/or Technical Issues
CVS - Concurrent Versions System
DFD - Data Flow Diagram
FDA - Food and Drug Administration
IT - Information Technology
IUT - Implementation Under Test
KLOC - Kilo Lines of Code
KPA - Key Process Areas
OO - Object Oriented
OOD - Object Oriented Design
OS - Operating System
Q&P - Quality and Productivity
QSM - Quantitative Software Management
RAD - Rapid Application Development
RCS - Revision Control System
SCCS - Source Code Control System
SDM - Structured Design Methodology
SEI - Software Engineering Institute
SRS - Software Requirements Specification
UML - Unified Modeling Language
WBS - Work Breakdown Structure


Chapter I
Introduction to Software Engineering

Aim
The aim of this chapter is to:
• explain the concept of the problem domain
• discuss the software engineering challenges
• describe the software engineering approach

Objectives
The objectives of this chapter are to:
• determine the steps taken towards software engineering challenges
• explain the changes arising in software engineering
• elucidate the scale required for software development

Learning outcome
At the end of this chapter, you will be able to:
• evaluate the quality of software development
• determine consistency in software
• understand the effect of repeatability in software engineering

1.1 The Problem Domain
Software engineering is not concerned with programs that people develop merely to exemplify something. Instead, its problem domain is software that solves some problem of real users, where larger systems or businesses may depend on the software, and where problems in the software can lead to major direct or indirect loss.

1.1.1 Industrial Strength Software
A student system is primarily meant for demonstration purposes; it is generally not used for solving any real problem of any organisation. Such software is generally not designed with quality issues in mind, like:
• portability
• robustness
• reliability
• usability

An industrial strength software system is built to solve some problem of a client and is used by the client's organisation for operating some part of its business. In other words, important activities depend on the correct functioning of the system. A malfunction of such a system can have a huge impact in terms of financial or business loss, inconvenience to users, or loss of property and life. The software system therefore needs to be of high quality with respect to properties like:
• dependability
• reliability
• user-friendliness

Industrial strength software has other properties which do not exist in student software systems. Typically, for the same problem, the detailed requirements of what the software should do increase considerably. Besides quality requirements, there are requirements of:
• backup and recovery
• fault tolerance
• following of standards
• portability

The size of an industrial strength software system may be two times or more that of a student system for the same problem. For example, if productivity on industrial strength software is one-fifth of that on student software, and the size increases by a factor of two for the same problem, then an industrial strength software system will take about 2 × 5 = 10 times the effort. The rule of thumb is therefore that industrial strength software may cost about 10 times as much as student software. The software industry is largely interested in developing industrial strength software, and the area of software engineering focuses on how to build such systems. The discipline dealing with the development of software should not deal only with developing programs, but with developing all the things that constitute software.

1.1.2 Software: Late and Unreliable
Regardless of significant progress in techniques for developing software, software development remains a weak area. In a survey of over 600 firms, more than 35% reported having some computer-related development project that they categorised as a runaway. A runaway is not a project that is somewhat late or somewhat over budget; it is one where the budget and schedule are out of control. The problem has become so severe that it has spawned an industry of its own; there are consultancy companies that advise on how to rein in such projects.

For example, in a defence survey, it was reported that more than 70% of all equipment failures were due to software. And this is in systems that are loaded with electrical, hydraulic, and mechanical components. This indicates that the other engineering disciplines have advanced far more than software engineering, and a system comprising the products of various engineering disciplines often finds that software is its weakest component. Many banks have lost millions of dollars due to errors and other problems in their software.

In software, failures occur owing to bugs or errors that get introduced during the design and development process. Hence, even though a software system may fail after operating correctly for some time, the bug that causes that failure was there from the start.

1.1.3 Software: Maintenance and Rework
Once the software is delivered and deployed, it enters the maintenance phase. Software needs to be maintained not because some of its components wear out and need to be replaced, but because there are often residual errors remaining in the system that must be removed as they are revealed. The errors, once discovered, need to be removed, leading to the software being changed. This is sometimes called corrective maintenance.

Even without bugs, software frequently undergoes change. The main reason is that software often must be upgraded and enhanced to include more features and provide more services. This also requires modification of the software. Furthermore, once a software system is deployed, the environment in which it operates changes; hence, the needs that initiated the software development also change to reflect the needs of the new environment. The software must therefore adapt to the needs of the changed environment. The changed software then changes the environment, which in turn requires further change. This phenomenon is sometimes called the law of software evolution. Maintenance due to this phenomenon is sometimes called adaptive maintenance.

If we consider the total life of software, the cost of maintenance generally exceeds the cost of developing the software. The maintenance-to-development-cost ratio has been variously suggested as 80:20, 70:30, or 60:40. The figure below also shows how maintenance costs have grown relative to hardware and software development costs.

Fig. 1.1 Hardware-software cost trend
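To see what such ratios imply, the short sketch below works through the arithmetic for a hypothetical system; the development cost figure is an assumption made purely for illustration.

```python
# Hypothetical illustration of the suggested maintenance-to-development
# cost ratios. The development cost is an assumed figure, not data.
development_cost = 1_000_000  # assume $1M was spent on development

for maintenance_share, development_share in [(80, 20), (70, 30), (60, 40)]:
    # If maintenance:development = m:d over the software's total life,
    # maintenance costs m/d times what development cost.
    multiple = maintenance_share / development_share
    print(f"{maintenance_share}:{development_share} -> maintenance is "
          f"{multiple:.2f}x development, i.e. ${development_cost * multiple:,.0f}")
```

Even at the most conservative suggested ratio (60:40), maintenance costs one and a half times as much as the original development.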

Maintenance work is based on existing software. Understanding the software involves understanding not only the code but also the related documents. During the modification of the software, the effects of the change have to be clearly understood by the maintainer, because it is easy to introduce undesired side effects into the system during modification. Thus, maintenance involves:
• understanding the existing software
• understanding the effects of the change
• making changes to the code
• making changes to the documents

Maintenance is one form of change that typically is done after the software development is completed and the software has been deployed. Changing requirements and the associated rework are a major problem of the software industry. It is estimated that rework costs are 30 to 40% of the development cost; in other words, of the total development effort, rework due to various changes consumes around 30 to 40% of the effort. The problem of rework and change is not just a reflection of the state of software development, as changes are frequently initiated by clients as their needs change.

1.2 Software Engineering Challenges
Software engineering is defined as the systematic approach to the development, operation, maintenance, and retirement of software. The use of the term systematic approach for the development of software implies that repeatable methodologies are used for developing software: if the methodologies are applied by different groups of people, similar software will be produced. In essence, the goal of software engineering is to take software development closer to science and engineering, and away from ad-hoc approaches whose outcomes are not predictable but which have been used heavily in the past and still continue to be used for developing software.

Industrial strength software is meant to solve some problem of the client. The problem, therefore, is to systematically develop software to satisfy the needs of some users or clients. This fundamental problem that software engineering deals with is shown in the figure below.

Fig. 1.2 Basic problem

Though the basic problem is to systematically develop software to satisfy the client, there are some factors which affect the approaches selected to solve the problem. These factors are the primary forces that drive progress and development in the field of software engineering.

1.2.1 Scale
A fundamental factor that software engineering must deal with is the issue of scale. Development of a very large system requires a very different set of methods compared to developing a small system; the methods that are used for developing small systems generally do not scale up to large systems. For example, consider the problem of counting people in a room versus taking a census of a country. Both are essentially counting problems, but the methods used for counting people in a room will simply not work when taking a census. A different set of methods has to be used for conducting a census, and the census problem requires considerably more management, organisation, and validation, in addition to counting.

Similarly, methods that one can use to develop programs of a few hundred lines cannot be expected to work when software of a few hundred thousand lines needs to be developed; a different set of methods must be used for developing large software. Any large project involves the use of engineering and project management. In small projects, informal methods for development and management can be used. However, for large projects, both have to be much more formal, as shown in the figure below.

Fig. 1.3 The problem of scale

When dealing with a small software project, the engineering capability required is low and the project management requirement is also low. However, when the scale changes to large, to solve such problems properly it is essential that we move in both directions: the engineering methods used for development need to be more formal, and the project management for the development project also needs to be more formal. There is no universally accepted definition of what a small project is and what a large one is, and the scales are clearly changing with time. However, using orders of magnitude, we can say that a project is small if its size is less than 10 KLOC, medium if the size is less than 100 KLOC, large if the size is less than one million LOC, and very large if the size is many million LOC.
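This rule of thumb can be stated compactly in code. The sketch below simply encodes the KLOC thresholds given above; the function name and its packaging as code are ours, not any standard classification.

```python
def project_scale(lines_of_code: int) -> str:
    """Classify a project by size, using the order-of-magnitude
    thresholds suggested in this section."""
    if lines_of_code < 10_000:       # under 10 KLOC
        return "small"
    if lines_of_code < 100_000:      # under 100 KLOC
        return "medium"
    if lines_of_code < 1_000_000:    # under one million LOC
        return "large"
    return "very large"              # many million LOC

print(project_scale(8_000))      # -> small
print(project_scale(400_000))    # -> large
```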

1.2.2 Quality and Productivity
An engineering discipline is driven by the practical parameters of cost, schedule, and quality. A solution that takes enormous resources and many years may not be acceptable; similarly, a poor-quality solution, even at low cost, may not be of much use. Like all engineering disciplines, software engineering is driven by these three major factors.

The cost of developing a system is the cost of the resources used for the system which, in the case of software, is dominated by manpower cost, as development is largely labour-intensive. Schedule is an important factor in many projects. Business trends dictate that the time to market of a product should be reduced; that is, the cycle time from concept to delivery should be small. Productivity, in terms of output per person-month, can adequately capture both cost and schedule concerns. If productivity is higher, the cost in terms of person-months will be lower; similarly, if productivity is higher, the potential for developing the software in a shorter time improves, since a team of higher productivity will finish a job in less time than a same-size team with lower productivity. Productivity is a key driving factor in all businesses, and the desire for high productivity dictates, to a large extent, how things are done.

According to the quality model adopted by the ISO 9126 standard, software quality comprises six main attributes, as shown in Fig. 1.4.

Fig. 1.4 Software quality attributes

These six attributes have detailed characteristics, which are considered the basic ones and which can and should be measured using suitable metrics. At the top level, for a software product, these attributes can be defined as follows:
• Functionality: the capability to provide functions which meet stated and implied needs when the software is used.
• Reliability: the capability to maintain a specified level of performance.
• Usability: the capability to be understood, learned, and used.
• Efficiency: the capability to provide appropriate performance relative to the amount of resources used.
• Maintainability: the capability to be modified for purposes of making corrections, improvements, or adaptation.
• Portability: the capability to be adapted for different specified environments without applying actions or means other than those provided for this purpose in the product.

There are two important consequences of having multiple dimensions to quality. First, software quality cannot be reduced to a single number. Second, the concept of quality is project-specific: for each software development project, a quality objective must be specified before development starts, and the goal of the development process should be to satisfy that quality objective. Despite the fact that there are many quality factors, reliability is generally accepted to be the main quality criterion.

1.2.3 Consistency and Repeatability
A key challenge that software engineering faces is how to ensure that successful results can be repeated, and that there can be some degree of consistency in quality and productivity. We can say that an organisation that develops one system with high quality and reasonable productivity, but is not able to maintain those quality and productivity levels for other projects, does not know good software engineering.
An organisation involved in software development not only wants high quality and productivity, but wants them consistently. In other words, a software development organisation would like to produce consistent-quality software with consistent productivity. Consistency of performance is an

important factor for any organisation; it allows an organisation to predict the outcome of a project with reasonable accuracy, to improve its processes to produce higher-quality products, and to improve its productivity. Without consistency, even estimating the cost of a project becomes difficult. This requirement of consistency forces some standardised procedures to be followed for developing software. Within an organisation, consistency is achieved by using its chosen methodologies in a consistent manner. Frameworks like ISO 9001 and the Capability Maturity Model (CMM) encourage organisations to standardise methodologies, use them consistently, and improve them based on experience.

1.2.4 Change
In today's world, change in business is very rapid. As a business changes, it requires that the software supporting it also change. Overall, as the world changes faster, software has to change faster. Rapid change has a special impact on software: because software is easy to change, lacking the physical properties that make changing physical artefacts harder, much more change is expected of it. Therefore, one challenge for software engineering is to accommodate and embrace change. Different approaches are used to handle change, but change is a major driver of software engineering today. Approaches that can produce high quality software at high productivity, but cannot accept and accommodate change, are of little use today.

1.3 Software Engineering Approach
High quality and productivity (Q&P) is the basic objective, to be achieved consistently for large scale problems and under the dynamics of change. The Q&P achieved during a project will clearly depend on many factors, but the three main forces that govern Q&P are people, processes, and technology, often called the iron triangle, as shown in the figure below.

Fig. 1.5 The iron triangle

For high Q&P, good technology has to be used, good processes or methods have to be used, and the people doing the job have to be properly trained. In software engineering, the focus is primarily on processes. The process is what takes us from user needs to the software that satisfies those needs. The basic approach of software engineering is to separate the process for developing software from the developed product. The premise is that, to a large degree, the software process determines the quality of the product and the productivity achieved. Hence, to tackle the problem domain and successfully face the challenges of software engineering, one must focus on the software process. The design of proper software processes and their control then becomes a key goal of software engineering research. Most other computing disciplines focus on some type of product (algorithms, operating systems, databases, etc.), while software engineering focuses on the process for producing the products. It is essentially the software equivalent of manufacturing engineering.

1.3.1 Phased Development Process
A development process consists of various phases, each phase ending with a defined output. The phases are performed in an order specified by the process model being followed. The main reason for having a phased process is that it breaks the problem of developing software into successively performing a set of phases, each handling a different concern of software development. This ensures that the cost of development is lower than it would have been if the whole problem were tackled together. A phased process also allows proper checking for quality and progress at defined points during development. Without this, one would have to wait until the end to see what software has been produced, and clearly this will not work for large systems. Hence, for managing complexity, project tracking, and quality, all development processes consist of a set of phases. A phased development process is central to the software engineering approach for solving the software crisis.

1.3.2 Managing the Process
Most organisations that follow a process have their own version. In general, we can say that any problem solving in software must consist of requirement specification for understanding and clearly stating the problem, design for deciding a plan for a solution, coding for implementing the planned solution, and testing for verifying the programs. For small problems, these activities may not be done explicitly: the start and end boundaries of these activities may not be clearly defined, and no written record of the activities may be kept. For large systems, each activity can itself be extremely complex, and methodologies and procedures are needed to perform it efficiently and correctly. Though different process models perform these phases in different manners, the phases exist in all processes.

1.3.3 Requirement Analysis
Requirements analysis is done in order to understand the problem the software system is to solve. The emphasis in requirements analysis is on identifying what is needed from the system, not how the system will achieve its goals. For complex systems, even determining what is needed is a difficult task. The goal of the requirements activity is to document the requirements in a software requirements specification document. Understanding the requirements of a system that does not yet exist is difficult and requires creative thinking. The problem becomes more complex because an automated system offers possibilities that do not exist otherwise. Once the problem is analysed and the essentials understood, the requirements must be specified in the requirements specification document. The requirements document must specify all functional and performance requirements; the formats of inputs and outputs; and all design constraints that exist due to political, economic, environmental, and security reasons. A preliminary user manual that describes all the major user interfaces frequently forms a part of the requirements document.

1.3.4 Software Design
The purpose of the design phase is to plan a solution to the problem specified by the requirements document. This phase is the first step in moving from the problem domain to the solution domain. The design activity often results in three separate outputs: architecture design, high level design, and detailed design. Architecture focuses on looking at a system as a combination of many different components.
The high level design identifies the modules that should be built for developing the system and the specifications of these modules. In detailed design, the internal logic of each of the modules is specified. In architecture the focus is on identifying components or subsystems and how they connect; in high level design the focus is on identifying the modules; and during detailed design the focus is on designing the logic for each of the modules.

1.3.5 Coding
The goal of the coding phase is to translate the design of the system into code in a given programming language. The coding phase affects both testing and maintenance profoundly: well written code can reduce the testing and maintenance effort. During coding, the focus should be on developing programs that are easy to read and understand, and not simply on developing programs that are easy to write. Simplicity and clarity should be strived for during the coding phase.
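To make this point concrete, compare the two functionally identical fragments below; the example is ours, not from the original text. The first is quick to write but hard to read; the second takes a few more lines yet is far easier to understand, test, and maintain.

```python
# Easy to write, hard to read: cryptic names hide the intent.
def f(d, r, n):
    return d * (1 + r) ** n

# Easy to read: descriptive names and a docstring state the intent,
# so a maintainer or tester can check the logic at a glance.
def compound_amount(deposit: float, annual_rate: float, years: int) -> float:
    """Return the value of a deposit compounded annually
    at the given rate for the given number of years."""
    return deposit * (1 + annual_rate) ** years
```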

19 Testing Testing s basic function is to detect defects in the software. After coding, computer programs are available that can be executed for testing purposes. This implies that testing not only has to uncover errors introduced during coding, but also errors introduced during the previous phases. The starting point of testing is unit testing, where the different modules or components are tested individually. As modules are integrated into a system, integration testing is performed, which focuses on testing the interconnection between modules. After the system is put together, system testing is performed. Finally, acceptance testing is performed to demonstrate to the client, on the real-life data of the client, the operation of the system. The testing process starts with a test plan that identifies all the testing-related activities that must be performed and specifies the schedule, allocates the resources, and specifies guidelines for testing. During the testing of the unit, specified test cases are executed and the actual result is compared with the expected output. The final output of the testing phase is the test report and the error report, or a set of such reports. Each test report contains a set of test cases and the result of executing the code with these test cases. 9/JNU OLE

Summary
• Software engineering is not concerned with programs developed merely to exemplify something. Its problem area is software that solves some problem of real users, where larger systems or businesses may depend on the software, and where problems in the software can lead to major direct or indirect loss.
• A student system is primarily meant for demonstration purposes; it is generally not used for solving any real problem of any organisation.
• An industrial strength software system is built to solve some problem of a client and is used by the client's organisation for operating some part of its business.
• Industrial strength software has other properties which do not exist in student software systems.
• The size of an industrial strength software system may be two times or more that of a student system for the same problem.
• Regardless of significant progress in techniques for developing software, software development remains a weak area.
• Once the software is delivered and deployed, it enters the maintenance phase. Software needs to be maintained not because some of its components wear out and need to be replaced, but because there are often residual errors remaining in the system that must be removed as they are revealed.
• Maintenance work is based on existing software. Understanding the software involves understanding not only the code but also the related documents.
• Maintenance is one form of change that typically is done after the software development is completed and the software has been deployed. Changing requirements and the associated rework are major problems of the software industry.
• Software engineering is defined as a systematic approach to the development, operation, maintenance, and retirement of software. The use of the term systematic approach implies that repeatable methodologies are used for developing software.
• A fundamental factor that software engineering must deal with is the issue of scale. Development of a very large system requires a very different set of methods compared to developing a small system; the methods used for developing small systems generally do not scale up to large systems.
• An engineering discipline is driven by the practical parameters of cost, schedule, and quality. A solution that takes enormous resources and many years may not be acceptable; similarly, a poor-quality solution, even at low cost, may not be of much use.
• A key challenge that software engineering faces is how to ensure that successful results can be repeated, with some degree of consistency in quality and productivity.
• In today's world, change in business is very rapid. As businesses change, they require the software supporting them to change as well. Overall, as the world changes faster, software has to change faster.
• High quality and productivity (Q&P) is the basic objective, to be achieved consistently for large scale problems and under the dynamics of change.
• A development process consists of various phases, each phase ending with a defined output.

References
• Sundar, D., Software Engineering, Laxmi Publications, Ltd.
• Mall, R., Fundamentals of Software Engineering, PHI Learning Pvt. Ltd.
• SPMN Focus Team, Lessons Learned [Online] Available at: < lessons.html#eleven> [Accessed 7 November 2011].
• CSE IIT Kharagpur, Characteristics of Software Maintenance [pdf] Available at: < Webcourse-contents/IIT%20Kharagpur/Soft%20Engg/pdf/m14L36.pdf> [Accessed 7 November 2011].

• Prof. N. L., Lecture - 2 Introduction to Software Engineering [Video Online] Available at: < youtube.com/watch?v=an5i6ffxyfs> [Accessed 7 November 2011].
• Prof. N. L., Lecture - 3 Overview of Phases [Video Online] Available at: < ch?v=nzcufjmc5xk&feature=relmfu> [Accessed 7 November 2011].

Recommended Reading
• Saleh, A. K., Software Engineering, J. Ross Publishing.
• Vliet, V. H., Software Engineering: Principles and Practice, 2nd ed., John Wiley.
• Jawadekar, S. W., Software Engineering: Principles and Practice, Tata McGraw-Hill Education.

Self Assessment

1. Which phase comes after a software is delivered and deployed?
a. Maintenance phase
b. Development phase
c. Organisation phase
d. Dispatch phase

2. __________ is the process whereby an error is found and, once discovered, removed, leading to a change in the software.
a. Development phase
b. Organisation phase
c. Dispatch phase
d. Corrective maintenance

3. The changed software then changes the environment, which in turn requires further change. This phenomenon is sometimes called the __________.
a. law of software evolution
b. organisation phase
c. dispatch phase
d. corrective maintenance

4. __________ is defined as a systematic approach to the development, operation, maintenance, and retirement of software.
a. Computer engineering
b. Software engineering
c. Electronic engineering
d. Design engineering

5. Which of the following sentences is true?
a. An engineering discipline is driven by the practical parameters of cost, schedule, and quality.
b. An engineering discipline is driven by the practical parameter of planning.
c. An engineering discipline is driven by the practical parameter of rules.
d. An engineering discipline is driven by the practical parameter of development.

6. __________ is the capability to provide functions which meet stated and implied needs when the software is used.
a. Reliability
b. Functionality
c. Usability
d. Efficiency

7. __________ is the capability to maintain a specified level of performance.
a. Reliability
b. Functionality
c. Usability
d. Efficiency

8. __________ is the capability to be understood, learned, and used.
a. Reliability
b. Functionality
c. Usability
d. Efficiency

9. __________ is the capability to provide appropriate performance relative to the amount of resources used.
a. Reliability
b. Functionality
c. Usability
d. Efficiency

10. __________ is the capability to be modified for purposes of making corrections, improvements, or adaptation.
a. Maintainability
b. Functionality
c. Usability
d. Efficiency

Chapter II
Software Process

Aim
The aim of this chapter is to:
• explain the concept of the software process
• enlist the process maturity levels
• discuss the linear sequential model

Objectives
The objectives of this chapter are to:
• explain the software process model
• describe software requirement analysis
• discuss the prototyping model

Learning outcome
At the end of this chapter, you will be able to:
• identify the RAD model
• understand data modeling
• describe process modeling

2.1 Introduction to Software Process
A software process can be characterised as shown in the figure below. A common process framework is established by defining a small number of framework activities that are applicable to all software projects, regardless of their size or complexity. Task sets, each a collection of software engineering work tasks, project milestones, work products, and quality assurance points, enable the framework activities to be adapted to the characteristics of the software project and the requirements of the project team. Finally, umbrella activities, such as software quality assurance, software configuration management, and measurement, overlay the process model. Umbrella activities are independent of any one framework activity and occur throughout the process.

Fig. 2.1 Software process

There has been a significant emphasis on process maturity. The Software Engineering Institute (SEI) has developed a comprehensive model predicated on a set of software engineering capabilities that should be present as organisations reach different levels of process maturity. To determine an organisation's current state of process maturity, the SEI uses an assessment that results in a five point grading scheme. The SEI approach provides a measure of the global effectiveness of a company's software engineering practices and establishes five process maturity levels, defined in the following manner:

Level 1: Initial. The software process is characterised as ad hoc and occasionally even chaotic. Few processes are defined, and success depends on individual effort.

Level 2: Repeatable. Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.

Level 3: Defined. The software process for both management and engineering activities is documented, standardised, and integrated into an organisation-wide software process.

Level 4: Managed. Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled using detailed measures.

Level 5: Optimising. Continuous process improvement is enabled by quantitative feedback from the process and from testing innovative ideas and technologies.

The SEI has associated key process areas (KPAs) with each of the maturity levels. The KPAs describe those software engineering functions (e.g., software project planning, requirements management) that must be present to satisfy good practice at a particular level. Each KPA is described by identifying the following characteristics:
• Goals: the overall objectives that the KPA must achieve.
• Commitment: requirements (imposed on the organisation) that must be met to achieve the goals, or to provide proof of intent to comply with the goals.
• Abilities: those things that must be in place (organisationally and technically) to enable the organisation to meet the commitments.
• Activities: the specific tasks required to achieve the KPA function.
• Methods for monitoring implementation: the manner in which the activities are monitored as they are put into place.
• Methods for verifying implementation: the manner in which proper practice for the KPA can be verified.

Eighteen KPAs (each described using these characteristics) are defined across the maturity model and mapped into the different levels of process maturity. The following KPAs should be achieved at each process maturity level:

Process maturity level 2
• Software subcontract management
• Software project tracking and oversight
• Software project planning
• Requirements management

Process maturity level 3
• Peer reviews
• Intergroup coordination
• Software product engineering
• Integrated software management
• Training program
• Organisation process definition
• Organisation process focus

Process maturity level 4
• Software quality management
• Quantitative process management

Process maturity level 5
• Process change management
• Technology change management
• Defect prevention

Each of the KPAs is defined by a set of key practices that contribute to satisfying its goals. The key practices are policies, procedures, and activities that must occur before a key process area has been fully instituted.

2.2 Software Process Model
To solve actual problems in an industry setting, a software engineer or a team of engineers must incorporate a development strategy that encompasses the process, methods, and tools layers and the generic phases. This strategy is often referred to as a process model or a software engineering paradigm. A process model for software engineering is chosen based on the nature of the project and application, the methods and tools to be used, and the controls and deliverables that are required.

Fig. 2.2 Phases of a problem solving loop [RAC95]

Fig. 2.3 Phases within phases of the problem solving loop [RAC95]

All software development can be characterised as a problem solving loop in which four distinct stages are encountered: status quo, problem definition, technical development, and solution integration. Status quo represents the current state of affairs; problem definition identifies the specific problem to be solved; technical development solves the problem through the application of some technology; and solution integration delivers the results.

2.2.1 Linear Sequential Model
Sometimes called the classic life cycle or the waterfall model, the linear sequential model suggests a systematic, sequential approach to software development that begins at the system level and progresses through analysis, design, coding, testing, and support. The figure below illustrates the linear sequential model for software engineering. Modelled after a conventional engineering cycle, the linear sequential model encompasses the following activities:

System/information engineering and modeling: Because software is always part of a larger system (or business), work begins by establishing requirements for all system elements and then allocating some subset of these requirements to software. This system view is essential when software must interact with other elements such as hardware, people, and databases. System engineering and analysis encompass requirements gathering at the system level, with a small amount of top level design and analysis.

Fig. 2.4 Linear sequential model

Software requirements analysis: The requirements gathering process is intensified and focused specifically on software. To understand the nature of the program(s) to be built, the software engineer ("analyst") must understand the information domain.

Design: Software design is actually a multistep process that focuses on four distinct attributes of a program:
• data structure
• software architecture
• interface representations
• procedural (algorithmic) details
The design is documented and becomes part of the software configuration.

Code generation: The design must be translated into a machine-readable form. The code generation step performs this task.

Testing: Once code has been generated, program testing begins. The testing process focuses on the logical internals of the software, ensuring that all statements have been tested, and on the functional externals; that is, conducting tests to uncover errors and to ensure that defined input will produce actual results that agree with required results.

Support: Software will undoubtedly undergo change after it is delivered to the customer. Change will occur because errors have been encountered, because the software must be adapted to accommodate changes in its external environment (e.g., a change required because of a new operating system or peripheral device), or because the customer requires functional or performance enhancements.

The linear sequential model is the oldest and the most widely used paradigm for software engineering. Among the problems that are sometimes encountered when the linear sequential model is applied are:
• Real projects rarely follow the sequential flow that the model proposes. Although the linear model can accommodate iteration, it does so indirectly. As a result, changes can cause confusion as the project team proceeds.
• It is often difficult for the customer to state all requirements explicitly. The linear sequential model requires this and has difficulty accommodating the natural uncertainty that exists at the beginning of many projects.

• The customer must have patience. A working version of the program(s) will not be available until late in the project time-span. A major blunder, if undetected until the working program is reviewed, can be disastrous.

2.2.2 Prototyping Model
Often a customer defines a set of general objectives for software but does not identify detailed input, processing, or output requirements. In other cases, the developer may be unsure of the efficiency of an algorithm, the adaptability of an operating system, or the form that human/machine interaction should take. In these, and many other situations, a prototyping paradigm may offer the best approach.

The prototyping paradigm begins with requirements gathering. Developer and customer meet and define the overall objectives for the software, identify whatever requirements are known, and outline areas where further definition is mandatory. A quick design then occurs. The quick design focuses on a representation of those aspects of the software that will be visible to the customer/user (e.g., input approaches and output formats). The quick design leads to the construction of a prototype. The prototype is evaluated by the customer/user and used to refine the requirements for the software to be developed. Iteration occurs as the prototype is tuned to satisfy the needs of the customer, while at the same time enabling the developer to better understand what needs to be done.

Ideally, the prototype serves as a mechanism for identifying software requirements. If a working prototype is built, the developer attempts to use existing program fragments or applies tools (e.g., report generators, window managers) that enable working programs to be generated quickly. The prototype can serve as the first system. It is true that both customers and developers like the prototyping paradigm. Yet prototyping can also be problematic, for the following reasons:
• The customer sees what appears to be a working version of the software, unaware that the prototype is held together with chewing gum and baling wire, and that in the rush to get it working no one has considered overall software quality or long-term maintainability. When informed that the product must be rebuilt so that high levels of quality can be maintained, the customer cries foul and demands that a few fixes be applied to make the prototype a working product.
• The developer often makes implementation compromises in order to get a prototype working quickly. An inappropriate operating system or programming language may be used simply because it is available and known; an inefficient algorithm may be implemented simply to demonstrate capability.

Although problems can occur, prototyping can be an effective paradigm for software engineering. The key is to define the rules of the game at the beginning; that is, the customer and developer must both agree that the prototype is built to serve as a mechanism for defining requirements.

2.3 RAD Model
Rapid application development (RAD) is an incremental software development process model that emphasises an extremely short development cycle. The RAD model is a high-speed adaptation of the linear sequential model in which rapid development is achieved by using component-based construction. Used primarily for information systems applications, the RAD approach encompasses the following phases:

Business modeling: The information flow among business functions is modelled in a way that answers the following questions: What information drives the business process? What information is generated? Who generates it? Where does the information go? Who processes it?

Data modeling: The information flow defined as part of the business modeling phase is refined into a set of data objects that are needed to support the business. The characteristics (called attributes) of each object are identified, and the relationships between these objects are defined.

Fig. 2.5 RAD model

Process modeling: The data objects defined in the data modeling phase are transformed to achieve the information flow necessary to implement a business function. Processing descriptions are created for adding, modifying, deleting, or retrieving a data object.

Application generation: RAD assumes the use of fourth generation techniques. Rather than creating software using conventional third generation programming languages, the RAD process works to reuse existing program components when possible, or to create reusable components when necessary. In all cases, automated tools are used to facilitate construction of the software.

Testing and turnover: Since the RAD process emphasises reuse, many of the program components have already been tested. This reduces overall testing time.

Obviously, the time constraints imposed on a RAD project demand scalable scope. If a business application can be modularised in a way that enables each major function to be completed in less than three months using the approach described previously, it is a candidate for RAD. Like all process models, the RAD approach has drawbacks:
• For large but scalable projects, RAD requires sufficient human resources to create the right number of RAD teams.
• RAD requires developers and customers who are committed to the rapid-fire activities necessary to get a system complete in a much abbreviated time frame. If commitment is lacking from either constituency, RAD projects will fail.
• Not all types of applications are appropriate for RAD. If a system cannot be properly modularised, building the components necessary for RAD will be problematic. If high performance is an issue and performance is to be achieved through tuning the interfaces to system components, the RAD approach may not work.
• RAD is not appropriate when technical risks are high. This occurs when a new application makes heavy use of new technology, or when the new software requires a high degree of interoperability with existing computer programs.

2.4 Evolutionary Software Process Model
There is growing recognition that software, like all complex systems, evolves over a period of time. Business and product requirements often change as development proceeds, making a straight path to an end product unrealistic; tight market deadlines make completion of a comprehensive software product impossible, but a limited version must be introduced to meet competitive or business pressure. Software engineers need a process model that has been explicitly designed to accommodate a product that evolves over time. Evolutionary models are iterative; they are characterised in a manner that enables software engineers to develop increasingly more complete versions of the software.

2.4.1 The Incremental Model
The incremental model combines elements of the linear sequential model (applied repetitively) with the iterative philosophy of prototyping. Referring to the figure below, the incremental model applies linear sequences in a staggered fashion as calendar time progresses. Each linear sequence produces a deliverable increment of the software.

For example, word-processing software developed using the incremental paradigm might deliver basic file management, editing, and document production functions in the first increment; more sophisticated editing and document production capabilities in the second increment; spelling and grammar checking in the third increment; and advanced page layout capability in the fourth increment. It should be noted that the process flow for any increment can incorporate the prototyping paradigm.

When an incremental model is used, the first increment is often a core product. That is, basic requirements are addressed, but many supplementary features (some known, others unknown) remain undelivered. The core product is used by the customer or undergoes detailed review. As a result of use and/or evaluation, a plan is developed for the next increment. The plan addresses the modification of the core product to better meet the needs of the customer and the delivery of additional features and functionality. This process is repeated following the delivery of each increment, until the complete product is produced.

Fig. 2.6 Incremental model

The incremental process model, like prototyping and other evolutionary approaches, is iterative in nature. But unlike prototyping, the incremental model focuses on the delivery of an operational product with each increment. Early increments are stripped-down versions of the final product, but they do provide capability that serves the user and also provide a platform for evaluation by the user.

Incremental development is particularly useful when staffing is unavailable for a complete implementation by the business deadline that has been established for the project. Early increments can be implemented with fewer people. If the core product is well received, then additional staff (if required) can be added to implement the next increment. In addition, increments can be planned to manage technical risks. For example, a major system might require the availability of new hardware that is under development and whose delivery date is uncertain. It might be possible to plan early increments in a way that avoids the use of this hardware, thereby enabling partial functionality to be delivered to end-users without inordinate delay.

2.4.2 The Spiral Model
The spiral model, originally proposed by Boehm, is an evolutionary software process model that couples the iterative nature of prototyping with the controlled and systematic aspects of the linear sequential model. It provides the potential for rapid development of incremental versions of the software. Using the spiral model, software is developed in a series of incremental releases. During early iterations, the incremental release might be a paper model or prototype. During later iterations, increasingly more complete versions of the engineered system are produced.

A spiral model is divided into a number of framework activities, also called task regions. Typically, there are between three and six task regions. Fig. 2.7 depicts a spiral model that contains six task regions:
• Customer communication: tasks required to establish effective communication between developer and customer.
• Planning: tasks required to define resources, timelines, and other project-related information.
• Risk analysis: tasks required to assess both technical and management risks.
• Engineering: tasks required to build one or more representations of the application.
• Construction and release: tasks required to construct, test, and install the software, and to provide user support (e.g., documentation and training).
• Customer evaluation: tasks required to obtain customer feedback based on evaluation of the software representations created during the engineering stage and implemented during the installation stage.

Fig. 2.7 A typical spiral model

Each of the regions is populated by a set of work tasks, called a task set, that are adapted to the characteristics of the project to be undertaken. For small projects, the number of work tasks and their formality is low. For larger, more critical projects, each task region contains more work tasks that are defined to achieve a higher level of formality. In all cases, the umbrella activities (for example, software configuration management and software quality assurance) noted earlier are applied.

As this evolutionary process begins, the software engineering team moves around the spiral in a clockwise direction, beginning at the center. The first circuit around the spiral might result in the development of a product specification; subsequent passes around the spiral might be used to develop a prototype and then progressively more sophisticated versions of the software. Each pass through the planning region results in adjustments to the project plan. Cost and schedule are adjusted based on feedback derived from customer evaluation. In addition, the project manager adjusts the planned number of iterations required to complete the software. Unlike classical process models that end when software is delivered, the spiral model can be adapted to apply throughout the life of the computer software.

An alternative view of the spiral model can be considered by examining the project entry point axis, also shown in fig. 2.7. Each cube placed along the axis can be used to represent the starting point for different types of projects. A concept development project starts at the core of the spiral and will continue (multiple iterations occur along the spiral path that bounds the central shaded region) until concept development is complete. If the concept is to be developed into an actual product, the process proceeds through the next cube (new product development project entry point) and a new development project is initiated. The new product will evolve through a number of iterations around the spiral, following the path that bounds the region that has somewhat lighter shading than the core. In essence, the spiral, when characterised in this way, remains operative until the software is retired. There are times when the process is dormant, but whenever a change is initiated, the process starts at the appropriate entry point (for example, product enhancement).

The spiral model is a realistic approach to the development of large-scale systems and software. Because software evolves as the process progresses, the developer and customer better understand and react to risks at each evolutionary level. The spiral model uses prototyping as a risk reduction mechanism but, more importantly, enables the developer to apply the prototyping approach at any stage in the evolution of the product. It maintains the systematic stepwise approach suggested by the classic life cycle but incorporates it into an iterative framework that more realistically reflects the real world. The spiral model demands a direct consideration of technical risks at all stages of the project and, if properly applied, should reduce risks before they become problematic. But like other paradigms, the spiral model is not a panacea. It may be difficult to convince customers (particularly in contract situations) that the evolutionary approach is controllable. It demands considerable risk assessment expertise and relies on this expertise for success. If a major risk is not uncovered and managed, problems will undoubtedly occur.
Finally, the model has not been used as widely as the linear sequential or prototyping paradigms. It will take a number of years before the efficacy of this important paradigm can be determined with absolute certainty.
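As a rough illustration of how the six task regions repeat on every circuit, the sketch below walks a project through successive passes around the spiral. The region names come from the list above; the loop structure, the release names, and the stopping condition are simplifying assumptions for illustration only.

# A minimal sketch of circuits around the spiral: every pass visits
# the same six task regions, and each pass produces a more complete
# release. Release names and the stopping rule are assumptions.
TASK_REGIONS = [
    "customer communication",
    "planning",
    "risk analysis",
    "engineering",
    "construction and release",
    "customer evaluation",
]

def run_spiral(releases):
    """Walk the planned releases (e.g. paper model, prototype,
    increasingly complete versions) around the spiral, one circuit
    per release."""
    for circuit, release in enumerate(releases, start=1):
        for region in TASK_REGIONS:
            print(f"circuit {circuit}: {region} for {release}")

run_spiral(["product specification", "prototype", "version 1.0"])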

The Concurrent Development Model
The concurrent development model is sometimes called concurrent engineering. The concurrent process model can be represented schematically as a series of major technical activities, tasks, and their associated states. The figure below provides a schematic representation of one activity within the concurrent process model. The analysis activity may be in any one of the states noted in the figure at any given time. Similarly, other activities (for example, design or customer communication) can be represented in an analogous manner. All activities exist concurrently but reside in different states.

For example, early in a project the customer communication activity (not shown in the figure) has completed its first iteration and exists in the awaiting changes state. The analysis activity (which existed in the none state while initial customer communication was completed) now makes a transition into the under development state. If, however, the customer indicates that changes in requirements must be made, the analysis activity moves from the under development state into the awaiting changes state.

[Figure: the analysis activity with its possible states: none, under development, awaiting changes, under revision, under review, baselined, and done; each box represents a state of a software engineering activity.]

Fig. 2.8 One element of the concurrent process model
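A small state machine makes these transitions concrete. The states below are taken from Fig. 2.8; the specific event names and the transition table are illustrative assumptions, since the model only requires that defined events move an activity between states.

# A minimal sketch of one activity (analysis) as a state machine.
# States come from Fig. 2.8; the events and transitions chosen here
# are illustrative assumptions.
TRANSITIONS = {
    ("none", "customer communication done"): "under development",
    ("under development", "requirements change requested"): "awaiting changes",
    ("awaiting changes", "changes incorporated"): "under development",
    ("under development", "analysis model drafted"): "under review",
    ("under review", "review found defects"): "under revision",
    ("under revision", "revision complete"): "under review",
    ("under review", "review passed"): "baselined",
    ("baselined", "model accepted"): "done",
    ("done", "analysis model correction"): "awaiting changes",
}

class Activity:
    def __init__(self, name):
        self.name = name
        self.state = "none"

    def on_event(self, event):
        """Move to the next state if the event is defined for the
        current state; otherwise stay in the current state."""
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state

analysis = Activity("analysis")
analysis.on_event("customer communication done")    # -> under development
analysis.on_event("requirements change requested")  # -> awaiting changes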

The concurrent process model defines a series of events that will trigger transitions from state to state for each of the software engineering activities. For example, during early stages of design, an inconsistency in the analysis model is uncovered. This generates the event 'analysis model correction', which will trigger the analysis activity from the done state into the awaiting changes state.

The concurrent process model is often used as the paradigm for development of client/server applications. A client/server system is composed of a set of functional components. When applied to client/server, the concurrent process model defines activities in two dimensions: a system dimension and a component dimension. System level issues are addressed using three activities: design, assembly, and use. The component dimension is addressed with two activities: design and realisation. Concurrency is achieved in two ways:
System and component activities occur simultaneously and can be modelled using the state-oriented approach described previously.
A typical client/server application is implemented with many components, each of which can be designed and realised concurrently.

In reality, the concurrent process model is applicable to all types of software development and provides an accurate picture of the current state of a project. Rather than confining software engineering activities to a sequence of events, it defines a network of activities. Each activity on the network exists simultaneously with other activities. Events generated within a given activity or at some other place in the activity network trigger transitions among the states of an activity.

2.5 Component Based Model
Object-oriented technologies provide the technical framework for a component-based process model for software engineering. The component-based development (CBD) model incorporates many of the characteristics of the spiral model. It is evolutionary in nature, demanding an iterative approach to the creation of software. However, the component-based development model composes applications from pre-packaged software components called classes. The engineering activity begins with the identification of candidate classes. This is accomplished by examining the data to be manipulated by the application and the algorithms that will be applied to accomplish the manipulation. Corresponding data and algorithms are packaged into a class.
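As a rough sketch of the engineering activity just described (and of the library lookup shown in Fig. 2.9 below), the code models a component library: a requested class is extracted if available, and built and added to the library if not. The class names and the build step are illustrative assumptions.

# A minimal sketch of component reuse in the CBD model: look a
# candidate class up in the library, extract it if available, and
# build and store it if unavailable. Names here are illustrative.
class ComponentLibrary:
    def __init__(self):
        self._components = {}  # name -> component (any object)

    def obtain(self, name, build):
        """Return the named component, building and archiving it
        first if the library does not already hold it."""
        if name not in self._components:
            self._components[name] = build()   # build if unavailable
        return self._components[name]          # extract if available

library = ComponentLibrary()

# The first request builds the component; later requests reuse it.
invoice = library.obtain("Invoice", build=lambda: {"fields": ["amount"]})
again = library.obtain("Invoice", build=lambda: {"fields": ["amount"]})
assert invoice is again  # the second call reused the stored component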

[Figure: engineering/construction iterations identify candidate components, look them up in the component library, extract them if available or build them if unavailable, and put new components into the library, all within the customer communication, planning, risk analysis, and customer evaluation loop.]

Fig. 2.9 Component based development

The component-based development model leads to software reuse, and reusability provides software engineers with a number of measurable benefits. Based on studies of reusability, QSM Associates, Inc., reports that component assembly leads to a 70 percent reduction in development cycle time, an 84 percent reduction in project cost, and a productivity index of 26.2, compared to an industry norm of 16.9. The unified software development process is representative of a number of component-based development models that have been proposed in the industry. Using the Unified Modeling Language (UML), the unified process defines the components that will be used to build the system and the interfaces that will connect the components.

The formal methods model
The formal methods model encompasses a set of activities that leads to formal mathematical specification of computer software. Formal methods enable a software engineer to specify, develop, and verify a computer-based system by applying a rigorous mathematical notation. A variation on this approach, called cleanroom software engineering, is currently applied by some software development organisations.

When formal methods are used during development, they provide a mechanism for eliminating many of the problems that are difficult to overcome using other software engineering paradigms. Ambiguity, incompleteness, and inconsistency can be discovered and corrected more easily, not through ad hoc review but through the application of mathematical analysis. When formal methods are used during design, they serve as a basis for program verification and therefore enable the software engineer to discover and correct errors that might otherwise go undetected. Although it is not destined to become a mainstream approach, the formal methods model offers the promise of defect-free software. Yet, the following concerns about its applicability in a business environment have been voiced:

The development of formal models is currently quite time consuming and expensive.
Because few software developers have the necessary background to apply formal methods, extensive training is required.
It is difficult to use the models as a communication mechanism for technically unsophisticated customers.

These concerns notwithstanding, it is likely that the formal methods approach will gain adherents among software developers who must build safety-critical software (for example, developers of aircraft avionics and medical devices) and among developers that would suffer severe economic hardship should software errors occur.

2.6 Process Technology
The process models discussed in the preceding sections must be adapted for use by a software project team. To accomplish this, process technology tools have been developed to help software organisations analyse their current process, organise work tasks, control and monitor progress, and manage technical quality. Process technology tools allow a software organisation to build an automated model of the common process framework, task sets, and umbrella activities. The model, normally represented as a network, can then be analysed to determine typical workflow and examine alternative process structures that might lead to reduced development time or cost.

Once an acceptable process has been created, other process technology tools can be used to allocate, monitor, and even control all software engineering tasks defined as part of the process model. Each member of a software project team can use such tools to develop a checklist of work tasks to be performed, work products to be produced, and quality assurance activities to be conducted. The process technology tool can also be used to coordinate the use of other computer-aided software engineering tools that are appropriate for a particular work task.
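To illustrate the idea of analysing a process modelled as a network, here is a hedged sketch that stores tasks with durations and dependencies and computes the earliest finish time along the longest dependency chain. The task names and durations are invented for illustration; real process technology tools work with far richer models.

# A minimal sketch of a process modelled as a network of tasks.
# Task names and durations are invented for illustration.
from functools import lru_cache

TASKS = {
    # task: (duration in days, tuple of prerequisite tasks)
    "requirements": (5, ()),
    "design":       (7, ("requirements",)),
    "coding":       (10, ("design",)),
    "test plan":    (3, ("requirements",)),
    "testing":      (6, ("coding", "test plan")),
}

@lru_cache(maxsize=None)
def earliest_finish(task):
    """Earliest day the task can finish, given its prerequisites."""
    duration, prereqs = TASKS[task]
    start = max((earliest_finish(p) for p in prereqs), default=0)
    return start + duration

# The longest dependency chain bounds total development time:
print(max(earliest_finish(t) for t in TASKS))  # 28

Changing a duration or a dependency and recomputing is exactly the kind of "examine alternative process structures" analysis described above, in miniature.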

Summary
A common process framework is established by defining a small number of framework activities that are applicable to all software projects, regardless of their size or complexity. A number of task sets, each a collection of software engineering work tasks, project milestones, work products, and quality assurance points, enable the framework activities to be adapted to the characteristics of the software project and the requirements of the project team.
The Software Engineering Institute (SEI) has developed a comprehensive model predicated on a set of software engineering capabilities that should be present as organisations reach different levels of process maturity. To determine an organisation's current state of process maturity, the SEI uses an assessment that results in a five-point grading scheme.
To solve actual problems in an industry setting, a software engineer or a team of engineers must incorporate a development strategy that encompasses the process, methods, and tools layers and the generic phases. This strategy is often referred to as a process model or a software engineering paradigm.
Sometimes called the classic life cycle or the waterfall model, the linear sequential model suggests a systematic, sequential approach to software development that begins at the system level and progresses through analysis, design, coding, testing, and support.
Software will undoubtedly undergo change after it is delivered to the customer. Change will occur because errors have been encountered, because the software must be adapted to accommodate changes in its external environment (e.g., a change required because of a new operating system or peripheral device), or because the customer requires functional or performance enhancements.
In other cases, the developer may be unsure of the efficiency of an algorithm, the adaptability of an operating system, or the form that human/machine interaction should take. In these, and many other situations, a prototyping paradigm may offer the best approach.
The prototyping paradigm begins with requirements gathering. Developer and customer meet and define the overall objectives for the software, identify whatever requirements are known, and outline areas where further definition is mandatory.
The customer sees what appears to be a working version of the software, unaware that the prototype is held together with chewing gum and baling wire, and that in the rush to get it working no one has considered overall software quality or long-term maintainability. When informed that the product must be rebuilt so that high levels of quality can be maintained, the customer cries foul and demands that a few fixes be applied to make the prototype a working product.
The developer often makes implementation compromises in order to get a prototype working quickly. An inappropriate operating system or programming language may be used simply because it is available and known; an inefficient algorithm may be implemented simply to demonstrate capability.

References
Leach, R. J., Introduction to software engineering, CRC Press.
Jalote, P., A concise introduction to software engineering, Springer.
The Software Experts, Software Process Models [Online] Available at: < de/e_dta-sw-process.htm> [Accessed 7 November 2011].
Scribd Inc., Software Process Models [Online] Available at: < Software-Process-Models> [Accessed 7 November 2011].
Software Development Life Cycles: Waterfall Model, V-Model [Video Online] Available at: < [Accessed 7 November 2011].
SelectBusinessSolns, The Software Process [Video Online] Available at: < watch?v=ymbadgb6pg8> [Accessed 7 November 2011].

Recommended Reading
Jalote, P., Software Project Management in Practice, Pearson Education India.
Agarwal, B. B. and Tayal, P. S., Software Engineering, Firewall Media.
Puntambekar, A. A., Software Engineering, Technical Publications.

Self Assessment
1. The ______ process is characterised as ad hoc and occasionally even chaotic.
a. design
b. software
c. hardware
d. code
2. Basic ______ management processes are established to track cost, schedule, and functionality.
a. project
b. service
c. development
d. machine
3. The software process for both ______ activities is documented, standardised, and integrated into an organisation-wide software process.
a. organisation and development
b. management and engineering
c. system and software
d. scheduling and budgeting
4. The SEI has associated Key Process Areas with each of the ______ levels.
a. managing
b. manipulating
c. maturity
d. manufacturing
5. A Key Process Area is defined by a set of key practices that contribute to satisfying its ______.
a. goals
b. department
c. sequence
d. objectives
6. System ______ encompasses requirements gathering at the system level with a small amount of top-level design and analysis.
a. management and engineering
b. system and software
c. scheduling and budgeting
d. engineering and analysis
7. Which of the following steps requires that the design be translated into a machine-readable form?
a. Code generation
b. Task performance
c. Programming
d. Executing

8. The ______ model is the oldest and the most widely used paradigm for software engineering.
a. random sequencing
b. linear sequential
c. rapid sequencing
d. data
9. Rapid application development is a/an ______ software development process model that emphasises an extremely short development cycle.
a. incremental
b. decremented
c. linear
d. random
10. Which generation uses the RAD techniques?
a. First generation
b. Second generation
c. Fourth generation
d. Third generation

Chapter III
Software Development Life Cycle

Aim
The aim of this chapter is to:
explain software development life cycle
elucidate requirement analysis
discuss feasibility study

Objectives
The objectives of this chapter are to:
explain the concept of coding
describe the concept of testing
enlist various types of maintenance

Learning outcome
At the end of this chapter, you will be able to:
understand system analysis and design
highlight preventive maintenance
describe adaptive maintenance

3.1 Introduction to Software Development Life Cycle
A product that achieves customer satisfaction is not developed in a single step; it involves a series of steps in a software development process. These steps are needed to develop quality, error-free products. Many models are available for the software development process, but the majority of development efforts follow the model known as the software development life cycle. This life cycle consists of a number of steps, which are described in the sections below.

The software development life cycle model is also called the waterfall model and is followed by the majority of systems. The process has the following seven stages:
System requirements analysis
Feasibility study
Systems analysis and design
Code generation
Testing
Maintenance
Implementation

Let us discuss each of these stages to get an overview of the software development life cycle.

3.2 Requirement Analysis
Requirements analysis is a software engineering task that bridges the gap between system level requirements engineering and software design.

[Figure: system engineering leads into software requirements analysis, which leads into software design.]

Fig. 3.1 Analysis as a bridge between system engineering and software design

Requirements engineering activities result in:
the specification of software's operational characteristics (function, data, and behaviour)
an indication of software's interface with other system elements
constraints that software must meet.

Requirements analysis allows the software engineer (analyst) to refine the software allocation and build models of the data, functional, and behavioural domains that will be treated by software. Requirements analysis provides the software designer with a representation of information, function, and behaviour that can be translated into data, architectural, interface, and component-level designs. The requirements specification also provides the developer and the customer with the means to assess quality once the software is built. Software requirements analysis may be divided into five areas of effort:
Problem recognition
Evaluation and synthesis
Modeling
Specification
Review

3.3 Feasibility Study
After the system requirements have been analysed, the next step is to analyse the software requirements; in other words, the feasibility study is also called software requirement analysis. In this phase, the development team has to communicate with customers, analyse their requirements, and analyse the system. Analysing in this way makes it possible to produce a report of the identified problem areas. A detailed analysis of these areas yields a document or report that contains details such as the project plan or schedule, the estimated cost of developing and executing the system, target dates for each phase of delivery, and so on. This phase is the base of the software development process, since the further steps taken in the software development life cycle are based on the analysis made here, so the analysis must be done carefully.

3.4 Coding
The system design needs to be implemented to make it a workable system. This demands the coding of the design into a computer-understandable language, i.e., a programming language. This is also called the programming phase, in which the programmer converts the program specifications into computer instructions, which we refer to as programs. It is an important stage where the defined procedures are transformed into control specifications with the help of a computer language. The programs coordinate the data movements and control the entire process in a system. It is generally felt that the programs must be modular in nature. This helps in fast development, maintenance, and future changes, if required.

3.5 Testing
Before actually putting the new system into operation, a test run of the system is done to remove the bugs, if any. It is an important phase of a successful system. After codifying the whole program of the system, a test plan should be developed and run on a given set of test data. The output of the test run should match the expected results. Sometimes, system testing is considered a part of the implementation process.
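As a small illustration of running a test plan over prepared test data and comparing outputs with expected results, here is a hedged sketch using Python's unittest module; the billing function and its test values are invented for the example.

# A minimal sketch of a program test: prepared test data is fed to a
# unit, and the output must match the expected results. The billing
# function and its test values are invented for illustration.
import unittest

def total_bill(prices, tax_rate=0.10):
    """Sum item prices and add tax, rounded to two decimals."""
    subtotal = sum(prices)
    return round(subtotal * (1 + tax_rate), 2)

class TotalBillTest(unittest.TestCase):
    def test_prepared_data(self):
        # Each pair is (test data, expected result).
        cases = [
            ([10.00, 5.00], 16.50),
            ([], 0.00),
            ([19.99], 21.99),
        ]
        for prices, expected in cases:
            self.assertEqual(total_bill(prices), expected)

if __name__ == "__main__":
    unittest.main()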

Using the test data, the following test runs are carried out:
Program test
System test

Program test
When the programs have been coded, compiled, and brought to working condition, they must be individually tested with the prepared test data. Any undesirable happening must be noted and debugged (error correction).

System test
After the program test has been carried out for each of the programs of the system and the errors removed, the system test is done. At this stage, the test is done on actual data. The complete system is executed on the actual data. At each stage of the execution, the results or output of the system are analysed. During the result analysis, it may be found that the outputs do not match the expected output of the system. In such cases, the errors in the particular programs are identified, fixed, and further tested for the expected output. When it is ensured that the system is running error-free, the users are called in with their own actual data so that the system can be shown running as per their requirements.

3.6 Integration and Testing
During the integration and test stage, the software artifacts, online help, and test data are migrated from the development environment to a separate test environment. At this point, all test cases are run to verify the correctness and completeness of the software. Successful execution of the test suite confirms a robust and complete migration capability. During this stage, reference data is finalised for production use and production users are identified and linked to their appropriate roles. The final reference data (or links to reference data source files) and the production user list are compiled into the Production Initiation Plan.

[Figure: the software, online help, implementation map, and test plan enter the integration and test stage, which produces the integrated software, implementation map, online help, production initiation plan, acceptance plan, and updated project plan and schedule.]

Fig. 3.2 Integration and test stage

The outputs of the integration and test stage include:
an integrated set of software
an online help system and an implementation map
a production initiation plan that describes reference data and production users
an acceptance plan which contains the final suite of test cases
an updated project plan

3.7 Maintenance
Maintenance is necessary to eliminate errors in the system during its working life and to tune the system to any variations in its working environments. It has been seen that there are always some errors found in the systems that must be noted and corrected. It also means the review of the system from time to time. The review of the system is done for:
knowing the full capabilities of the system
knowing the required changes or the additional requirements
studying the performance.

If a major change to a system is needed, a new project may have to be set up to carry out the change. The new project will then proceed through all the above life cycle phases.

Types of maintenance
There is much more to maintenance than fixing bugs. The categories suggested by Swanson and extended by Reutter are widely accepted.

Corrective maintenance
The objective of corrective maintenance is to remove errors or bugs from the software, the procedures, the hardware, the network, the data structures, and the documentation. Corrective maintenance activities include both emergency repairs (fire fighting) and preventive (or corrective) repairs. For example, maintenance programmers are concerned with such tasks as removing residual software bugs, improving the integrity and reliability of the programs, streamlining and tightening data validation routines, correcting invalid processing and reporting, and minimising downtime.

Maintenance programmers use such traditional debugging tools as static code analysers, on-line debuggers, and dynamic debugging tools. On-line debuggers are used to trace the order in which modules are executed or to exhibit the names and data values of selected variables as they change. Dynamic debugging tools are used to identify all the possible paths to a given statement, to flag all the statements that modify or access a given data element, or to allow the programmer to determine what happens if the value of a given variable is changed.

In an ideal world, systems and software are so reliable that the need for corrective maintenance does not exist, but that ideal world does not exist and probably never will. Using tools such as those given below can significantly improve software reliability:
Database management software
Application development systems
Program generators
Fourth-generation languages
Structured techniques
Object-oriented techniques

Adaptive maintenance
The point of adaptive maintenance is to enhance the system by adding features, capabilities, and functions in response to new technology, upgrades, new requirements, or new problems. Note that adaptive maintenance is reactive. The idea is to fix the system when the general business climate, competition, growth, new technology, or new regulations make change necessary. The key to minimising adaptive maintenance costs is to separate system-dependent features.

Perfective maintenance
The point of perfective maintenance is to enhance the system by improving efficiency, reliability, functionality, or maintainability, often in response to user or system personnel requests. Corrective and adaptive maintenance are reactive: bugs are fixed as they are discovered, and an upgrade to an operating system can necessitate a change to application software. Perfective maintenance, in contrast, is proactive. The idea is to fix the system before it breaks.

Without changing how the system works or what it does, restructuring efforts are aimed at enhancing performance. The code might be converted to a more efficient language or run through an optimising compiler. Code conversion software might be used to reorganise the code or convert the logic to a more structured form. Note that the code is not rewritten, just restructured.

The point of reengineering is to change the system to make it better without affecting its functionality or external behaviour. The idea is to gradually clean up the mess by doing such things as restructuring files and databases and encasing old code in a wrapper of well-structured or object-oriented code. Reengineered software is easier to reverse engineer or to farm out to subcontractors.

The objective of reverse engineering is to extract an abstract model from the system's physical documentation and then use the model as a base for creating a functionally equivalent system. For example, an analysis of a set of source code might generate a structure chart, a set of data dictionary entries, or an entity-relationship diagram. Reverse engineering has been applied to software almost as long as software has existed. For example, Microsoft might reverse engineer its Excel spreadsheet program to produce equivalent programs to run on different computers or to create an object-oriented version of Excel.

Preventive maintenance
Although not explicitly part of the Swanson/Reutter model (except by implication), ongoing preventive maintenance is an important part of any system's standard operating procedures. The objective of preventive maintenance is to anticipate problems and correct them before they occur. Files and databases must be updated, periodically reorganised, and regularly backed up. Control totals must be reset. New software releases must be installed.

System performance monitoring is an important key to preventive maintenance. The idea is to conduct periodic audits and to run regular benchmark tests to determine if the system is continuing to perform to expectations. Both hardware and software are monitored to measure system load and system utilisation.

The information derived from performance monitoring provides an early warning of potential system problems and often initiates other forms of maintenance.

3.8 Systems Analysis and Design
This is an important phase in system development. Here, analysis is made of the design of the system that is going to be developed: the database design, the design of the chosen architecture, the functional specification design, low-level design documents, high-level design documents, and so on. Care must be taken in preparing these design documents because the next phase, namely the development phase, is based on them. A well-structured and carefully analysed design document reduces the time taken in the subsequent development and testing phases of the software development life cycle.

Summary
The software development life cycle model is also called the waterfall model, which is followed by the majority of systems.
Requirements analysis is a software engineering task that bridges the gap between system level requirements engineering and software design.
Requirements analysis allows the software engineer (analyst) to refine the software allocation and build models of the data, functional, and behavioural domains that will be treated by software. Requirements analysis provides the software designer with a representation of information, function, and behaviour that can be translated into data, architectural, interface, and component-level designs.
After analysing the system requirements, the next step is to analyse the software requirements; in other words, the feasibility study is also called software requirement analysis.
The system design needs to be implemented to make it a workable system. This demands the coding of the design into a computer-understandable language, i.e., a programming language. This is also called the programming phase, in which the programmer converts the program specifications into computer instructions, which we refer to as programs.
After codifying the whole program of the system, a test plan should be developed and run on a given set of test data. The output of the test run should match the expected results. Sometimes, system testing is considered a part of the implementation process.
When the programs have been coded, compiled, and brought to working condition, they must be individually tested with the prepared test data. Any undesirable happening must be noted and debugged (error correction).
After the program test has been carried out for each of the programs of a system and the errors removed, the system test is done. At this stage, the test is done on actual data and the complete system is executed on the actual data.
Maintenance is necessary to eliminate errors in the system during its working life and to tune the system to any variations in its working environments. It has been seen that there are always some errors found in the systems that must be noted and corrected.
Corrective maintenance activities include both emergency repairs (fire fighting) and preventive (or corrective) repairs.
Dynamic debugging tools are used to identify all the possible paths to a given statement, to flag all the statements that modify or access a given data element, or to allow the programmer to determine what happens if the value of a given variable is changed.
The point of adaptive maintenance is to enhance the system by adding features, capabilities, and functions in response to new technology, upgrades, new requirements, or new problems. Note that adaptive maintenance is reactive.
The point of perfective maintenance is to enhance the system by improving efficiency, reliability, functionality, or maintainability, often in response to user or system personnel requests.

References
Sage, P. A., Systems engineering, Wiley-IEEE.
Blanchard, S. B., System engineering management, John Wiley and Sons.
exforsys.com, Software Development Life Cycle [Online] Available at: < programming-concepts/software-development-life-cycle.html> [Accessed 7 November 2011].
Tobassam, SDLC Phases [Video Online] Available at: < mg&feature=related> [Accessed 9 October 2011].
Tobassam, SDLC Maintenance Phase [Video Online] Available at: < 3KWVMbyAGCI&feature=related> [Accessed 9 October 2011].
edulevel, Software Development Life Cycle [Video Online] Available at: < watch?v=1zfsnfp3r64> [Accessed 9 October 2011].

Recommended Reading
Martin, N. J., Systems engineering guidebook: a process for developing systems and products, CRC Press.
Langer, M. A., Analysis and design of information systems, 3rd ed., Springer.
Rajaraman, V., Analysis and design of information systems, 2nd ed., PHI Learning Pvt. Ltd.

Self Assessment
1. Which of the following is an alias of the software development life cycle model?
a. downfall
b. stream
c. river
d. waterfall
2. Requirements analysis is a software engineering task that bridges the gap between system level requirements engineering and ______.
a. software design
b. tool design
c. data model
d. debugging
3. Requirements specification gives the developer and customer the means to assess quality once ______ is built.
a. floor
b. design
c. software
d. hardware
4. Before actually implementing the new system into operation, a test run of the system is done for removing the ______, if any.
a. argument
b. flaws
c. bugs
d. subroutine
5. After codifying the whole programs of the system, a test plan should be developed and run on a given set of ______.
a. tool
b. code
c. test data
d. program
6. During the integration and test stage, the software artifacts, online help, and test data are ______ from the development environment to a separate test environment.
a. copied
b. migrated
c. removed
d. added

7. ______ is necessary to eliminate errors in the system during its working life and to tune the system to any variations in its working environments.
a. Maintenance
b. Testing
c. Designing
d. Developing
8. The objective of ______ is to remove errors or bugs from the software, the procedures, the hardware, the network, the data structures, and the documentation.
a. adaptive maintenance
b. corrective maintenance
c. preventive maintenance
d. perfective maintenance
9. ______ activities include both emergency repairs (fire fighting) and preventive (or corrective) repairs.
a. adaptive maintenance
b. corrective maintenance
c. preventive maintenance
d. perfective maintenance
10. Which of the following is true?
a. Maintenance programmers use traditional debugging tools such as static code analysers, on-line debuggers, and dynamic debugging tools.
b. Testing programmers use traditional debugging tools such as static code analysers, on-line debuggers, and dynamic debugging tools.
c. Designing programmers use traditional debugging tools such as static code analysers, on-line debuggers, and dynamic debugging tools.
d. Developing programmers use traditional debugging tools such as static code analysers, on-line debuggers, and dynamic debugging tools.

Chapter IV
Software Requirement Specification

Aim
The aim of this chapter is to:
explain the concept of waterfall model
elucidate project output in a waterfall model
discuss prototyping model

Objectives
The objectives of this chapter are to:
explain the concept of iterative model
highlight spiral model
describe role of management in software development

Learning outcome
At the end of this chapter, you will be able to:
understand the concept of problem analysis
identify informal approach
describe data flow modeling

4.1 Waterfall Model
The simplest software development life cycle model is the waterfall model, which states that the phases are organised in a linear order. A project begins with feasibility analysis. On the successful demonstration of the feasibility of the project, requirements analysis and project planning begin. The design starts after the requirements analysis is done, and coding begins after the design is done. Once the programming is completed, the code is integrated and testing is done. On successful completion of testing, the system is installed. After this, the regular operation and maintenance of the system take place. The following figure demonstrates the steps involved in the waterfall life cycle model.

[Figure: requirement analysis and planning flow into system design and specification, then coding and verification, then testing and integration, and finally maintenance.]

Fig. 4.1 Waterfall model

With the waterfall model, the activities performed in a software development project are requirements analysis, project planning, system design, detailed design, coding and unit testing, and system integration and testing. The linear ordering of activities has some important consequences. First, to clearly identify the end of one phase and the beginning of the next, some certification mechanism has to be employed at the end of each phase. This is usually done by some verification and validation. Validation means confirming that the output of a phase is consistent with its input (which is the output of the previous phase) and that the output of the phase is consistent with the overall requirements of the system.

A consequence of the need for certification is that each phase must have some defined output that can be evaluated and certified. Therefore, when the activities of a phase are completed, there should be an output product of that phase, and the goal of a phase is to produce this product. The outputs of the earlier phases are often called intermediate products or design documents. For the coding phase, the output is the code. From this point of view, the output of a software project is not just the final program, but also the documentation to use it: the requirements document, design document, project plan, test plan, and test results.

Another implication of the linear ordering of phases is that after each phase is completed and its outputs are certified, these outputs become the inputs to the next phase and should not be changed or modified. However, changing requirements cannot be avoided and must be faced, and changes performed in the output of one phase affect the later phases that might already have been performed. These changes have to be made in a controlled manner after evaluating the effect of each change on the project. This brings us to the need for configuration control or configuration management.

The certified output of a phase that is released for the next phase is called a baseline. Configuration management ensures that any changes to a baseline are made after careful review, keeping in mind the interests of all parties that are affected by it. There are two basic assumptions for justifying the linear ordering of phases in the manner proposed by the waterfall model:
For a successful project resulting in a successful product, all phases listed in the waterfall model must be performed anyway.
Any different ordering of the phases will result in a less successful software product.

Project output in a waterfall model
As we have seen, the output of a project employing the waterfall model is not just the final program along with documentation to use it. There are a number of intermediate outputs which must be produced in order to produce a successful product. The set of documents that forms the minimum that should be produced in each project is:
Requirement document
Project plan
System design document
Detailed design document
Test plan and test report
Final code
Software manuals (user manual, installation manual, etc.)
Review reports

Except for the last one, these are all outputs of the phases. In order to certify an output product of a phase before the next phase begins, reviews are often held. Reviews are necessary especially for the requirements and design phases, since other certification means are frequently not available. Reviews are formal meetings to uncover deficiencies in a product. The review reports are the outcome of these reviews.

Advantages of waterfall life cycle models
Easy to explain to the user
Stages and activities are well defined
Helps to plan and schedule the project
Verification at each stage ensures early detection of errors/misunderstandings

Limitations of the waterfall life cycle model
The waterfall model assumes that the requirements of a system can be frozen (i.e., baselined) before the design begins. This is possible for systems designed to automate an existing manual system. But for an absolutely new system, determining the requirements is difficult, as the user himself does not know the requirements. Therefore, having unchanging (or only slightly changing) requirements is unrealistic for such projects.
Freezing the requirements usually requires choosing the hardware (since it forms a part of the requirement specification). A large project might take a few years to complete. If the hardware is selected early, then due to the speed at which hardware technology is changing, it is quite likely that the final software will employ a hardware technology that is on the verge of becoming obsolete. This is clearly not desirable for such expensive software.

The waterfall model stipulates that the requirements should be completely specified before the rest of the development can proceed. In some situations it might be desirable to first develop a part of the system completely, and then later enhance the system in phases. This is often done for software products that are developed not necessarily for a client (where the client plays an important role in requirement specification), but for general marketing, in which case the requirements are likely to be determined largely by the developers.

4.2 Prototyping Model
The goal of prototyping-based development is to counter the first two limitations of the waterfall model discussed earlier. The basic idea here is that instead of freezing the requirements before design or coding can proceed, a throwaway prototype is built to understand the requirements. This prototype is developed based on the currently known requirements. Development of the prototype obviously undergoes design, coding, and testing, but each of these phases is not done very formally or thoroughly. By using this prototype, the client can get an actual feel of the system, since the interactions with the prototype can enable the client to better understand the requirements of the desired system.

Prototyping is an attractive idea for complicated and large systems for which there is no manual process or existing system to help determine the requirements. In such situations, letting the client play with the prototype provides invaluable and intangible inputs which help in determining the requirements for the system. It is also an effective method to demonstrate the feasibility of a certain approach. This might be needed for novel systems where it is not clear that constraints can be met or that algorithms can be developed to implement the requirements. The process model of the prototyping approach is shown in the figure below.

[Figure: the cycle starts with requirement gathering, proceeds through quick design, building the prototype, customer evaluation, and refining the prototype, and ends with engineering the product.]

Fig. 4.2 Prototyping model

The basic reason for the little common use of prototyping is the cost involved in this build-it-twice approach. However, some argue that prototyping need not be very costly and can actually reduce the overall development cost. The prototype is usually not a complete system and many of the details are not built into the prototype. The goal is to provide a system with overall functionality. In addition, the costs of testing and writing detailed documents are reduced. These factors help to reduce the cost of developing the prototype. On the other hand, the experience of developing the prototype will be very useful for developers when developing the final system. This experience helps to reduce the cost of development of the final system and results in a more reliable and better designed system.

Advantages of prototyping
Users are actively involved in the development.
It provides a better system to users, as users have a natural tendency to change their minds while specifying requirements, and this method of developing systems supports this user tendency.
Since in this methodology a working model of the system is provided, the users get a better understanding of the system being developed.
Errors can be detected much earlier, as the system is made side by side.
Quicker user feedback is available, leading to better solutions.

Disadvantages
Leads to an implement-and-then-repair way of building systems.
Practically, this methodology may increase the complexity of the system, as the scope of the system may expand beyond the original plans.

4.3 Iterative Model
The iterative enhancement life cycle model counters the third limitation of the waterfall model and tries to combine the benefits of both the prototyping and waterfall models. The basic idea is that the software should be developed in increments, where each increment adds some functional capability to the system until the full system is implemented. At each step, extensions and design modifications can be made. An advantage of this approach is that it can result in better testing, since testing each increment is likely to be easier than testing the entire system, as in the waterfall model. Furthermore, as in prototyping, the increments provide feedback to the client which is useful for determining the final requirements of the system.

In the first step of the iterative enhancement model, a simple initial implementation is done for a subset of the overall problem. This subset is the one that contains some of the key aspects of the problem which are easy to understand and implement, and which forms a useful and usable system. A project control list is created which contains, in order, all the tasks that must be performed to obtain the final implementation. This project control list gives an idea of how far the project is at any given step from the final system.

Each step consists of removing the next task from the list, designing the implementation for the selected task, coding and testing the implementation, and performing an analysis of the partial system obtained after this step and updating the list as a result of the analysis. These three phases are called the design phase, implementation phase, and analysis phase. The process is iterated until the project control list is empty, at which time the final implementation of the system will be available. The process involved in the iterative enhancement model is shown in the figure below; the sketch after the figure walks through the same loop in code.

[Figure: each iteration passes through design, implement, and analysis phases (design 0, implement 0, analysis 0; design 1, implement 1, analysis 1; design 2, implement 2, analysis 2).]

Fig. 4.3 The iterative enhancement model
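The iteration just described is essentially a loop over the project control list. Here is a hedged sketch of that loop; the task names and the design/implement/analyse helpers are invented placeholders, since the model prescribes only the overall cycle.

# A minimal sketch of iterative enhancement driven by a project
# control list. Task names and the three phase helpers are invented
# placeholders for illustration.
project_control_list = ["basic editing", "file management", "printing"]

def design(task):
    print(f"design the implementation for: {task}")

def implement(task):
    print(f"code and test: {task}")

def analyse(task):
    """Analyse the partial system; may add follow-up tasks, e.g. the
    redesign of defective components found during analysis."""
    follow_ups = []
    if task == "printing":
        follow_ups.append("redesign printing layout")
    return follow_ups

while project_control_list:             # iterate until the list is empty
    task = project_control_list.pop(0)  # remove the next task
    design(task)
    implement(task)
    project_control_list.extend(analyse(task))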

The project control list guides the iteration steps and keeps track of all tasks that must be done. The tasks in the list can include the redesign of defective components found during analysis. Each entry in that list is a task that should be performed in one step of the iterative enhancement process, and should be simple enough to be completely understood. Selecting tasks in this manner will minimise the chances of errors and reduce the redesign work.

4.4 Spiral Model
The spiral model is a recent model that has been proposed by Boehm. As the name suggests, the activities in this model can be organised like a spiral. The spiral has many cycles. The radial dimension represents the cumulative cost incurred in accomplishing the steps done so far, and the angular dimension represents the progress made in completing each cycle of the spiral. The structure of the spiral model is shown in the figure given below.

Each cycle in the spiral begins with the identification of objectives for that cycle, the different alternatives that are possible for achieving the objectives, and the constraints imposed. The next step in the spiral life cycle model is to evaluate these different alternatives based on the objectives and constraints. This will also involve identifying the uncertainties and risks involved. The next step is to develop strategies that resolve the uncertainties and risks. This step may involve activities such as benchmarking, simulation, and prototyping. Next, the software is developed, keeping in mind the risks. Finally, the next stage is planned.

[Figure: the four quadrants of the spiral: determine objectives, alternatives, and constraints; evaluate alternatives and identify and resolve risks (risk analysis, prototypes 1-3, operational prototype, simulation models, benchmarks); develop and verify the next-level product (concept of operation, software requirements, requirements validation, design validation and verification, detailed design, code, unit test, integration and test, acceptance test, implementation); and plan next phases (requirements plan, life-cycle plan, development plan, integration and test plan). The radial dimension shows cumulative cost and the angular dimension shows progress through the steps.]

Fig. 4.4 Spiral life cycle model

The next step is to determine the remaining risks. For example, if performance or user-interface risks are considered more important than the program development risks, the next step may be evolutionary development that involves developing a more detailed prototype for resolving the risks. On the other hand, if the program development risks dominate and previous prototypes have resolved all the user-interface and performance risks, the next step will follow the basic waterfall approach. The risk-driven nature of the spiral model allows it to accommodate any mixture of a specification-oriented, prototype-oriented, simulation-oriented, or some other approach. An important feature of the model is that each cycle of the spiral is completed by a review, which covers all the products developed during that cycle, including plans for the next cycle. The spiral model works for development as well as enhancement projects.

Spiral model description
The development spiral consists of four quadrants, as shown in the figure above. Although a spiral, as depicted, is oriented toward software development, the concept is equally applicable to systems, hardware, and training, for example. To better understand the scope of each spiral development quadrant, let's briefly address each one.

Quadrant 1: Determine objectives, alternatives, and constraints
Activities performed in this quadrant include:
Establish an understanding of the system or product objectives, namely performance, functionality, and ability to accommodate change.
Investigate implementation alternatives, namely procure, design, reuse, and procure/modify.
Investigate constraints imposed on the alternatives, namely technology, cost, schedule, support, and risk.
Once the system or product's objectives, alternatives, and constraints are understood, Quadrant 2 (Evaluate alternatives, identify, and resolve risks) is performed.

Quadrant 2: Evaluate alternatives, identify, and resolve risks
Engineering activities performed in this quadrant select an alternative approach that best satisfies technical, technology, cost, schedule, support, and risk constraints. The focus here is on risk mitigation. Each alternative is investigated and prototyped to reduce the risk associated with the development decisions. Boehm describes these activities as follows:
Prototyping
Simulation
Benchmarking
Reference checking
Administering user questionnaires
Analytic modeling
Other risk resolution techniques

The outcome of the evaluation determines the next course of action. If critical operational and/or technical issues (COIs/CTIs) such as performance and interoperability (i.e., external and internal) risks remain, more detailed prototyping may need to be added before progressing to the next quadrant. Dr. Boehm notes that if the alternative chosen is 'operationally useful and robust enough to serve as a low-risk base for future product evolution, the subsequent risk-driven steps would be the evolving series of evolutionary prototypes going toward the right (hand side of the graphic)... the option of writing specifications would be addressed but not exercised.' This brings us to Quadrant 3.

Quadrant 3: Develop, verify, next-level product
If a determination is made that the previous prototyping efforts have resolved the COIs/CTIs, activities to develop, verify, next-level product are performed. As a result, the basic waterfall approach may be employed, meaning concept of operations, design, development, integration, and test of the next system or product iteration. If appropriate, incremental development approaches may also be applicable.

Quadrant 4: Plan next phases
The spiral development model has one characteristic that is common to all models, which is the need for advanced technical planning and multidisciplinary reviews at critical staging or control points. Each cycle of the model culminates with a technical review that assesses the status, progress, maturity, merits, and risk of development efforts to date; resolves critical operational and/or technical issues (COIs/CTIs); and reviews plans and identifies COIs/CTIs to be resolved for the next iteration of the spiral. Subsequent implementations of the spiral may involve lower level spirals that follow the same quadrant paths and decision considerations.

4.5 Role of Management in Software Development
The management of software development is heavily dependent on four factors: people, product, process, and project. The order of dependency is as shown in the figure given below.

[Figure: the dependency order runs from 1. people to 2. product to 3. process to 4. project.]

Fig. 4.5 Factors of management dependency

Software development is a people-centric activity. Hence, the success of the project rests on the shoulders of the people who are involved in the development.

The people
Software development requires good managers: managers who can understand the psychology of people and provide good leadership. A good manager cannot ensure the success of the project, but can increase the probability of success. The areas to be given priority are proper selection, training, compensation, career development, work culture, etc. Managers face challenges; it requires mental toughness to endure inner pain. We need to plan for the best, be prepared for the worst, and expect surprises, but continue to move forward anyway. Charles Maurice once rightly said, 'I am more afraid of an army of one hundred sheep led by a lion than an army of one hundred lions led by a sheep.' Hence, manager selection is most crucial and critical. With a good manager, the project is in safe hands. It is the responsibility of a manager to manage, motivate, encourage, guide, and control the people of his/her team.

The product
The objectives and scope of work should be defined clearly to understand the requirements. Alternative solutions should be discussed. This may help the managers to select the best approach within the constraints imposed by delivery deadlines, budgetary restrictions, personnel availability, technical interfaces, etc. Without well-defined requirements, it may be impossible to define reasonable estimates of the cost, development time, and schedule for the project.

The process
The process is the way in which we produce software. It provides the framework from which a comprehensive plan for software development can be established. If the process is weak, the end product will undoubtedly suffer. There are many life cycle models and process improvement models. Depending on the type of project, a suitable model is to be selected. Nowadays, the CMM (Capability Maturity Model) has become almost a standard for the process framework. The process comes after people and product in priority; however, it plays a very critical role in the success of the project.

The project
Proper planning is required to monitor the status of development and to control the complexity. Most projects come in late, with cost overruns of more than 100%. In order to manage a successful project, we must understand what can go wrong and how to do it right. We should define concrete requirements, although this is very difficult, and freeze these requirements. Changes should not be incorporated, to avoid software surprises. Software surprises are always risky and we should minimise them. All four factors, people, product, process, and project, are important for the success of the project.

4.6 Problem Analysis
The basic aim of problem analysis is to obtain a clear understanding of the needs of the clients and the users, and what exactly is desired from the software. Frequently the client and the users do not understand or know all their needs, because the potential of the new system is often not fully appreciated. The analysts have to ensure that the real needs of the clients and the users are uncovered, even if they don't know them clearly. That is, the analysts are not just collecting and organising information about the client's organisation and its processes; they also act as consultants who play an active role in helping the clients and users identify their needs.

Informal approach
The informal approach to analysis is one where no defined methodology is used. As in any approach, the information about the system is obtained by interaction with the client, end users, questionnaires, study of existing documents, brainstorming, etc. However, with this approach, no formal model is built of the system. The problem and the system model are essentially built in the minds of the analysts, or the analysts may use some informal notation for this purpose, and are directly translated from the minds of the analysts to the SRS.

Data flow modeling
Data-flow based modeling, often referred to as the structured analysis technique, uses function-based decomposition while modeling the problem. It focuses on the functions performed in the problem domain and the data consumed and produced by these functions. It is a top-down refinement approach, which was originally called structured analysis and specification, and was proposed for producing the specifications. However, we will limit our attention to the analysis aspect of the approach. Before we describe the approach, let us describe the data flow diagram and the data dictionary, on which the technique relies heavily.

An example
A restaurant owner feels that some amount of automation will help make her business more efficient. She also believes that an automated system might be an added attraction for the customers. So she wants to automate the operation of her restaurant as much as possible. Here we will perform the analysis for this problem. Details regarding interviews, questionnaires, or how the information was extracted are not described. First let us identify the different parties involved.
Client: The restaurant owner
Potential users: Waiters, cash register operator

The context diagram for the restaurant is shown in the figure below. The inputs and outputs of the restaurant are shown in this diagram. However, no details about the functioning of the restaurant are given here. Using this as a starting point, a logical DFD of the physical system can be drawn (the physical DFD was avoided here, as the logical DFD is similar to the physical one and there were no special names for the data or the processes in the physical system). Observing the operation of the restaurant and interviewing the owner were the basic means of collecting raw information for this DFD.

Now we must draw a DFD that models the new system to be built. After many meetings and discussions with the restaurant owner, the following goals for the new system were established:
Automate much of the order processing and billing.
Automate accounting.
Make supply ordering more accurate, so that leftovers at the end of the day are minimised and the orders that cannot be satisfied due to non-availability are also minimised. This was earlier being done without a careful analysis of sales.

[Fig. 4.6 Context diagram for the restaurant, showing the external entities (customer and supplier) and the flows between them and the restaurant: menu, order, served meals, final bill, payment, receipt, order for supplies, supplies, supplies information and sale information]

The owner also suspects that the staff might be stealing/eating some food/supplies. She wants the new system to help detect and reduce this. The owner would also like to have statistics about the sales of different items.

4.7 Requirement Specification
The final output is the software requirements specification document (SRS). For smaller problems or problems that can easily be comprehended, the specification activity might come after the entire analysis is complete. However, it is more likely that problem analysis and specification are done concurrently. An analyst typically will analyse some parts of the problem and then write the requirements for that part. In practice, problem analysis and requirements specification activities overlap, with movement back and forth between the two activities. However, as the information for specification comes from analysis, we can conceptually view the specification activity as following the analysis activity.

The first question that arises is: if formal modeling is done during analysis, why are the outputs of modeling, the structures that are built (for example, DFDs and the DD, or object diagrams), not treated as an SRS? The main reason is that modeling generally focuses on the problem structure, not its external behaviour. Consequently, things like user interfaces are rarely modelled, whereas they frequently form a major component of the SRS. Similarly, for ease of modeling, minor issues like erroneous situations (e.g., errors in output) are rarely modelled properly, whereas in an SRS, behaviour under such situations also has to be specified. Similarly, performance constraints, design constraints, standards compliance, recovery, etc., are not included in the model, but must be specified clearly in the SRS because the designer must know about these to properly design the system. It should therefore be clear that the outputs of a model cannot form a desirable SRS.

For these reasons, the transition from analysis to specification should also not be expected to be straightforward, even if some formal modeling is used during analysis. It is not the case that in specification the structures of modeling are just specified in a more formal manner. A good SRS needs to specify many things, some of which are not satisfactorily handled during modeling. Furthermore, sometimes the structures produced during modeling are not amenable to translation into an external behaviour specification (which is what is to be specified in an SRS). For example, the object diagram produced during an OO analysis is of limited use when specifying the external behaviour of the desired system.

Essentially, what passes from the requirements analysis activity to the specification activity is the knowledge acquired about the system. The modeling is essentially a tool to help obtain a thorough and complete knowledge about the proposed system. The SRS is written based on the knowledge acquired during analysis. As converting knowledge into a structured document is not straightforward, specification itself is a major task, which is relatively independent. A consequence of this is that it is relatively less important to model completely, compared to specifying completely. As the primary objective of analysis is problem understanding, while the basic objective of the requirements phase is to produce the SRS, the complete and detailed analysis structures are not critical. In fact, it is possible to develop the SRS without using formal modeling techniques. The basic aim of the structures used in modeling is to help in knowledge representation and problem partitioning; the structures are not an end in themselves. With this in mind, let us start our discussion on requirements specification. We start by discussing the desirable characteristics of an SRS.

Characteristics of an SRS
To properly satisfy the basic goals, an SRS should have certain properties and should contain different types of requirements. In this section, we discuss some of the desirable characteristics of an SRS and the components of an SRS. A good SRS is:
Correct
Complete
Unambiguous
Verifiable
Consistent
Ranked for importance and/or stability
Modifiable
Traceable

An SRS is correct if every requirement included in the SRS represents something required in the final system. An SRS is complete if everything the software is supposed to do and the responses of the software to all classes of input data are specified in the SRS. Correctness and completeness go hand-in-hand; whereas correctness ensures that what is specified is done correctly, completeness ensures that everything is indeed specified. Correctness is an easier property to establish than completeness, as it basically involves examining each requirement to make sure it represents a user requirement.
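For instance (a hypothetical illustration, not from any standard), consider a requirement for the restaurant system such as "the system shall produce a bill for every completed order." Its correctness can be checked by examining this one statement and confirming with the owner that it reflects a real need. However, that check says nothing about whether other needed requirements, such as the handling of a cancelled order, are present in the SRS at all.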

Completeness, on the other hand, is the most difficult property to establish; to ensure completeness, one has to detect the absence of specifications, and absence is much harder to ascertain than determining that what is present has some property.

An SRS is unambiguous if and only if every requirement stated has one and only one interpretation. Requirements are often written in natural language, which is inherently ambiguous. If the requirements are specified in a natural language, the SRS writer has to be especially careful to ensure that there are no ambiguities. One way to avoid ambiguities is to use some formal requirements specification language. The major disadvantages of using formal languages are the large effort required to write an SRS, the high cost of doing so, and the increased difficulty of reading and understanding formally stated requirements (particularly by the users and clients).

An SRS is verifiable if and only if every stated requirement is verifiable. A requirement is verifiable if there exists some cost-effective process that can check whether the final software meets that requirement. This implies that the requirements should have as little subjectivity as possible, because subjective requirements are difficult to verify. Unambiguity is essential for verifiability. As verification of requirements is often done through reviews, it also implies that an SRS should be understandable, at least by the developer, the client, and the users. Understandability is clearly extremely important, as one of the goals of the requirements phase is to produce a document on which the client, the users, and the developers can agree.

An SRS is consistent if there is no requirement that conflicts with another. Terminology can cause inconsistencies; for example, different requirements may use different terms to refer to the same object. There may also be logical or temporal conflicts between requirements that cause inconsistencies. This occurs if the SRS contains two or more requirements whose logical or temporal characteristics cannot be satisfied together by any software system. For example, suppose a requirement states that an event e is to occur before another event f. But then another set of requirements states (directly or indirectly by transitivity) that event f should occur before event e. Inconsistencies in an SRS can be a reflection of some major problems.

Generally, all the requirements for software are not of equal importance. Some are critical, others are important but not critical, and there are some which are desirable but not very important. Similarly, some requirements are core requirements which are not likely to change as time passes, while others are more dependent on time. An SRS is ranked for importance and/or stability if for each requirement the importance and the stability of the requirement are indicated. Stability of a requirement reflects the chances of it changing in the future. It can be expressed in terms of the expected change volume.

Writing an SRS is an iterative process. Even when the requirements of a system are specified, they are later modified as the needs of the client change. Hence an SRS should be easy to modify. An SRS is modifiable if its structure and style are such that any necessary change can be made easily while preserving completeness and consistency. Presence of redundancy is a major hindrance to modifiability, as it can easily lead to errors.
For example, assume that a requirement is stated in two places and that the requirement later needs to be changed. If only one occurrence of the requirement is modified, the resulting SRS will be inconsistent. An SRS is traceable if the origin of each of its requirements is clear and if it facilitates the referencing of each requirement in future development. Forward traceability means that each requirement should be traceable to some design and code elements. Backward traceability requires that it be possible to trace design and code elements to the requirements they support.

Traceability aids verification and validation. Of all these characteristics, completeness is perhaps the most important (and hardest to ensure). One of the most common problems in requirements specification is that some of the requirements of the client are not specified. This necessitates additions and modifications to the requirements later in the development cycle, which are often expensive to incorporate. Incompleteness is also a major source of disagreement between the client and the supplier. The importance of having complete requirements cannot be overemphasised, and knowing what an SRS should specify will help in completely specifying the requirements. Here we describe some of the system properties that an SRS should specify. The basic issues an SRS must address are:
Functionality
Performance
Design constraints imposed on an implementation
External interfaces

Conceptually, any SRS should have these components. If the traditional approach to requirement analysis is being followed, then the SRS might even have portions corresponding to these. However, functional requirements might be specified indirectly by specifying the services on the objects or by specifying the use cases.
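To make these components concrete, here is a hypothetical fragment of an SRS for the restaurant system discussed earlier (the numbering and wording are purely illustrative):

R1 (Functionality): The system shall produce a bill for an order, listing each item served with its price and the total.
R2 (Performance): The bill for an order shall be produced within two seconds of the order being closed.
R3 (Design constraint): All sales and supplies data shall be stored in the database already used for accounting.
R4 (External interface): The cash register operator shall interact with the system through a menu-driven screen interface.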

Summary
The simplest software development life cycle model is the waterfall model, which states that the phases are organised in a linear order.
The output of a project employing the waterfall model is not just the final program along with the documentation to use it. There are a number of intermediate outputs which must be produced in order to produce a successful product.
The waterfall model stipulates that the requirements should be completely specified before the rest of the development can proceed.
In some situations it might be desirable to first develop a part of the system completely, and then later enhance the system in phases. This is often done for software products that are developed not necessarily for a client (where the client plays an important role in requirement specification), but for general marketing, in which case the requirements are likely to be determined largely by the developers.
The goal of prototyping-based development is to counter the first two limitations of the waterfall model discussed earlier. The basic idea here is that instead of freezing the requirements before design or coding can proceed, a throwaway prototype is built to understand the requirements. This prototype is developed based on the currently known requirements.
The iterative enhancement life cycle model counters the third limitation of the waterfall model and tries to combine the benefits of both prototyping and the waterfall model.
The spiral has many cycles. The radial dimension represents the cumulative cost incurred in accomplishing the steps done so far, and the angular dimension represents the progress made in completing each cycle of the spiral.
Engineering activities performed in this quadrant select an alternative approach that best satisfies technical, technology, cost, schedule, support, and risk constraints. The focus here is on risk mitigation.
The spiral development model has one characteristic that is common to all models: the need for advanced technical planning and multidisciplinary reviews at critical staging or control points.
Software development is a people-centric activity. Hence, the success of the project rests on the shoulders of the people who are involved in the development.
The basic aim of problem analysis is to obtain a clear understanding of the needs of the clients and the users: what exactly is desired from the software.
The informal approach to analysis is one where no defined methodology is used.
Data-flow based modeling, often referred to as the structured analysis technique, uses function-based decomposition while modeling the problem. It focuses on the functions performed in the problem domain and the data consumed and produced by these functions.
The final output is the software requirements specification document (SRS). For smaller problems or problems that can easily be comprehended, the specification activity might come after the entire analysis is complete. However, it is more likely that problem analysis and specification are done concurrently.

References
Wiegers, K. E., Software Requirements, 2nd ed., O'Reilly Media, Inc.
Lauesen, S., Software requirements: styles and techniques, Addison-Wesley.
Freetutes.com, Waterfall Software Development Life Cycle Model [Online] Available at: <http://www.freetutes.com/systemanalysis/sa2-waterfall-software-life-cycle.html> [Accessed 7 October 2011].
OneStopTesting.com, Iterative Model [Online] Available at: < iterative-model.asp> [Accessed 7 November 2011].
LesChambers1, Specifying Software Requirements [Video Online] Available at: <http://www.youtube.com/watch?v=sbfbq5dmxd8> [Accessed 7 October 2011].

Prof. Bellur, U., Requirements Engineering / Specification [Video Online] Available at: <http://www.youtube.com/watch?v=wer6mwquply&feature=related> [Accessed 7 November 2011].

Recommended Reading
Kotonya, G. and Sommerville, I., Requirements engineering: processes and techniques, J. Wiley.
Sommerville, I. and Sawyer, P., Requirements engineering: a good practice guide, John Wiley & Sons.
Young, R. R., The requirements engineering handbook, Artech House.

Self Assessment
1. The simplest software development life cycle model is the waterfall model, which states that the phases are organised in a ________ order.
a. linear
b. non-linear
c. random
d. casual

2. The ________ enhancement life cycle model counters the limitation of the waterfall model and tries to combine the benefits of both prototyping and the waterfall model.
a. Iterative
b. Data modelling
c. SDLC
d. RAP

3. Who proposed the Spiral model?
a. Bell
b. Newton
c. Boehm
d. Philip Abelson

4. Which of the following is true?
a. The spiral has 2 cycles
b. The spiral has a square shape
c. The spiral has many cycles
d. The spiral is a pentagon

5. The management of ________ is heavily dependent on four factors which are people, product, process, and project.
a. software tool
b. software design
c. software development
d. software coding

6. It is the responsibility of a ________ to manage, motivate, encourage, guide and control the people of his/her team.
a. organiser
b. developer
c. designer
d. manager

7. Process is the way in which we produce ________.
a. hardware
b. software
c. design
d. code

8. In order to manage a ________, we must understand what can go wrong and how to do it right.
a. successful project
b. coding
c. people
d. product

9. The basic aim of ________ is to obtain a clear understanding of the needs of the clients and the users, what exactly is desired from the software.
a. data model
b. software
c. informal approach
d. problem analysis

10. The ________ to analysis is one where no defined methodology is used.
a. data model
b. software
c. informal approach
d. problem analysis

Chapter V
System Design

Aim
The aim of this chapter is to:
explain the concept of problem partitioning
elucidate the concept of abstraction
discuss top-down design

Objectives
The objectives of this chapter are to:
explain the structured approach
compare the function-oriented and object-oriented approaches
enlist design specifications

Learning outcome
At the end of this chapter, you will be able to:
understand the concept of design verification
discuss a sample checklist for design verification
describe bottom-up design

5.1 Problem Partitioning
When solving a small problem, the entire problem can be tackled at once. The complexity of large problems and the limitations of human minds do not allow large problems to be treated as huge monoliths. For solving larger problems, the basic principle is the time-tested principle of divide and conquer. Clearly, dividing in such a manner that all the divisions have to be conquered together is not the intent of this wisdom. This principle, if elaborated, would mean "divide into smaller pieces, so that each piece can be conquered separately."

For software design, therefore, the goal is to divide the problem into manageably small pieces that can be solved separately. It is this restriction of being able to solve each part separately that makes dividing into pieces a complex task, and that many methodologies for system design aim to address. The basic rationale behind this strategy is the belief that if the pieces of a problem are solvable separately, the cost of solving the entire problem is more than the sum of the costs of solving all the pieces. However, the different pieces cannot be entirely independent of each other, as together they form the system. The different pieces have to cooperate and communicate to solve the larger problem. This communication adds complexity, which arises due to partitioning and may not have existed in the original problem. As the number of components increases, the cost of partitioning, together with the cost of this added complexity, may become more than the savings achieved by partitioning. It is at this point that no further partitioning needs to be done. The designer has to make the judgment about when to stop partitioning.

As discussed earlier, two of the most important quality criteria for software design are simplicity and understandability. It can be argued that maintenance is minimised if each part in the system can be easily related to the application and each piece can be modified separately. If a piece can be modified separately, we call it independent of other pieces. If module A is independent of module B, then we can modify A without introducing any unanticipated side effects in B. Total independence of the modules of one system is not possible, but the design process should support as much independence as possible between modules. Dependence between modules in a software system is one of the reasons for high maintenance costs. Clearly, proper partitioning will make the system easier to maintain by making the design easier to understand. Problem partitioning also aids design verification.

Problem partitioning, which is essential for solving a complex problem, leads to hierarchies in the design. That is, the design produced by using problem partitioning can be represented as a hierarchy of components. The relationship between the elements in this hierarchy can vary depending on the method used. For example, the most common is the whole-part-of relationship. In this, the system consists of some parts; each part consists of subparts, and so on. This relationship can be naturally represented as a hierarchical structure between the various system parts. In general, a hierarchical structure makes it much easier to comprehend a complex system. Due to this, all design methodologies aim to produce a design that employs hierarchical structures.

5.2 Abstraction
Abstraction is a very powerful concept that is used in all engineering disciplines.
It is a tool that permits a designer to consider a component at an abstract level without worrying about the details of the implementation of the component. Any component or system provides some services to its environment. An abstraction of a component describes the external behaviour of that component without bothering with the internal details that produce the behaviour. Presumably, the abstract definition of a component is much simpler than the component itself.
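As a minimal sketch in C (the names here are illustrative, not from the text), the abstraction of a sorting component can be just its specification, against which clients are designed:

/* Abstraction of a sorting component: sorts a[0..n-1] into
   non-decreasing order. This specification is all that a
   client of the component needs to know. */
void sort(int a[], int n);

Whether sort is implemented with quicksort, mergesort, or anything else is an internal detail hidden behind this abstract definition, which is indeed much simpler than the component itself.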

Abstraction is an indispensable part of the design process and is essential for problem partitioning. Partitioning essentially is the exercise of determining the components of a system. However, these components are not isolated from each other; they interact with each other, and the designer has to specify how a component interacts with other components. To decide how a component interacts with other components, the designer has to know, at the very least, the external behaviour of the other components. If the designer has to understand the details of the other components to determine their external behaviour, we have defeated the purpose of partitioning, which is to isolate a component from the others. To allow the designer to concentrate on one component at a time, abstractions of the other components are used.

Abstraction is used for existing components as well as components that are being designed. Abstraction of existing components plays an important role in the maintenance phase. To modify a system, the first step is understanding what the system does and how. The process of comprehending an existing system involves identifying the abstractions of subsystems and components from the details of their implementations. Using these abstractions, the behaviour of the entire system can be understood. This also helps determine how modifying a component affects the system.

During the design process, abstractions are used in the reverse manner from the process of understanding a system. During design, the components do not exist, and in the design the designer specifies only the abstract specifications of the different components. The basic goal of system design is to specify the modules in a system and their abstractions. Once the different modules are specified, during the detailed design the designer can concentrate on one module at a time. The task in detailed design and implementation is essentially to implement the modules so that the abstract specifications of each module are satisfied.

There are two common abstraction mechanisms for software systems: functional abstraction and data abstraction. In functional abstraction, a module is specified by the function it performs. For example, a module to compute the log of a value can be abstractly represented by the function log. Similarly, a module to sort an input array can be represented by the specification of sorting. Functional abstraction is the basis of partitioning in function-oriented approaches. That is, when the problem is being partitioned, the overall transformation function for the system is partitioned into smaller functions that comprise the system function. The decomposition of the system is in terms of functional modules.

The second unit for abstraction is data abstraction. Any entity in the real world provides some services to the environment to which it belongs. Often the entities provide some fixed predefined services. The case of data entities is similar. Certain operations are required from a data object, depending on the object and the environment in which it is used. Data abstraction supports this view. Data is not treated simply as objects, but as objects with some predefined operations on them. The operations defined on a data object are the only operations that can be performed on those objects. From outside an object, the internals of the object are hidden; only the operations on the object are visible. In using this abstraction, a system is viewed as a set of objects providing some services.
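A hedged sketch in C (the type and function names are illustrative) shows how such an object with predefined operations can be exposed while its internals stay hidden:

/* stack.h: clients see only the operations, not the representation */
typedef struct stack Stack;   /* incomplete type: internals hidden */

Stack *stack_create(void);
void   stack_push(Stack *s, int value);
int    stack_pop(Stack *s);
void   stack_destroy(Stack *s);

Whether the Stack is implemented as an array or a linked list is decided inside stack.c and can be changed without affecting any client, since clients can only use the operations declared above.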
Hence, the decomposition of the system is done with respect to the objects the system contains.

5.3 Top-Down and Bottom-Up Design
A system consists of components, which have components of their own; indeed a system is a hierarchy of components. The highest-level component corresponds to the total system. To design such a hierarchy there are two possible approaches: top-down and bottom-up. The top-down approach starts from the highest-level component of the hierarchy and proceeds through to lower levels. By contrast, a bottom-up approach starts with the lowest-level component of the hierarchy and proceeds through progressively higher levels to the top-level component.

A top-down design approach starts by identifying the major components of the system, decomposing them into their lower-level components and iterating until the desired level of detail is achieved. Top-down design methods often result in some form of stepwise refinement: starting from an abstract design, in each step the design is refined to a more concrete level, until we reach a level where no more refinement is needed and the design can be implemented directly. The top-down approach has been promulgated by many researchers and has been found to be extremely useful for design. Most design methodologies are based on the top-down approach.

A bottom-up design approach starts with designing the most basic or primitive components and proceeds to higher-level components that use these lower-level components. Bottom-up methods work with layers of abstraction. Starting from the very bottom, operations that provide a layer of abstraction are implemented. The operations of this layer are then used to implement more powerful operations and a still higher layer of abstraction, until the stage is reached where the operations supported by the layer are those desired by the system.

A top-down approach is suitable only if the specifications of the system are clearly known and the system is being developed from scratch. However, if a system is to be built from an existing system, a bottom-up approach is more suitable, as it starts from some existing components. So, for example, if an iterative enhancement type of process is being followed, the bottom-up approach could be more suitable in later iterations (in the first iteration a top-down approach can be used).

Pure top-down or pure bottom-up approaches are often not practical. For a bottom-up approach to be successful, we must have a good notion of the top towards which the design should be heading. Without a good idea about the operations needed at the higher layers, it is difficult to determine what operations the current layer should support. Top-down approaches require some idea about the feasibility of the components specified during design. The components specified during design should be implementable, which requires some idea about the feasibility of the lower-level parts of a component. A common approach to combining the two is to provide a layer of abstraction for the application domain of interest through libraries of functions, which contain the functions of interest to the application domain, and then to use a top-down approach to determine the modules in the system, assuming that the abstract machine available for implementing the system provides the operations supported by the abstraction layer. This approach is frequently used for developing systems. It can even be claimed that it is almost universally used these days, as most developments now make use of the layer of abstraction supported in a system consisting of the library functions provided by operating systems, programming languages, and special-purpose tools.

5.4 Structured Approach
Creating a software system design is a major concern of the design phase. Many design techniques have been proposed over the years to provide some discipline in handling the complexity of designing large systems. The aim of design methodologies is not to reduce the process of design to a sequence of mechanical steps but to provide guidelines to aid the designer during the design process. Here we describe the structured design methodology for developing system designs.
Structured design methodology (SDM) views every software system as having some inputs that are converted into the desired outputs by the software system. The software is viewed as a transformation function that transforms the given inputs into the desired outputs, and the central problem of designing software systems is considered to be properly designing this transformation function. Due to this view of software, the structured design methodology is primarily function-oriented and relies heavily on functional abstraction and functional decomposition.

The concept of the structure of a program lies at the heart of the structured design method. During design, structured design methodology aims to control and influence the structure of the final program. The aim is to design a system so that programs implementing the design would have a hierarchical structure, with functionally cohesive modules and as few interconnections between modules as possible.

In properly designed systems it is often the case that a module with subordinates does not actually perform much computation. The bulk of the actual computation is performed by its subordinates, and the module itself largely coordinates the data flow between the subordinates to get the computation done. The subordinates in turn can get the bulk of their work done by their subordinates, until the atomic modules, which have no subordinates, are reached. Factoring is the process of decomposing a module so that the bulk of its work is done by its subordinates. A system is said to be completely factored if all the actual processing is accomplished by bottom-level atomic modules and if non-atomic modules largely perform the jobs of control and coordination. SDM attempts to achieve a structure that is close to being completely factored.

The overall strategy is to identify the input and output streams and the primary transformations that have to be performed to produce the output. High-level modules are then created to perform these major activities, which are later refined. There are four major steps in this strategy:
Restate the problem as a data flow diagram
Identify the input and output data elements
First-level factoring
Factoring of input, output, and transform branches

We will now discuss each of these steps in more detail. The design of the case study using structured design will be given later. For illustrating each step of the methodology as we discuss it, we consider the following problem: there is a text file containing words separated by blanks or new lines. We have to design a software system to determine the number of unique words in the file.

5.5 Function v/s Object Oriented Approach
The following are some of the important differences between function-oriented and object-oriented design.
Unlike function-oriented design methods, in OOD the basic abstractions are not real-world functions such as sort, display, track, etc., but real-world entities such as employee, picture, machine, radar system, etc. For example, in OOD an employee payroll software is not developed by designing functions such as update-employee-record, get-employee-address, etc., but by designing objects such as employee, department, etc.
In OOD, state information is not represented in a centralised shared memory but is distributed among the objects of the system. For example, while developing an employee payroll system, the employee data such as the names of the employees, their code numbers, basic salaries, etc. are usually implemented as global data in a traditional programming system, whereas in an object-oriented system these data are distributed among the different employee objects of the system. Objects communicate by passing messages. Therefore, one object may discover the state information of another object by interrogating it. Of course, somewhere or other the real-world functions must be implemented.
Function-oriented techniques such as SA/SD group functions together if, as a group, they constitute a higher-level function. On the other hand, object-oriented techniques group functions together on the basis of the data they operate on. To illustrate the differences between object-oriented and function-oriented design approaches, an example can be considered.
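A minimal sketch in C, using the unique-word counting problem stated earlier (all names are illustrative), contrasts the two styles of grouping:

#include <stdio.h>

/* Function-oriented decomposition: the overall transformation is
   partitioned into sub-functions through which the data flows. */
int  read_words(FILE *f, char words[][32], int max_words);  /* input     */
int  count_unique(char words[][32], int n);                 /* transform */
void print_count(int count);                                /* output    */

/* Object-oriented style of grouping: the same operations are grouped
   around the data entity they operate on (a word list). */
typedef struct wordlist WordList;
WordList *wordlist_from_file(FILE *f);
int       wordlist_unique_count(const WordList *wl);
void      wordlist_free(WordList *wl);

In the first grouping, the functions are related because together they constitute the higher-level function "count the unique words"; in the second, they are related because they all operate on the WordList data.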

5.6 Design Specification and Verification
Design specification and verification are explained below.

Specification
Using some design rules or methodology, a conceptual design of the system can be produced in terms of a structure chart. As seen earlier, in a structure chart each module is represented by a box with a name. The functionality of the module is essentially communicated by the name of the box, and the interface is communicated by the data items labelling the arrows. This is alright while the designer is designing, but inadequate when the design is to be communicated. To avoid these problems, a design specification should define the major data structures, the modules and their specifications, and the design decisions.

During system design, the major data structures for the software are identified; without these, the system modules cannot be meaningfully defined during design. In the design specification, a formal definition of these data structures should be given.

Module specification is the major part of the system design specification. All modules in the system should be identified when the system design is complete, and these modules should be specified in the document. During system design only the module specification is obtained, because the internal details of the modules are defined later. To specify a module, the design document must specify:
the interface of the module (all data items, their types, and whether they are for input and/or output),
the abstract behaviour of the module (what the module does) by specifying the module's functionality or its input/output behaviour, and
all other modules used by the module being specified; this information is quite useful in maintaining and understanding the design.

Hence, a design specification will necessarily contain specifications of the major data structures and modules in the system. After a design is approved (using some verification mechanism), the modules will have to be implemented in the target language. This requires that the module headers for the target language first be created from the design. This translation of the design to the target language can introduce errors if it is done manually. To eliminate these translation errors, if the target language is known (as is generally the case after the requirements have been specified), it is better to have a design specification language whose module specifications can be used almost directly in programming. This not only minimises the translation errors that may occur, but also reduces the effort required for translating the design into programs. It also adds an incentive for designers to properly specify their design, as the design is no longer a mere document that will be thrown away after review; it will now be used directly in coding. In the case study, a design specification language close to C has been used. From the design, the module headers for C can easily be created with some simple editing.

To aid the comprehensibility of the design, all major design decisions made by the designers during the design process should be explained explicitly. The choices that were available and the reasons for making a particular choice should be explained. This makes a design more visible and will help in understanding the design.
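As a hypothetical fragment (the module and its names are illustrative, drawing on the unique-word counting problem), a module specification in such a C-like design language might read:

/* Module: get_word
   Interface: infile: input, the open text file being read;
              returns a pointer to the next word, or NULL at end of file.
   Behaviour: skips blanks and new lines and returns the next word
              in the file.
   Uses: no other modules. */
char *get_word(FILE *infile);

From such a specification, the C header for the module can be produced with little more than simple editing, as noted above.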

Verification
The output of the system design phase, like the output of other phases in the development process, should be verified before proceeding with the activities of the next phase. If the design is expressed in some formal notation for which analysis tools are available, then it can be checked through tools for internal consistency (e.g., that the modules used by another module are defined, that the interface of a module is consistent with the way others use it, that data usage is consistent with declarations, etc.). If the design is not specified in a formal, executable language, it cannot be processed through tools, and other means of verification have to be used. The most common approach for verification is design review or inspection. We discuss this approach here.

The purpose of design reviews is to ensure that the design satisfies the requirements and is of good quality. If errors are made during the design process, they will ultimately reflect themselves in the code and the final system. As the cost of removing faults caused by errors that occur during design increases with the delay in detecting the errors, it is best if design errors are detected early, before they manifest themselves in the system. Detecting errors in the design is the purpose of design reviews.

The system design review process is similar to the inspection process, in that a group of people get together to discuss the design with the aim of revealing design errors or undesirable properties. The review group must include a member of both the system design team and the detailed design team, the author of the requirements document, the person responsible for maintaining the design document, and an independent software quality engineer. As with any review, it should be kept in mind that the aim of the meeting is to uncover design errors, not to try to fix them; fixing is done later.

The number of ways in which errors can come into a design is limited only by the creativity of the designer. However, there are some forms of errors that are observed more often. Perhaps the most significant design error is omission or misinterpretation of specified requirements. Clearly, if the system designer has misinterpreted or not accounted for some requirement, it will be reflected later as a fault in the system. Sometimes, this design error is caused by ambiguities in the requirements. There are some other quality factors that are not strictly design errors but that have implications for the reliability and maintainability of the system. An example of this is weak modularity (that is, weak cohesion and/or strong coupling). During reviews, elements of the design that are not conducive to modification and expansion, or elements that fail to conform to design standards, should also be considered errors.

A sample checklist
The use of checklists can be extremely useful for any review. The checklist can be used by each member during private study of the design and during the review meeting. For best results the checklist should be tailored to the project at hand, to uncover problem-specific errors. Here we list a few general items that can be used to construct a checklist for a design review:
Is each of the functional requirements taken into account?
Are there analyses to demonstrate that performance requirements can be met?
Are all assumptions explicitly stated, and are they acceptable?
Are there any limitations or constraints on the design beyond those in the requirements?
Are external specifications of each module completely specified?
Have exceptional conditions been handled?
Are all the data formats consistent with the requirements?
Are the operator and user interfaces properly addressed?
Is the design modular, and does it conform to local standards?
Are the sizes of data structures estimated? Are provisions made to guard against overflow?

Summary
When solving a small problem, the entire problem can be tackled at once. The complexity of large problems and the limitations of human minds do not allow large problems to be treated as huge monoliths.
For solving larger problems, the basic principle is the time-tested principle of divide and conquer. This principle, if elaborated, would mean "divide into smaller pieces, so that each piece can be conquered separately."
For software design, therefore, the goal is to divide the problem into manageably small pieces that can be solved separately. It is this restriction of being able to solve each part separately that makes dividing into pieces a complex task, and that many methodologies for system design aim to address.
Problem partitioning, which is essential for solving a complex problem, leads to hierarchies in the design. That is, the design produced by using problem partitioning can be represented as a hierarchy of components.
Abstraction is a very powerful concept that is used in all engineering disciplines. It is a tool that permits a designer to consider a component at an abstract level without worrying about the details of the implementation of the component.
An abstraction of a component describes the external behaviour of that component without bothering with the internal details that produce the behaviour.
A system consists of components, which have components of their own; indeed a system is a hierarchy of components. The highest-level component corresponds to the total system.
To design such a hierarchy there are two possible approaches: top-down and bottom-up. The top-down approach starts from the highest-level component of the hierarchy and proceeds through to lower levels.
Creating the software system design is the major concern of the design phase. Many design techniques have been proposed over the years to provide some discipline in handling the complexity of designing large systems.
The aim of design methodologies is not to reduce the process of design to a sequence of mechanical steps but to provide guidelines to aid the designer during the design process. This chapter described the structured design methodology for developing system designs.
Unlike function-oriented design methods, in OOD the basic abstractions are not real-world functions such as sort, display, track, etc., but real-world entities such as employee, picture, machine, radar system, etc.
In object-oriented design, software is not developed by designing functions such as update-employee-record, get-employee-address, etc., but by designing objects such as employee, department, etc.

References
Bruegge, B., Object-Oriented Software Engineering: Using UML, Patterns and Java, 2nd ed., Pearson Education India.
Aggarwal, K. K., Software engineering, New Age International.
NYS Project Management Guidebook, System Design [pdf] Available at: < guidebook2/systemdesign.pdf> [Accessed 7 November 2011].
George, J., Traditional versus Object-Oriented Approach [Online] Available at: < index.php?option=com_content&view=article&id=60:information-report-traditional-and-object-orientedmethodology-for-system-development> [Accessed 7 November 2011].
Prof. Joshi, K. R., Lecture - 14 Software Design - Primary Consideration [Video Online] [Accessed 7 November 2011].
Prof. Biswajit, System Design I [Video Online] Available at: <http://www.youtube.com/watch?v=OM9uJuOtgE4&feature=results_video&playnext=1&list=PL0A0C5C062C10A3E5> [Accessed 7 November 2011].

Recommended Reading
Peters, F. J., Software Engineering: An Engineering Approach, Wiley-India.
Sharma, P., Software Engineering, APH Publishing.
Kelkar, A. S., Software Engineering: A Concise Study, PHI Learning Pvt. Ltd.

Self Assessment
1. ________ is a tool that permits a designer to consider a component at an abstract level without worrying about the details of the implementation of the component.
a. Abstraction
b. Problem partitioning
c. Top-down
d. Bottom-up

2. ________ of a component describes the external behaviour of that component without bothering with the internal details that produce the behaviour.
a. Problem partitioning
b. Top-down
c. Abstraction
d. Bottom-up

3. Abstraction is an indispensable part of the design process and is essential for ________.
a. problem partitioning
b. top-down
c. abstraction
d. bottom-up

4. ________ is used for existing components as well as components that are being designed.
a. Problem partitioning
b. Top-down
c. Abstraction
d. Bottom-up

5. Which of the following sentences is true?
a. Abstraction of existing components plays an important role in the design phase.
b. Abstraction of existing components plays an important role in the maintenance phase.
c. Abstraction of existing components plays an important role in the development phase.
d. Abstraction of existing components plays an important role in the managing phase.

6. There are two common ________ mechanisms for software systems that are functional abstraction and data abstraction.
a. problem partitioning
b. top-down
c. abstraction
d. bottom-up

7. A ________ approach is suitable only if the specifications of the system are clearly known and the system development is from scratch.
a. Problem partitioning
b. Top-down
c. Abstraction
d. Bottom-up

8. Which of the following sentences is true?
a. Bottom-up methods work with layers of abstraction.
b. Top-down methods work with layers of abstraction.
c. Partitioning methods work with layers of abstraction.
d. Data modelling methods work with layers of abstraction.

9. Structured design methodology views every ________ as having some inputs that are converted into the desired outputs by the software system.
a. hardware design
b. software system
c. data model
d. abstraction

10. In ________ state information is not represented in a centralised shared memory but is distributed among the objects of the system.
a. functional data structure
b. object oriented
c. data model
d. software design

Chapter VI
Coding

Aim
The aim of this chapter is to:
explain the concept of the top-down and bottom-up approaches
elucidate structured programming
discuss information hiding

Objectives
The objectives of this chapter are to:
explain programming style
discuss internal documentation
enlist the rules that make code easy to read

Learning outcome
At the end of this chapter, you will be able to:
recognise control constructs
identify user-defined data types
understand the concept of naming conventions

6.1 Top-Down and Bottom-Up Approach
Top-down and bottom-up are strategies of information processing and knowledge ordering, mostly involving software, but also other humanistic and scientific theories. In practice, they can be seen as a style of thinking and teaching. In many cases top-down is used as a synonym of analysis or decomposition, and bottom-up as a synonym of synthesis.

A top-down approach is essentially breaking down a system to gain insight into its compositional subsystems. In a top-down approach an overview of the system is first formulated, specifying but not detailing any first-level subsystems. Each subsystem is then refined in yet greater detail, sometimes in many additional subsystem levels, until the entire specification is reduced to base elements. A top-down model is often specified with the assistance of black boxes that make it easier to manipulate. However, black boxes may fail to elucidate elementary mechanisms or be detailed enough to realistically validate the model.

A bottom-up approach is essentially piecing together systems to give rise to grander systems, thus making the original systems sub-systems of the emergent system. In a bottom-up approach the individual base elements of the system are first specified in great detail.

6.2 Structured Programming
As stated earlier, the basic objective of the coding activity is to produce programs that are easy to understand. It has been argued by many that the practice of structured programming helps develop programs that are easier to understand. The structured programming movement started in the 1970s, and much has been said and written about it. Now the concept pervades so much that it is generally accepted, even implied, that programming should be structured. Though a lot of emphasis has been placed on structured programming, the concept and motivation behind structured programming are often not well understood. Structured programming is often regarded as "goto-less" programming. Although extensive use of gotos is certainly not desirable, structured programs can be written with the use of gotos. Here we provide a brief discussion on what structured programming is.

A program has a static structure as well as a dynamic structure. The static structure is the structure of the text of the program, which is usually just a linear organisation of the statements of the program. The dynamic structure of a program is the sequence of statements executed during the execution of the program. In other words, both the static structure and the dynamic behaviour are sequences of statements; but where the sequence representing the static structure of a program is fixed, the sequence of statements it executes can change from execution to execution.

The general notion of correctness of a program means that when the program executes, it produces the desired behaviour. To show that a program is correct, we need to show that when the program executes, its behaviour is what is expected. Consequently, when we argue about a program, either formally to prove that it is correct or informally to debug it or convince ourselves that it works, we study the static structure of the program (i.e., its code) but try to argue about its dynamic behaviour. In other words, much of the activity of program understanding is to understand the dynamic behaviour of the program from the text of the program. It will clearly be easier to understand the dynamic behaviour if the structure in the dynamic behaviour resembles the static structure.

The closer the correspondence between the execution and text structures, the easier the program is to understand; and the more the structure during execution differs from the text structure, the harder it will be to argue about the behaviour from the program text. The goal of structured programming is to ensure that the static structure and the dynamic structure are the same. That is, the objective of structured programming is to write programs so that the sequence of statements executed during the execution of a program is the same as the sequence of statements in the text of that program. As the statements in a program text are linearly organised, the objective of structured programming becomes developing programs whose control flow during execution is linearised and follows the linear organisation of the program text.

Clearly, no meaningful program can be written as a sequence of simple statements without any branching or repetition (which also involves branching). So, how is the objective of linearising the control flow to be achieved? By making use of structured constructs. In structured programming, a statement is not a simple assignment statement, it is a structured statement. The key property of a structured statement is that it has a single entry and a single exit. That is, during execution, the execution of the (structured) statement starts from one defined point and terminates at one defined point. With single-entry and single-exit statements, we can view a program as a sequence of (structured) statements. And if all statements are structured statements, then during execution, the sequence of execution of these statements will be the same as the sequence in the program text. Hence, by using single-entry and single-exit statements, the correspondence between the static and dynamic structures can be obtained. The most commonly used single-entry and single-exit statements are:

Selection: if B then S1 else S2
           if B then S1
Iteration: while B do S
           repeat S until B
Sequencing: S1; S2; S3;

It can be shown that these three basic constructs are sufficient to program any conceivable algorithm. Modern languages have other such constructs that help linearise the control flow of a program, which, generally speaking, makes it easier to understand a program. Hence, programs should be written so that, as far as possible, single-entry, single-exit control constructs are used. The basic goal, as we have tried to emphasise, is to make the logic of the program simple to understand. No hard-and-fast rule can be formulated that will be applicable under all circumstances. Structured programming practice forms a good basis and guideline for writing programs clearly.

6.3 Information Hiding
A software solution to a problem always contains data structures that are meant to represent information in the problem domain. That is, when software is developed to solve a problem, the software uses some data structures to capture the information in the problem domain. In general, only certain operations are performed on some information. That is, a piece of information in the problem domain is used only in a limited number of ways in the problem domain. For example, a ledger in an accountant's office has some well-defined uses: debit, credit, check the current balance, etc. An operation where all debits are multiplied together and then divided by the sum of all credits is typically not performed.

So, any information in the problem domain typically has a small number of defined operations performed on it. When the information is represented as data structures, the same principle should be applied, and only some defined operations should be performed on the data structures. This, essentially, is the principle of information hiding. The information captured in the data structures should be hidden from the rest of the system, and only the access functions on the data structures, which represent the operations performed on the information, should be visible. In other words, when the information is captured in data structures, then for each operation performed on the information, an access function should be provided on the data structures that represent it. And as the rest of the system in the problem domain only performs these defined operations on the information, the rest of the modules in the software should only use these access functions to access and manipulate the data structures.

Information hiding can reduce the coupling between modules and make the system more maintainable. Information hiding is also an effective tool for managing the complexity of developing software: by using information hiding we have separated the concern of managing the data from the concern of using the data to produce some desired results. Many of the older languages, like Pascal, C, and FORTRAN, do not provide mechanisms to support data abstraction. With such languages, information hiding can be supported only by a disciplined use of the language. That is, the access restrictions will have to be imposed by the programmers; the language does not provide them. Most modern OO languages provide linguistic mechanisms to implement information hiding.

6.4 Programming Style
The concepts discussed above can help in writing simple and clear code with few bugs. There are many programming practices that can also help towards that objective. We discuss here a few rules that have been found to make code easier to read as well as to avoid some of the errors.

Control constructs: As discussed earlier, it is desirable that, as far as possible, single-entry, single-exit constructs be used. It is also desirable to use a few standard control constructs rather than a wide variety of constructs, just because they are available in the language.

Gotos: Gotos should be used sparingly and in a disciplined manner. Only when the alternative to using gotos is more complex should gotos be used. In any case, alternatives must be thought of before finally using a goto. If a goto must be used, forward transfers (a jump to a later statement) are more acceptable than a backward jump.

Information hiding: As discussed earlier, information hiding should be supported where possible. Only the access functions for the data structures should be made visible, while hiding the data structures behind these functions.

User-defined types: Modern languages allow users to define types like the enumerated type. When such facilities are available, they should be exploited where applicable. For example, when working with dates, a type can be defined for the day of the week. Using such a type makes the program much clearer than defining codes for each day and then working with the codes.

Nesting: If the nesting of if-then-else constructs becomes too deep, then the logic becomes harder to understand. In the case of deeply nested if-then-elses, it is often difficult to determine the if statement with which a particular else clause is associated.
Where possible, deep nesting should be avoided, even if it means a little inefficiency. For example, consider the following construct of nested if-then-elses:

if C1 then S1;
else if C2 then S2;
else if C3 then S3;
else if C4 then S4;

If the different conditions are disjoint (as they often are), this structure can be converted into the following structure:

if C1 then S1;
if C2 then S2;
if C3 then S3;
if C4 then S4;

This sequence of statements will produce the same result as the earlier sequence (if the conditions are disjoint), but it is much easier to understand. The price is a little inefficiency.

Module size: We discussed this issue during system design. A programmer should carefully examine any function with too many statements (say more than 100). Large modules often will not be functionally cohesive. There can be no hard-and-fast rule about module sizes; the guiding principle should be cohesion and coupling.

Module interface: A module with a complex interface should be carefully examined. As a rule of thumb, any module whose interface has more than five parameters should be carefully examined and broken into multiple modules with a simpler interface, if possible.

Side effects: When a module is invoked, it sometimes has side effects of modifying the program state beyond the modification of parameters listed in the module interface definition, for example, modifying global variables. Such side effects should be avoided where possible, and if a module has side effects, they should be properly documented.

Robustness: A program is robust if it does something planned even for exceptional conditions. A program might encounter exceptional conditions in such forms as incorrect input, the incorrect value of some variable, and overflow. If such situations do arise, the program should not just "crash" or "core dump"; it should produce some meaningful message and exit gracefully.

Switch case with default: If there is no default case in a switch statement, the behaviour can be unpredictable if a case arises that was not anticipated at development time. Such a practice can result in serious bugs like a NULL dereference or a memory leak. It is a good practice to always include a default case.

switch (i) {
    case 0:
        s = malloc(size);
        break;
}
s[0] = y;   /* NULL dereference if the unhandled (default) case occurs */

Empty catch block: An exception is caught, but if there is no action, it may represent a scenario where some of the operations to be done are not performed. Whenever exceptions are caught, it is a good practice to take some default action, even if it is just printing an error message.

try {
    FileInputStream fis = new FileInputStream("InputFile");
} catch (IOException ioe) {
}   // empty catch block: not a good practice

Empty if, while statement: A condition is checked but nothing is done based on the check. This often occurs due to some mistake and should be caught. Other similar errors include an empty finally, try, or synchronised block, an empty static method, etc. Such useless checks should be avoided.

if (x == 0) { }   /* nothing is done after checking x */
else { ... }

Read return to be checked: Often the return value from a read is not checked, assuming that the read returns the desired values. Sometimes the result of a read can be different from what is expected, and this can cause failures later. There may be cases where neglecting this condition results in a serious error. For example, if scanf() reads more than expected, it may cause a buffer overflow. Hence, the value read should be checked before the data read is accessed.

Return from finally block: One should not return from a finally block, as it can create false impressions. For example, consider the code

public String foo() {
    try {
        throw new Exception("An Exception");
    } catch (Exception e) {
        throw e;
    } finally {
        return "Some value";
    }
}

In this example, a value is returned in both the exception and non-exception scenarios. Hence, at the call site, the user will not be able to distinguish between the two. Another interesting case arises when we have a return in the try block. In this case, if there is a return in finally also, then the value from finally is returned instead of the value from try.

Correlated parameters: Often there is an implicit correlation between parameters. For example, in the code segment given below, length represents the size of the buffer destn. If the correlation does not hold, we can run into a serious problem like buffer overflow. Hence, it is a good practice to validate this correlation rather than assuming that it holds. In general, it is desirable to do some counter checks on implicit assumptions about parameters.

void copyData(char *src, int length, char destn[]) {
    strcpy(destn, src);   /* can cause buffer overflow if length > MAX_SIZE */
}

Trusted data sources: Counter checks should be made before accessing input data, particularly if the input data is provided by the user or is obtained over the network. For example, while doing a string copy operation, we should check that the source string is null-terminated, or that its size is what we expect. The same is the case with network data, which may be sniffed and is prone to modification or corruption. To avoid problems due to such changes, we should apply some checks, like parity checks or hashes, to ensure the validity of the incoming data.
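As an illustration of such validity checks, the sketch below (hypothetical names; the expected length and hash are assumed to be known to the receiver) validates both the size and a SHA-256 hash of incoming data before it is used:

```java
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class IncomingDataCheck {
    // Validate incoming data against an expected length and an integrity hash.
    static boolean isValid(byte[] data, int expectedLength, byte[] expectedSha256)
            throws NoSuchAlgorithmException {
        if (data == null || data.length != expectedLength) {
            return false;                          // size is not what we expect
        }
        byte[] actual = MessageDigest.getInstance("SHA-256").digest(data);
        return MessageDigest.isEqual(actual, expectedSha256);  // integrity check
    }
}
```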

Give importance to exceptions: Most programmers tend to give less attention to the possible exceptional cases and tend to work with the main flow of events, control, and data. Though the main work is done in the main path, it is the exceptional paths that often cause software systems to fail. To make a software system more reliable, a programmer should consider all possibilities and write suitable exception handlers to prevent failures or losses when such situations occur.

6.5 Internal Documentation
Programmers spend far more time reading code than writing code. Over the life of the code, the author spends considerable time reading it during debugging and enhancement. People other than the author also spend considerable effort in reading code, because the code is often maintained by someone other than the author. In short, it is of prime importance to write code in a manner that is easy to read and understand. Coding standards provide rules and guidelines for some aspects of programming in order to make code easier to read. Most organisations that develop software regularly develop their own standards.

In general, coding standards provide guidelines for programmers regarding naming, file organisation, statements and declarations, and layout and comments. To give an idea of coding standards (often called conventions or style guidelines), we discuss some guidelines for Java, based on publicly available standards.

Naming conventions
Some of the standard naming conventions that are often followed are:
- Package names should be in lower case (for example, mypackage, edu.iitk.maths).
- Type names should be nouns and should start with uppercase (for example, Day, DateOfBirth, EventHandler).
- Variable names should be nouns starting with lower case (for example, name, amount).
- Constant names should be all uppercase (for example, PI, MAX_ITERATIONS).
- Method names should be verbs starting with lowercase (for example, getValue()).
- Private class variables should have the _ suffix (for example, private int value_).
- Variables with a large scope should have long names; variables with a small scope can have short names; loop iterators should be named i, j, k, etc.
- The prefix is should be used for boolean variables and methods to avoid confusion (for example, isStatus should be used instead of status); negative boolean variable names (for example, isNotCorrect) should be avoided.
- The term compute can be used for methods where something is being computed; the term find can be used where something is being looked up (for example, computeMean(), findMin()).
- Exception classes should be suffixed with Exception (for example, OutOfBoundException).

Files
There are conventions on how files should be named and what files should contain, such that a reader can get some idea about what the file contains. Some examples of these conventions are:
- Java source files should have the extension .java; this is enforced by most compilers and tools.
- Each file should contain one outer class, and the class name should be the same as the file name.
- Line length should be limited to less than 80 columns, and special characters should be avoided. If a line is longer, it should be continued and the continuation should be made very clear.
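A small hypothetical class illustrating several of the naming and file conventions above (all names invented for illustration):

```java
// File: TemperatureLog.java -- one outer class, named after the file
package edu.iitk.weather;                    // package name in lower case

public class TemperatureLog {                // type name: a noun, starts uppercase
    public static final int MAX_READINGS = 100;             // constant: all uppercase

    private double[] readings_ = new double[MAX_READINGS];  // private variable: _ suffix
    private int count_;

    public boolean isFull() {                // boolean method: "is" prefix
        return count_ == MAX_READINGS;
    }

    public double computeMean() {            // "compute" for a computed value
        double sum = 0;
        for (int i = 0; i < count_; i++) {   // small scope: short iterator name
            sum += readings_[i];
        }
        return (count_ == 0) ? 0.0 : sum / count_;
    }
}
```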

Statements
These guidelines are for the declaration and executable statements in the source code. Some examples are given below. Note that not everyone will agree with these. That is why organisations generally develop their own guidelines that can be followed without restricting the flexibility of programmers for the type of work the organisation does.
- Variables should be initialised where declared, and they should be declared in the smallest possible scope.
- Declare related variables together in a common statement. Unrelated variables should not be declared in the same statement.
- Class variables should never be declared public.
- Use only loop control statements in a for loop.
- Loop variables should be initialised immediately before the loop.
- Avoid the use of break and continue in a loop.
- Avoid the use of the do...while construct.
- Avoid complex conditional expressions; introduce temporary boolean variables instead.
- Avoid executable statements in conditionals.
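For instance, the guideline on complex conditional expressions might be applied as in this small fragment (all variable and method names are invented for illustration):

```java
// Instead of: if (age >= 18 && age <= 65 && hasLicence && !isSuspended) { ... }
boolean isWorkingAge = (age >= 18 && age <= 65);   // temporary boolean variables
boolean mayDrive = (hasLicence && !isSuspended);   // each names the intent of a test
if (isWorkingAge && mayDrive) {
    assignDrivingShift();                          // hypothetical action
}
```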

Summary
- A top-down approach is essentially breaking down a system to gain insight into its compositional subsystems. In a top-down approach, an overview of the system is first formulated, specifying but not detailing any first-level subsystems.
- Static structure is the structure of the text of a program, which is usually just a linear organisation of the statements of the program. The dynamic structure of the program is the sequence of statements executed during the execution of the program. In other words, a program has both a static structure and a dynamic structure.
- The closer the correspondence between execution and text structure, the easier the program is to understand, and the more the execution structure differs, the harder it is to reason about the behaviour from the program text. The goal of structured programming is to ensure that the static structure and the dynamic structure are the same.
- The objective of structured programming is to write programs so that the sequence of statements executed during the execution of a program is the same as the sequence of statements in the text of that program.
- A software solution to a problem always contains data structures that are meant to represent information in the problem domain. That is, when software is developed to solve a problem, the software uses some data structures to capture the information in the problem domain.
- Modern languages allow users to define types like the enumerated type. When such facilities are available, they should be exploited where applicable. For example, when working with dates, a type can be defined for the day of the week. Using such a type makes the program much clearer than defining codes for each day and then working with codes.
- If nesting of if-then-else constructs becomes too deep, then the logic becomes harder to understand. In the case of deeply nested if-then-elses, it is often difficult to determine the if statement to which a particular else clause is associated.
- A module with a complex interface should be carefully examined. As a rule of thumb, any module whose interface has more than five parameters should be carefully examined and broken into multiple modules with a simpler interface if possible.
- When a module is invoked, it sometimes has side effects of modifying the program state beyond the modification of parameters listed in the module interface definition, for example, modifying global variables. Such side effects should be avoided where possible, and if a module has side effects, they should be properly documented.

References
- Dooley, J., Software Development and Professional Practice, Apress.
- Tsui, F. F. and Karam, O., Essentials of Software Engineering, Jones & Bartlett Learning.
- Plum, T., General Style and Coding Standards for Software Projects [pdf] Available at: <buffalo.edu/~rapaport/code.documentation.excerpts.pdf> [Accessed 7 November 2011].
- IEEE Software, Missing in Action: Information Hiding [Online] Available at: <com/ieeesoftware/bp02.htm> [Accessed 7 November 2011].
- Stanford University, Lecture 1 Programming Methodology (Stanford) [Video Online] [Accessed 7 November 2011].
- Blueoptimasupport, [1 of 5] Coding Effort: Taking Software Development Process Improvement to the Next Level [Video Online] [Accessed 7 November 2011].

Recommended Reading
- Saleh, A. K., Software Engineering, J. Ross Publishing.
- Vliet, V. H., Software Engineering: Principles and Practice, John Wiley.
- Blum, I. B., Software Engineering: A Holistic View, Oxford University Press.

Self Assessment
1. A ______ approach is essentially breaking down a system to gain insight into its compositional sub-systems.
a. bottom-up
b. top-down
c. structure programming
d. static structure

2. ______ is often regarded as goto-less programming.
a. bottom-up
b. top-down
c. structure programming
d. static structure

3. The ______ is the structure of the text of a program, which is usually just a linear organisation of statements of the program.
a. bottom-up
b. top-down
c. structure programming
d. static structure

4. The ______ of a program is the sequence of statements executed during execution of the program.
a. bottom-up
b. dynamic structure
c. structure programming
d. static structure

5. ______ can reduce the coupling between modules and make the system more maintainable.
a. Information hiding
b. Modern language
c. Nesting
d. Robustness

6. ______ allow users to define types like the enumerated type.
a. Information hiding
b. Modern language
c. Nesting
d. Robustness

7. If ______ of if-then-else constructs becomes too deep, then the logic becomes harder to understand.
a. Information hiding
b. Modern language
c. Nesting
d. Robustness

8. Which of the following is true?
a. A program is robust if it does something planned even for exceptional conditions.
b. A program is static if it does something planned even for exceptional conditions.
c. A program is dynamic if it does something planned even for exceptional conditions.
d. A program is flexible if it does something planned even for exceptional conditions.

9. Java source files should have the extension ______; this is enforced by most compilers and tools.
a. .docx
b. .jpg
c. .jpeg
d. .java

10. Which of the following is true?
a. Package names should be in sentence case.
b. Package names should be in lower case.
c. Package names should be in upper case.
d. Package names should be in toggle case.

Chapter VII
Testing

Aim
The aim of this chapter is to:
- explain levels of testing
- elucidate functional testing
- discuss data verification

Objectives
The objectives of this chapter are to:
- explain the concept of validation
- elaborate the concept of data integration
- highlight data field checks

Learning outcome
At the end of this chapter, you will be able to:
- understand numeric fields
- identify alphanumeric field checks
- describe structural testing

7.1 Levels of Testing
Testing is usually relied upon to detect the faults remaining from earlier stages, in addition to the faults introduced during coding itself. Due to this, different levels of testing are used in the testing process; each level of testing aims to test different aspects of the system. The basic levels are unit testing, integration testing, and system and acceptance testing. These different levels of testing attempt to detect different types of faults.

The first level of testing is called unit testing. In this, different modules are tested against the specifications produced during design for the modules. It is typically done by the programmer of the module. A module is considered for integration and use by others only after it has been unit tested satisfactorily.

7.1.1 Functional Testing
Testing a web application is certainly different from testing a desktop or any other application. Within web applications, there are certain standards which are followed in almost all applications. Having these standards makes life easier for us, because the standards can be converted into a checklist, and the application can easily be tested against the checklist.

Links
- Check that the link takes you to the page it said it would.
- Ensure there are no orphan pages (a page that has no links to it).
- Check all of your links to other websites.
- Are all referenced web sites or addresses hyperlinked?
- If we have removed some of the pages from our own site, set up a custom 404 page that redirects visitors to the home page (or a search page) when they try to access a page that no longer exists.
- Check all mailto links and whether they reach properly.

Forms
- Acceptance of invalid input.
- Optional versus mandatory fields.
- Input longer than the field allows.
- Radio buttons.
- Default values on page load/reload (also, terms and conditions should be disabled).
- Can command buttons be used for hyperlinks and Continue links?
- Is all the data inside the combo/list box arranged in chronological order?
- Are all of the parts of a table or form present? Correctly laid out?
- Can you confirm that selected texts are in the right place?
- Does a scrollbar appear if required?

Data verification and validation
- Is the privacy policy clearly defined and available for user access?
- At no point should the system behave awkwardly when invalid data is fed.
- Check to see what happens if a user deletes cookies while in the site.
- Check to see what happens if a user deletes cookies after visiting a site.
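Checks such as "acceptance of invalid input" and "input longer than the field allows" can be automated at the unit level. The sketch below drives a hypothetical form-field validator with nominal, boundary, and invalid values (the validator, its name, and the 30-character limit are assumptions for illustration); run with java -ea so the assertions are enabled:

```java
public class NameFieldTest {
    static final int MAX_LEN = 30;                     // assumed field limit

    // Hypothetical validator for a mandatory name field.
    static boolean isValidName(String s) {
        return s != null && !s.isEmpty() && s.length() <= MAX_LEN;
    }

    public static void main(String[] args) {
        assert isValidName("Alice");                   // nominal value
        assert isValidName("x".repeat(MAX_LEN));       // exactly at the boundary
        assert !isValidName("x".repeat(MAX_LEN + 1));  // one character too long
        assert !isValidName("");                       // mandatory field left empty
        assert !isValidName(null);                     // missing input
        System.out.println("All field checks passed");
    }
}
```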

Data integration
- Check the maximum field lengths to ensure that there are no truncated characters.
- If numeric fields accept negative values, can these be stored correctly in the database, and does it make sense for the field to accept negative numbers?
- If a particular set of data is saved to the database, check that each value gets saved fully to the database, i.e., beware of truncation (of strings) and rounding of numeric values.

Date field checks
- Assure that leap years are validated correctly and do not cause errors/miscalculations.
- Assure that Feb. 28, 29, and 30 are validated correctly and do not cause errors/miscalculations.
- Is the copyright for all the sites, including Yahoo co-branded sites, updated?

Numeric fields
- Assure that the lowest and highest values are handled correctly.
- Assure that numeric fields with a blank in position 1 are processed or reported as an error.
- Assure that fields with a blank in the last position are processed or reported as an error.
- Assure that both + and - values are correctly processed.
- Assure that division by zero does not occur.
- Include the value zero in all calculations.
- Assure that upper and lower values in ranges are handled correctly (using boundary value analysis, BVA).

Alphanumeric field checks
- Use blank and non-blank data.
- Include lowest and highest values.
- Include invalid characters and symbols.
- Include valid characters.
- Include data items with the first position blank.
- Include data items with the last position blank.

7.1.2 Structural Testing
Structural testing encompasses three critical phases of software development and testing; yet, one or more of these phases is often deliberately bypassed, overlooked, or performed in a less than rigorous manner, because either the technical advantages are not fully considered or, more often, the cost and schedule benefits are not appreciated. While structural testing is required by the FDA for medical devices of moderate and major levels of concern, this testing should be done for all software, regardless of the level of concern. Also, it should be noted that there is no fundamental difference between structural testing of software used in a medical device and that used in a manufacturing process or a manufacturer's quality system (or, for that matter, in any other software). An understanding of these considerations, and thus the importance of performing structural testing, is discussed below.

Definition of software structural testing
Software structural testing is meant to challenge the decisions made by the program with test cases based on the structure and logic of the design and source code. Complete structural testing exercises the program's data structures, such as configuration tables, and its control and procedural logic at the test levels discussed below.

Structural testing should be done at the unit, integration, and system levels of testing. Structural testing assures that the program's statements and decisions are fully exercised by code execution. For example, it confirms that program loop constructs behave as expected at their data boundaries. For configurable software, the integrity of the data in configuration tables is evaluated for its impact on program behaviour. At the unit level, structural testing also includes the identification of dead code, which is code that cannot be reached for execution by any code pathway. Integration structural testing should be performed after all verification testing of the units involved and before system-level structural testing.

The figure below illustrates the general relationship between software verification and validation. Software verification confirms that the output of each phase of software development is true to (i.e., is consistent with) the inputs to that phase. Performance qualification testing confirms that the final software product, running in its intended hardware and environment, is consistent with the intended product as defined in the product specifications and software requirements.

[Figure: development phases from concept, product specification, and software requirements through design, source code, code execution on the hardware, and maintenance, with verification linking each phase to the next and performance qualification validating the final product against the product specification.]
Fig. 7.1 Software verification vs software validation testing

The figure below illustrates a typical configuration for a structural unit test. The unit is compiled and linked with a driver and stubs, as needed. The driver is a substitute for any actual unit that will eventually call the unit-under-test, and if the driver passes data to the unit-under-test, it is set up to pass test case variable values such as maximum, minimum, and other nominal and stress-test values. The stubs are substitutes for any units called by the unit-under-test. As with the driver, if the stubs return data to the unit-under-test, they also pass stress-test and nominal data values, as appropriate. The interface of the drivers and stubs, including their names, is the same as the true units' interfaces, allowing the set of units to be linked without altering the unit-under-test.
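A minimal sketch of such a driver/stub arrangement (names are loosely borrowed from the figures; the roles and the computation are illustrative placeholders, not an exact reproduction of the figure):

```java
interface AbcSource {                        // interface shared by the stub and the real unit
    int getFormattedAbc();
}

class StubFormatAbc implements AbcSource {   // stub: returns controlled test data
    private final int value;
    StubFormatAbc(int value) { this.value = value; }
    public int getFormattedAbc() { return value; }
}

class ComputeY {                             // the unit-under-test (placeholder logic)
    private final AbcSource source;
    ComputeY(AbcSource source) { this.source = source; }
    int compute() { return 2 * source.getFormattedAbc() + 1; }
}

public class ComputeYDriver {                // driver: simulates the eventual caller
    public static void main(String[] args) {
        int[] testValues = { Integer.MIN_VALUE, -1, 0, 1, Integer.MAX_VALUE };
        for (int v : testValues) {           // minimum, nominal, and maximum values
            int y = new ComputeY(new StubFormatAbc(v)).compute();
            System.out.println("input=" + v + " output=" + y);
            // overflow at the extremes is exactly what stress values should expose
        }
    }
}
```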

[Figure: a driver simulating Compute Y calls the unit-under-test, Get Formatted ABC, which in turn calls stubs simulating Format ABC and Notify operator of invalid data.]
Fig. 7.2 Configuration for a structural unit test

Unit-level structural tests can be conducted on the actual target hardware, on an emulator or simulator of the actual hardware, or, if the situation requires, on a totally different processor. The latter case may occur, for example, if the actual hardware has no provision for determining the results of a unit's tests, but the code is written in a higher-order language. Thus, the higher-order source code (such as C or C++) can be compiled and linked to run on another computer that supports reading the test results, where the target computer (for example, an embedded microprocessor) could not support the tests.

Structural testing (a.k.a. white-box testing) is performed with the item-under-test, in this case the unit, being viewed internally for purposes of determining how the item should behave, for example, in determining all possible code branches. The primary purpose of unit-level structural testing is to verify that the code complies with the design, including logic, algorithm correctness, and accuracy (versus parallel code or hand calculations), and the correctness of the data's engineering units. This requires, for each unit, complete branch tests, complete code tests (including verifying there is no dead code), and stress tests (to detect, for example, overflow and underflow conditions as well as nominal and maximum loop-control data values). The detailed design is used to develop the acceptance criteria.

The environment for both integration-level and system-level structural tests
It is best to set up the integration and system structural tests using the actual hardware and environment to the extent practical. There are several reasons for this, but the two most significant are:
- The software may have subtle conditions, both good and bad, that will only show up when running on the actual hardware.
- The final computerised system, including the intended hardware and software, must be qualified running on that hardware, and the structural tests should advance the software development towards that end.
However, there are also good and sufficient reasons to perform structural tests partially or wholly in a simulated environment.

In considering establishing simulation capabilities, the two most common configurations are to either emulate the computer and simulate the environment (used most often when the actual computer is an embedded microprocessor and it is difficult to stimulate known inputs and/or read the outputs of a test) or to simulate both the computer and the environment (used, for example, if an emulator of the target computer is not available). The principal advantages, then, in using a simulation of the environment and, at times, the computer include the following:
- The ability to set up absolutely known input values, such that the results can be predetermined to establish the acceptance criteria of each test.
- A simulator makes it easy to establish inputs that are over, under, and at the exact limits of critical data values.
- It is easy to set up illegal inputs to test all error and failure conditions.
- The results of each test can be readily seen.

Integration-level structural testing
Integration structural testing combines functionally cohesive units of verified code (which includes unit-level structurally tested code) by compiling and/or assembling and linking the code, along with any drivers and stubs needed. The structure is then loaded into the actual or simulated environment for execution. This allows the tester to focus on that one functional package to confirm its correct operation, including all internal and external interfaces. Following completion of each functional package's test, the next functional package may be either separately tested or added to (i.e., linked with) the previously tested package(s). Regression testing (i.e., running a selected subset of previous, successfully run test cases) must be performed on the previously tested packages to confirm they are not adversely affected by the newly introduced functional package.

The figure below illustrates a building-block, or incremental, approach to structural testing. While the figure's illustration is related to a structured design, the same approach is used for all other design methods, including flowchart and object-oriented design. In the figure, the first function needs two stubs, Get Formatted ABC and Output Y to Device X, where a stub is a simple software dummy needed to link successfully, but is not part of the software being tested. The stubs in the figure have no calls to additional units. The second and third functions require no driver because they use the actual Compute Y, and they require no stubs because they use the actual Output Error Messages unit.

[Figure: a structure chart in which Compute Y calls Initialize Data, Get Formatted ABC, and Output Y to Device X; lower-level units include Open File, Format ABC, Notify operator of any errors, Notify operator of invalid data, Notify operator if Device X is off-line, and Output Error Messages; the units are grouped into a 1st, 2nd, and 3rd function for incremental testing.]
Fig. 7.3 Incremental integration structural testing

The incremental approach to integration-level structural tests is the best for software developers (as opposed to third-party validation testers), especially if the program is large or complex. In this approach, selected small, functionally cohesive portions of the software are compiled, linked, and tested. This approach is used regardless of the software life cycle development method being employed, including any of the following three methods. In the waterfall method, all of the requirements are developed, then the design is completed, and, finally, selected threads are coded and structurally tested. In the spiral method, a major element of the software system is discussed, then the requirements, design, and code are developed, and the element's structural test is performed prior to going on to discuss and develop the next major element. Finally, in the incremental software development method, all of the specifications and requirements may be developed, but the design and implementation are developed one function at a time.

In any case, if the operating system was uniquely developed for the system-under-test, that operating system should be structurally tested first. This portion of the code itself should be broken down into functionally cohesive packages if it is large and/or complex; otherwise, it can be structurally tested as an entity. The second portion of the code to be tested is normally the unique input/output section. If there are diverse input/output devices, these may be structurally tested separately. But it is often best to select a thread that includes both the ability to input data and to see the resulting output for each structural test. The third step is to select and structurally test a functionally cohesive portion of the application and the utilities needed to support that application. Then, select the next functionally cohesive portion of the application software and associated application utilities for the next structural test, and so on. All previously tested functions should be regression tested, as appropriate.
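One way to picture this building-block approach, under assumed names, is a tiny harness that runs the test cases of each newly linked functional package and then reruns the earlier packages' cases as regression tests:

```java
import java.util.ArrayList;
import java.util.List;

public class IncrementalHarness {
    interface TestCase { boolean run(); }            // one structural test case

    private final List<TestCase> regressionSuite = new ArrayList<>();

    // Test a newly linked functional package, then regression-test earlier ones.
    boolean integrate(String packageName, List<TestCase> newCases) {
        for (TestCase t : newCases) {
            if (!t.run()) {
                System.out.println(packageName + ": a new test case failed");
                return false;
            }
        }
        for (TestCase t : regressionSuite) {         // previously passing cases
            if (!t.run()) {
                System.out.println(packageName + ": broke a previously tested package");
                return false;
            }
        }
        regressionSuite.addAll(newCases);            // keep for future regressions
        return true;
    }

    public static void main(String[] args) {
        IncrementalHarness h = new IncrementalHarness();
        h.integrate("1st function", List.of(() -> true, () -> true));
        h.integrate("2nd function", List.of(() -> true));
    }
}
```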

7.1.3 Test Plan
A software test plan is a document describing the testing scope and activities. It is the basis for formally testing any software/product in a project.

ISTQB definitions:
- Test plan: A document describing the scope, approach, resources, and schedule of intended test activities. It identifies, amongst others, the test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques, the entry and exit criteria to be used and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.
- Master test plan: A test plan that typically addresses multiple test levels.
- Phase test plan: A test plan that typically addresses one test phase.

[Figure: a sample master test plan cover page for Cash Incorporated, showing the project name (Money Generator Suite), product name (Coin generator), product release version, document version 1.0, date 01/02/2059, and preparer John Doe.]
Fig. 7.4 Test plan

Test plan types
One can have the following types of test plans:
- Master test plan: A single high-level test plan for a project/product that unifies all other test plans.
- Testing-level-specific test plans: Plans for each level of testing.
  - Unit test plan
  - Integration test plan
  - System test plan
  - Acceptance test plan
- Testing-type-specific test plans: Plans for major types of testing, like a performance test plan and a security test plan.

Test plan template
The format and content of a software test plan vary depending on the processes, standards, and test management tools being implemented. Nevertheless, the following format, which is based on the IEEE standard for software test documentation, provides a summary of what a test plan can/should contain.

Test plan identifier
- Provide a unique identifier for the document. (Adhere to the configuration management system if you have one.)

Introduction
- Provide an overview of the test plan.
- Specify the goals/objectives.
- Specify any constraints.

References
- List the related documents, with links to them if available, including the following:
  - Project plan
  - Configuration management plan

Test items
- List the test items (software/products) and their versions.

Features to be tested
- List the features of the software/product to be tested.
- Provide references to the requirements and/or design specifications of the features to be tested.

Features not to be tested
- List the features of the software/product which will not be tested.
- Specify the reasons these features won't be tested.

Approach
- Mention the overall approach to testing.
- Specify the testing levels, the testing types, and the testing methods.

Item pass/fail criteria
- Specify the criteria that will be used to determine whether each test item (software/product) has passed or failed testing.

Suspension criteria and resumption requirements
- Specify the criteria to be used to suspend the testing activity.
- Specify the testing activities which must be redone when testing is resumed.

Test deliverables
- List the test deliverables, and links to them if available, including the following:
  - Test plan (this document itself)
  - Test cases
  - Test scripts
  - Defect/enhancement logs
  - Test reports

Test environment
- Specify the properties of the test environment: hardware, software, communications, etc.
- List any testing or related tools.

Estimate
- Provide a summary of test estimates (cost or effort) and/or provide a link to the detailed estimation.

Schedule
- Provide a summary of the schedule, specifying key test milestones, and/or provide a link to the detailed schedule.

Staffing and training needs
- Specify staffing needs by role and required skills.
- Identify training that is necessary to provide those skills, if not already acquired.

Responsibilities
- List the responsibilities of each team/role/individual.

Risks
- List the risks that have been identified.
- Specify the mitigation plan and the contingency plan for each risk.

Assumptions and dependencies
- List the assumptions that have been made during the preparation of this plan.
- List the dependencies.

Approvals
- Specify the names and roles of all persons who must approve the plan.
- Provide space for signatures and dates (if the document is to be printed).

Test plan guidelines
- Make the plan concise. Avoid redundancy and superfluousness. If you think you do not need a section that has been mentioned in the template above, go ahead and delete that section in your test plan.
- Be specific. For example, when you specify an operating system as a property of a test environment, mention the OS edition/version as well, not just the OS name.
- Make use of lists and tables wherever possible. Avoid lengthy paragraphs.

- Have the test plan reviewed a number of times prior to baselining it or sending it for approval. The quality of your test plan speaks volumes about the quality of the testing you or your team is going to perform.
- Update the plan as and when necessary. An out-dated and unused document stinks.

7.1.4 Test Cases Specifications
CASE Spec is a very flexible tool for developing requirements as well as test cases. It provides the accountability and traceability that you need for your projects.

Specification flexibility
CASE Spec provides the flexibility for specifying your requirements and test cases. Test cases can be specified with steps, tables, images, and diagrams. Use CASE Spec's user-friendly interface to specify your test cases effortlessly.

Effective testing
As technology progresses, so do the customer's expectations for bug-free, fully functional software. This expectation for a requirements-exact product has given rise to a new understanding of the importance of software testing as a critical pre-release activity. With CASE Spec, users can easily trace test cases to requirements and other artifacts. The traceability feature enables users to easily identify the impact of requirement changes on test cases. User requirements can be effectively validated to increase user acceptance of the system.

CASE Spec also provides a unique feature for links. Relationships (links) can be identified with link types that have user-defined attributes. For example, a test condition linked to a feature may be identified with a link type "test case", with an attribute "Status" that indicates the values passed or failed. This capability is very useful in managing and simplifying the test case management process. Links also provide the information needed to trace relationships between artifacts. For example, using links, we can easily determine the following:
- All test conditions for a given feature
- All failed test cases for a given feature
- All features with no executed test cases

These results can be viewed in both graphical and grid-based (matrix) formats. CASE Spec makes it easy for users to implement efficient and effective software testing, thereby guaranteeing the quality and value of the final product for customers. Other useful CASE Spec features for the test-tracking process include automatic versioning of test cases and relationships, notifications, and easy reporting.
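As a rough sketch of how such typed links and queries might be modelled in code (class, attribute, and sample names are invented here; this is not CASE Spec's actual data model; records require Java 16+):

```java
import java.util.List;

public class TraceabilityDemo {
    enum Status { PASSED, FAILED, NOT_EXECUTED }

    // A typed link from a test case to a feature, carrying a Status attribute.
    record TestLink(String testCase, String feature, Status status) {}

    public static void main(String[] args) {
        List<TestLink> links = List.of(
                new TestLink("TC-1", "Login", Status.PASSED),
                new TestLink("TC-2", "Login", Status.FAILED),
                new TestLink("TC-3", "Report", Status.NOT_EXECUTED));

        // Query: all failed test cases for a given feature
        links.stream()
             .filter(l -> l.feature().equals("Login") && l.status() == Status.FAILED)
             .forEach(l -> System.out.println("Failed for Login: " + l.testCase()));
    }
}
```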

[Figure: screenshot of CASE Spec's effective-testing window.]
Fig. 7.5 Effective testing window

Traceability
Use CASE Spec's award-winning tools for traceability and gap analysis. Use graphical tools for linking test cases with other artifacts (for example, design, requirements, use cases, tasks, issues). Establish parent-child and peer-to-peer links for traceability. For example, with the gap analysis tools you can find test cases that are not linked to requirements and/or use cases.

Change management
Use CASE Spec's automatic change management tools for versioning, baselining, and reverting to previous versions. Use CASE Spec's traceability tools for impact analysis of test case changes on project artifacts.

Workflow management
Manage workflow with CASE Spec's built-in workflow feature.

Documents and reports
Generate documents with embedded objects, tables of contents, and cover pages. You can also generate analysis reports with sorting, grouping, and filtering. The reports can be exported in various formats (Excel, XML, HTML, PDF, RTF, etc.).

Collaboration
CASE Spec is a zero-configuration and administration tool that can be easily deployed for collaboration of local and globally dispersed teams. Your team can collaborate on test cases effectively by using user access control, change management, and concurrency control tools.