Experiences from Lightweight RE Method Evaluations


Uolevi Nikula

Lappeenranta University of Technology, Department of Information Technology, P.O. Box 20, FIN Lappeenranta, Finland

Abstract. A comprehensive requirements engineering (RE) method was evaluated to assess its value for practitioners. A literature search brought up two evaluation frameworks that were used as the basis for both theoretical and empirical evaluations; one of the frameworks was used as such, while a lightweight adaptation was developed from the other. The different assessments proved to complement each other: one focused on the balance the method provides between the three fundamental dimensions of RE, the second on the potential the method has to address key issues in RE, and the third on the actual effect the method has on company RE practices.

1 Introduction

Over 75% of all enterprises have been claimed to have deficient requirements engineering (RE) practices [1]. A recent survey on RE practices indicates that in many cases the problem is an absence of practices rather than the way existing practices are carried out [2]. Since a lot of literature on good RE practices exists [e.g. 3, 4], it was hypothesized that the problem lies in the way the introduction of RE practices into practice has been attempted. Consequently, a ready-to-use RE method was constructed and evaluated [5] to find out whether the method really could help to introduce RE practices into industry.

The developed method was subjected to both theoretical and empirical evaluations. The theoretical evaluation was done as a desktop exercise by the author of this paper with two frameworks found in the literature: the Three Dimensions of RE [6] and the REAIMS model [4]. The empirical evaluation was conducted as industrial case studies following the guidelines suggested by Yin [7]. The three participating companies were assessed with a modified REAIMS assessment at the beginning, in the middle, and at the end of an 11-month period.
The assessments were based on interviews and a study of the produced requirements documents and a few process guides. The evaluated method is the basic RE method BaRE [8]. It is designed for small projects developing administrative and business applications, and it includes domain-specific document templates, practices (processes and techniques), training material, and suggestions for automated tool support. The novel part of BaRE is that it is designed to be ready to use, which is defined to mean fitness for need, simplicity, and comprehensiveness. Section 2 of this paper describes research related to this study and Section 3 presents the results of both the theoretical and empirical evaluations. Section 4 provides a discussion of the results and Section 5 concludes the paper.

2 Related Research

In this section the Three Dimensions of RE [6] and the REAIMS assessment model [4] are described. The Three Dimensions of RE is a general framework derived from the main goals of RE, which are the following:

- Gaining a complete system specification out of the opaque views existing at the beginning of the process, according to the standards and/or guidelines used; the suggested scale runs from opaque through fair to complete.
- Offering different representation formats, supporting transformation between the representations, and keeping the various representations consistent; the suggested representation formats are informal, semi-formal, and formal.
- Allowing various views and supporting evolution from personal views to a common agreement on the final specification; the suggested scale runs from personal view to common view.

These three goals represent the fundamental dimensions of RE (specification, representation, and agreement) that can be used to classify and clarify the support that methods and tools provide to RE.

The results of the REAIMS project [4] include both an assessment model and an RE maturity model. This maturity model resembles the software capability maturity model (CMM) of the Software Engineering Institute; for example, it has three levels that are comparable with the first three levels of the CMM: initial, repeatable, and defined. The REAIMS assessment process includes four phases: selecting people to interview, initial scoring, refinement, and final maturity level calculation. In the scoring phase the usage of each practice is assessed on four levels (Table 1):

Table 1. Different REAIMS usage levels with respective scores and descriptions [4]

Usage Level    Score  Description
Standardized   3      Practice is in documented use in the organization; it is followed and checked as part of quality assurance procedures.
Normal use     2      Practice is widely followed in the organization but it is not mandatory.
Discretionary  1      Practice is followed at the discretion of each project manager.
Never          0      Practice is rarely or never applied.

The practices are classified as basic, intermediate, and advanced ones, and separate sums are calculated for each category from the numerical scores. The scores are then used to select the appropriate maturity level. In addition to the 66 good RE practices, Sommerville and Sawyer [4] also suggest a top ten list of practices that every organization should implement.

Literature reporting experiences from the use of these frameworks in practice is very limited. There are some public papers describing research work based on the

REAIMS model, but only one paper using the assessment model to evaluate a method could be found [9]. In that paper extreme programming (XP) is extended in the RE area, since the original XP was found to fully support only 7 of the 66 REAIMS practices, with an estimated point gain of 31 of the 55 points required for the repeatable maturity level. For the Three Dimensions of RE no experience reports could be found.

3 Evaluation Results

The BaRE method evaluation consisted of both a theoretical and an empirical evaluation. The theoretical evaluation was done with the Three Dimensions of RE and an adapted version of the REAIMS assessment. The REAIMS project targeted large companies carrying out critical systems projects, while the BaRE method is focused on small administrative and business application projects. Thus the full-blown REAIMS model was too complex and had to be adapted to the selected application domain. The empirical evaluation utilized only the adapted REAIMS assessment, as the Three Dimensions model appeared hard to apply in practical work.

The reason for doing both theoretical and empirical evaluations was twofold. First, the theoretical evaluation provided a natural closing point for the first phase in a phased research effort, since it provided feedback on the expected value of the method with a few hours' effort. The empirical evaluation, on the other hand, closed the second phase and the whole constructive research effort. In the present work the empirical evaluation took almost a year to complete, but it was needed to assess the real value of the method in practice. The second reason for doing both evaluations was an interest in gaining practical experience with the found evaluation frameworks in order to assess their practical value.

3.1 Theoretical Evaluation

The Three Dimensions of RE evaluation was done by considering how the BaRE method achieves the three main goals of RE on the suggested scales (see Section 2).
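As an illustration, the framework's three ordinal scales and a method's position on them can be sketched as a small data structure. This is only a sketch for this paper's discussion; the type and member names are the author's summary of the scales, not part of the framework's own notation.

```python
from dataclasses import dataclass
from enum import IntEnum

# Ordinal scales for the three RE dimensions, with the scale endpoints
# taken from the framework description above.
class Specification(IntEnum):
    OPAQUE = 0
    FAIR = 1
    COMPLETE = 2

class Representation(IntEnum):
    INFORMAL = 0
    SEMI_FORMAL = 1
    FORMAL = 2

class Agreement(IntEnum):
    PERSONAL_VIEW = 0
    COMMON_VIEW = 1

@dataclass
class MethodProfile:
    """Position of an RE method on the three dimensions."""
    specification: Specification
    representation: Representation
    agreement: Agreement

# The BaRE assessment discussed in this section, encoded for illustration:
# roughly fair-to-complete specification (rounded down to FAIR here),
# mostly semi-formal representations, and an approach toward a common view.
bare = MethodProfile(Specification.FAIR,
                     Representation.SEMI_FORMAL,
                     Agreement.COMMON_VIEW)
```

Using IntEnum makes the scales ordinal, so positions on a dimension can be compared directly (for example, `Representation.SEMI_FORMAL < Representation.FORMAL`).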
The representation dimension was the clearest one, since the BaRE method suggests semi-formal representations in many places and augments them with informal ones. The agreement dimension also makes a fair effort to approach a common view, since stakeholder identification and collaboration are advocated. Finally, the specification dimension involves the most subjective estimates, since complete specifications for a business application and for an embedded system are quite different. Thus, in the selected domain, a justified assessment appeared to lie between a fair and a complete specification, as the suggested requirements document (RD) templates combined a number of well-established templates [e.g. 3, 10].

A proper REAIMS assessment requires knowledge of both the practices and their established use, so it is not optimal for a desktop evaluation. However, evaluating the way a method addresses each guideline is possible, and the BaRE method was assessed with respect to the top ten guidelines. Since these guidelines were already known in the design phase of the method, it was natural that all of them were addressed somehow (Table 2). A more interesting point was that addressing some of the top

ten guidelines appeared difficult to support by design, and some seemed unnecessary for the selected application domain. Note also that only the first five practices can be addressed at the method level; the last five can only be supported somehow by a method, as their actual implementation depends on the person doing the work.

In the theoretical evaluation the BaRE method addressed all three dimensions of RE in about the middle range, and it implemented most of the REAIMS Top Ten guidelines. Thus it was concluded that following the method should introduce new RE practices in a company and, consequently, improve the quality of RE in general.

Table 2. The REAIMS Top Ten guidelines and the BaRE method approach to them

REAIMS Top Ten Guideline                              The BaRE Method Approach
Define a standard document structure                  Adopted; an RD template provided
Define standard templates for requirements            Adopted; standard attributes for
  descriptions                                          requirements suggested
Uniquely identify each requirement                    Adopted; included in the template
Define validation checklists                          Adopted; initial checklists provided
Define policies for requirements management (RM)      Applicability questionable; RM practices
                                                        suggested instead
Make the document easy to change                      Applicability questionable
Use language simply, consistently and concisely       Supported by instructions; a simple and
                                                        compact representation format suggested
Organize formal requirements inspections              Applicability questionable; informal
                                                        reviews suggested instead
Use checklists for requirements analysis              Adopted; initial checklists provided
Plan for conflicts and conflict resolution            Active user collaboration and a
                                                        negotiation meeting structure suggested

3.2 Empirical Evaluation

Information about the RE practices was collected with the REAIMS Top Ten questions in a slightly modified format to acknowledge the issues noticed in the theoretical evaluation.
That is, the requirements management (RM) policies guideline was changed to RM practices, and formal inspections to informal reviews. Furthermore, the standard usage level was scaled down to mean only documented usage of the practices, with no quality assurance checks required. Due to these changes the assessment was renamed the Lightweight REAIMS Top Ten assessment.
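As an illustrative sketch (not part of the original assessment materials), the scoring of such an assessment can be expressed in a few lines. The practice names follow the modified top ten list, and the usage levels and scores follow Table 1, with the standard level relaxed to documented usage as described above.

```python
# Illustrative sketch of Lightweight REAIMS Top Ten scoring.
# Usage levels and scores follow the REAIMS scheme (Table 1);
# "standard" here means documented usage only, as in the adaptation above.
SCORES = {"never": 0, "discretionary": 1, "normal": 2, "standard": 3}

TOP_TEN = [
    "Define practices for RM",
    "Define a standard document structure",
    "Define unique identifier for each requirement",
    "Define standard templates for requirements description",
    "Define validation checklists",
    "Organize informal requirements inspections",
    "Use language simply, consistently and concisely",
    "Make the document easy to change",
    "Use checklists for requirements analysis",
    "Plan for conflicts and conflict resolution",
]

def point_gain(assessment: dict) -> int:
    """Sum the usage-level scores over the ten practices.

    Practices not assessed default to "never"; the maximum gain
    for ten practices is 10 * 3 = 30 points.
    """
    return sum(SCORES[assessment.get(p, "never")] for p in TOP_TEN)

# Hypothetical example organization: a documented RD template and
# discretionary use of validation checklists, nothing else in place.
example = {
    "Define a standard document structure": "standard",
    "Define validation checklists": "discretionary",
}
print(point_gain(example))  # 4 out of a maximum of 30
```

The same function also reproduces the 30-point maximum quoted below for a company applying all ten practices at the standard level.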

The assessments were conducted in three companies (A, B, and C) at three points in time: at the beginning (P0), in the middle (P1), and at the end (P2) of the evaluation phase (Table 3). The symbols in the table denote the usage levels of Table 1: never applied ('-'), discretion of the project manager, normal use, and standard use in the company. The total point gains are calculated as described for the REAIMS assessment (Section 2); the maximum point gain for ten practices is 30 points.

Table 3. Lightweight REAIMS Top Ten assessment results in the three companies at three points of time (columns A, B, and C, each at P0, P1, and P2; one row per practice and a total point gain row)

Practices assessed:
- Define practices for RM
- Define a standard document structure
- Define unique identifier for each requirement
- Define standard templates for requirements description
- Define validation checklists
- Organize informal requirements inspections
- Use language simply, consistently and concisely
- Make the document easy to change
- Use checklists for requirements analysis
- Plan for conflicts and conflict resolution

The Lightweight REAIMS Top Ten assessment shows the improved RE infrastructure well. Since the improvement efforts clearly focused on the infrastructure part of this assessment, a more comprehensive list of basic infrastructure was developed and data was collected for it as well (Table 4). This list does not focus on RE only but also addresses other areas of software engineering that are inherently intertwined with RE, such as change management, reviews, and software engineering in general. Both Tables 3 and 4 clearly show that Company A carried out its improvement actions in the first half of the evaluation phase, Company B did most of its actions in the second half, and Company C by and large did not change its RE practices.
The improved practices are summarized simply by the total point gain, which provides support for the claim that the BaRE method did help to introduce RE practices in these companies.
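The phase-by-phase comparison above can also be sketched in code. The scores below are invented illustrative values that only mirror the pattern described in the text (A improves early, B improves late, C stays roughly flat); they are not the study's actual data.

```python
# Illustrative phase-by-phase comparison of total point gains.
# The numbers are invented to mirror the pattern described in the text,
# not taken from the study: company -> (P0, P1, P2) totals, max 30.
gains = {
    "A": (5, 14, 15),
    "B": (4, 6, 13),
    "C": (3, 3, 4),
}

for company, (p0, p1, p2) in gains.items():
    first_half = p1 - p0   # improvement during the first half
    second_half = p2 - p1  # improvement during the second half
    print(f"Company {company}: +{first_half} in H1, +{second_half} in H2")
```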

Table 4. Infrastructure in the different companies in the different phases (columns A, B, and C, each at P0, P1, and P2; one row per infrastructure element)

Infrastructure elements assessed:
- Templates: requirements description, requirements document, change requests
- Processes: requirements development, change management, review
- Techniques: requirements development, change management
- Checklists: requirements engineering, change management
- Training: requirements engineering, software engineering, application domain
- Methods: requirements engineering, software engineering
- Tools: requirements management, change management

4 Discussion

Assessing the BaRE method with the Three Dimensions of RE framework was done very quickly, but it did not provide much information either. In the absence of a reference model, the feedback on a single method is limited to an estimate of the balance the method provides between the three dimensions.

The REAIMS assessment model provides a comprehensive and ready-to-use framework for RE practice evaluation. All the necessary elements are there to conduct

an assessment, and even a maturity model is suggested to provide a reference point for the results. However, using the model to evaluate a single method brought up three limitations in it: first, the model is designed to assess practices in established use; second, it is targeted at dependable and safety-related systems; and third, it does not separate infrastructure and working practices. These limitations were bypassed with simple adaptations so that working results were achieved. This approach has two disadvantages, though. First, in the middle of a process improvement effort no practice can be claimed to be in established use, so any RE maturity level calculated at that point is not reliable. Second, the REAIMS assessment model appears to be a validated one, and changes to it annul the validation at some point.

The Lightweight REAIMS Top Ten assessment is not a thoroughly validated model. It embodies a number of changes to the original REAIMS model in order to adapt it to a quite different application domain. The most important changes to the original model are limiting the assessment to the top ten practices only and grouping the practices into infrastructure and working practices. The categorization of the practices was inspired by the observation that infrastructure is basically deduced from organization, application domain, and project-specific needs, while working practices depend on the person doing the work. An infrastructure assessment is also fairly objective, and changes can be detected even during improvement efforts, while working practice assessments are more subjective in nature and changes in practices are likely to happen more slowly.

Limiting an assessment to only ten practices may appear odd at first, but experiences from industry co-operation suggest that the most common problem with RE is a lack of basic practices. Thus a quick survey of practices provides an indication of the general level of RE in a company. Fig. 1 summarizes the Lightweight REAIMS Top Ten assessment point gains in 17 companies, showing the overall state of the RE practices in the surveyed companies.

Fig. 1. Lightweight REAIMS Top Ten point gains for 17 companies. The maximum point gain was 30; three companies did not get any points.

5 Conclusion

This paper reported experiences from evaluating a developed RE method with two frameworks found in the literature. The Three Dimensions framework appears well suited for

theoretical evaluation and focuses on the balance the method provides between the three fundamental dimensions of RE. The REAIMS assessment framework goes a step further by suggesting an RE maturity framework that serves as a reference model for the evaluation results. Thus it provides a good means to evaluate how a method addresses different RE practices both in theory and in practice. The main problem is that the framework was developed for critical systems, and less stringent application domains require some adaptations.

The three conducted assessments proved to complement each other. The theoretical evaluations can only provide estimates of the practical potential of a method, while evaluating its real value requires empirical work. As the empirical evaluations take considerably longer to conduct, both theoretical and empirical evaluation methods are needed. Based on the evaluations of the BaRE method, the conducted theoretical evaluations provided a realistic estimate of the method's potential in practice. It is also important to note that the assessment results were consistent with the direct feedback from the participating companies: all three companies found the method valuable in practice and reported that it had helped them to improve their RE practices.

It is clear that more research on evaluation frameworks is needed. For example, the evaluation frameworks appear to be best implemented as domain-specific ones, but what are the natural boundaries for such domains? Another issue concerns the conducted process improvement efforts, which so far focused only on whether practices are followed or not; future research should also consider the way practices are actually implemented.

References

1. Jones, T.C.: Applied Software Measurement: Assuring Productivity and Quality. McGraw-Hill (1996)
2. Nikula, U., Sajaniemi, J., and Kälviäinen, H.: Management View on Current Requirements Engineering Practices in Small and Medium Enterprises. In: The Fifth Australian Workshop on Requirements Engineering. Faculty of Information Technology, University of Technology, Sydney (UTS) (2000)
3. Wiegers, K.E.: Software Requirements. Microsoft Press, Redmond, Washington (1999)
4. Sommerville, I. and Sawyer, P.: Requirements Engineering: A Good Practice Guide. John Wiley & Sons, Chichester, England (1997)
5. March, S.T. and Smith, G.F.: Design and Natural Science Research on Information Technology. Decision Support Systems 4 (1995)
6. Pohl, K.: The Three Dimensions of Requirements Engineering: A Framework and Its Applications. Information Systems 3 (1994)
7. Yin, R.K.: Case Study Research: Design and Methods. 2nd edition. SAGE Publications, Thousand Oaks, CA (1994)
8. Nikula, U.: The BaRE Method Version 1.0. Lappeenranta University of Technology, Lappeenranta (2002)
9. Nawrocki, J., Jasinski, M., Walter, B., and Wojciechowski, A.: Extreme Programming Modified: Embrace Requirements Engineering Practices. In: IEEE Joint International Conference on Requirements Engineering. IEEE Computer Society (2002)
10. ANSI/IEEE Standard 830: IEEE Recommended Practice for Software Requirements Specifications. IEEE Computer Society Press, New York, NY (1998)