
This paper appeared in the proceedings of OTC 95, The First Conference on Object Technology Centers, Stone Mountain, Georgia, pp. 137-149.

Selecting An OO Metrics Suite: Lessons Learned
Dr. Vijay Vaishnavi, Rajendra Bandi, Bill Kuechler
OTC 95 (April 15-17, 1995), Stone Mountain, GA

Abstract: Introducing metrics to a software development organization is a complex process that begins with the difficult task of selecting a high pay-off basic suite of metrics to be implemented. The selection process is made more difficult by the fact that information on OO metrics and metrification in general is still found primarily in abstract research journals. The journals are not always accessible to industry developers, and the formats of the papers frequently reduce their immediate utility to practicing software engineers. This paper introduces the authors' compendium of OO metric research and developments, the Georgia State University Object Oriented Metrics Resource Book (Resource Book), and a novel method for the selection of metrics for industrial software development environments. The method evolved in the course of the development of practice-focused handbooks from the Resource Book. The method is based on the proven Goal-Question-Metric (GQM) paradigm, augmented to use metrics frameworks. Frameworks assist in the key area of modeling aspects of the product or process of interest. Once an appropriate model has been constructed, it becomes apparent which factors are amenable to numeric measure. In addition to the selection process model, we present a classification of some of the more widely useful metrics frameworks. A discussion of the use of the method in developing our handbooks is given, and then extended to show its broader utility.

Introduction: In many cases an Object Technology Center (OTC) is formed to introduce object-oriented system development methods to an organization. In some instances top management commitment is high, and the Center functions in a technology transfer and support mode. In other cases, the Center is formed as a pilot program to explore the new technology. In either case the demonstration of the value of the technology is a high priority. The primary method of demonstrating value is through objective measurement, that is, metrics, which provide credible evidence that object-orientation (OO) benefits either the development process or its resultant products. The choice of the appropriate metric(s) is not always readily apparent. In addition to the different functions an OTC may serve (testing a technology vs. supporting its introduction), there are different audiences for the results of the measurement processes. The measures that will demonstrate the value of a new technology to top management will typically have a different focus than the measures that will convince existing development groups, currently using alternate technologies, of the benefits of OO. It is not uncommon for the group responsible for implementing a metrification program to be neither drawn from nor report directly to the groups that will gather metric data and benefit from the analysis that metrics can provide (Grady, 1990; Sunazuka et al., 1985). This places these groups in exactly the position in which we found ourselves as we attempted to choose a metrics suite for our handbooks. The problem is compounded by the fact that Object Technology is a new and growing field. Much valuable information is found only in journals which are not commonly referenced by software engineers in industry, and the formats of academic journal articles sometimes make it difficult to put the information to immediate use.
The authors attempted to remedy the problem of information availability by gathering together OO metric information from every source available to them. This resulted in what came to be known as the Resource Book, which consists of four volumes and over 100 metrics. Even the most ambitious metrics program can make use of only a small fraction of these. The information access problem had been eased, but the selection problem remained, exacerbated by a number of factors. Many of the existing object-oriented metrics partially overlap in what they measure, so that the best metric for a purpose is not always clear. The Software Engineering Institute at Carnegie Mellon University has demonstrated that the appropriateness and applicability of metrics depend on the process maturity level of the development group. Finally, measurement is expensive in general, and metrics differ in their effectiveness and in the effort required to collect, verify, and analyze information.

Handbooks: In an attempt to ease the metric selection problem we conceived the idea of handbooks. The idea was simple: we would filter the information in the Resource Book using the perspective of an industry OO software development organization. Then we would repackage the much smaller amount of information in a common format (recall that it was originally from multiple authors, published in multiple sources), and give examples of use based on a common example project. We ran into the selection problem almost immediately, and we believe it was more severe for us than for most organizations, since our perspective was so broad. The authors have had experience both as software developers and as software project managers; however, we exercised caution in projecting our specific experiences onto the project, lest our necessarily limited experience be absolutized and render the handbook only narrowly useful.

A Common Problem: One of the problems was one we believe is shared by many organizations: we had never worked with metrics, except in a limited sense for estimating project resources. Work done at the Software Engineering Institute (SEI) at Carnegie Mellon University has categorized development sites into maturity levels (the Capability Maturity Model - CMM), ranked from 1 (least mature) to 5 (most mature). The use of metrics to improve the development process is one of the criteria used to determine the maturity level. The study found that the majority of the corporations surveyed were at levels 1 or 2 and were not familiar with the use of metrics in any formal way. Later surveys confirm this, even among large, multi-national corporations. Thus, most existing organizations will face the same problems we faced in selecting a metrics suite, due to the same factors. None of the authors had ever worked at a shop that would be categorized at greater than level 2 in the CMM. At the beginning of the project, therefore, we had no experience in metric selection, and were beginning from the same point as many industry development organizations seeking to implement a formal metrics program. We began by working with a methodology that we had become familiar with from our work on the Resource Book.
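The CMM observation above suggests a simple gate on metric selection: propose only metrics whose required inputs the organization's process actually produces at its maturity level. The sketch below illustrates the idea; the level numbers and input requirements are illustrative assumptions, not SEI-defined mappings.

```python
# Hypothetical mapping from metric to the minimum maturity level at which
# its inputs are reliably available (illustrative values, not from the CMM).
METRIC_REQUIREMENTS = {
    "effort (person-days)": 1,        # timesheets exist almost everywhere
    "defects per KLOC": 3,            # needs a standardized defect log
    "process feedback control": 5,    # needs a fully instrumented process
}

def applicable_metrics(org_maturity_level):
    """Filter the candidate pool down to metrics the organization can feed."""
    return [metric for metric, needed in METRIC_REQUIREMENTS.items()
            if org_maturity_level >= needed]

# A level-2 shop (the common case reported above) qualifies for very little.
print(applicable_metrics(2))   # ['effort (person-days)']
```

The point of the sketch is the filtering step, not the particular numbers: an attractive metric is simply excluded when the process cannot supply its inputs.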
The Goal-Question-Metric (GQM) Paradigm: One of the most effective high level models for the application of metrics to the development process is the Goal-Question-Metric (GQM) paradigm, developed by Victor Basili for NASA. As refined by Basili and colleagues at the University of Maryland, GQM directly addresses the issues of perspective we have discussed, that is, the different needs of different communities for metrics (Basili, 1992). It does this through the application of templates that direct the software engineer's attention to the purpose, perspective, and environment of the measurement. GQM is nominally a three-stage process, as the name implies. Starting from a goal, the method user develops a set of questions which, if answered, would indicate whether or not the goal had been achieved. Expressing the questions in a quantifiable form leads to the choice of metrics. Figure 1, below, illustrates the flow of the method. However, feedback from developers in industry has led to the understanding that the development of quantifiable questions from goals, or the selection of the appropriate metric(s) from questions, is not always straightforward (Bush, 1990). In this paper we discuss the use of the GQM technique in its original form, during the early phase of our handbook development, and also with an additional step, the selection of a framework of analysis, to provide the extra information that is sometimes necessary for an informed choice of measurement.
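The three-stage flow just described can be sketched as a small data structure: a goal (with its purpose, perspective, and environment) spawns questions, and each question is answered by one or more metrics. All names and example values below are hypothetical illustrations, not content from the Resource Book.

```python
# A minimal sketch of the GQM hierarchy (hypothetical names throughout).
from dataclasses import dataclass, field

@dataclass
class Question:
    text: str
    metrics: list = field(default_factory=list)   # metric names that answer it

@dataclass
class Goal:
    purpose: str        # e.g. "track project progress"
    perspective: str    # e.g. "manager"
    environment: str    # e.g. "OTC pilot project"
    questions: list = field(default_factory=list)

goal = Goal(
    purpose="track project progress",
    perspective="manager",
    environment="OTC pilot",
    questions=[
        Question("How far along is the project?",
                 ["percent schedule elapsed", "percent tasks complete"]),
        Question("How much budget has been used?",
                 ["personnel cost to date / budget"]),
    ],
)

# Walking the tree yields the candidate metric suite for this goal.
suite = {m for q in goal.questions for m in q.metrics}
print(sorted(suite))
```

Walking several such trees, one per goal, and taking the union of the leaves is what produces the initial (often too large) candidate suite that the rest of the paper is concerned with pruning.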

Figure 1. GQM Illustrating a Management Perspective (goals such as immediate project control and gathering information for future project estimation lead to tracking and estimating questions about schedule and resources, each of which maps to one or more metrics)

Problems Using GQM in Handbook Development: The original intent of the handbook development was to focus the information in the Resource Book, that is, to make it more accessible to software developers in industry by pruning away metrics that were experimental, and information that was not immediately useful in a production environment. After several meetings, and several attempts at outlining such a document, it became apparent that a general purpose industrial OO metrics handbook would be almost as unwieldy as the Resource Book itself. Since our starting perspective was so broad, the set of questions we initially arrived at bracketed the entire development process and the complete set of software product attributes! Thus, although focus was seen to be the key to utility, it was easier said than done. As part of the information from our industry partners, we had a survey of the metrics actually used by object-oriented software project managers in a very large, multi-national corporation. The document showed that the predominant interest for this group was in scheduling and tracking projects. Interestingly, this survey indicated little process standardization among development groups and product managers, but rather consensus, that is, convergence on types of metrics from varying management techniques. This study suggested that our efforts to focus should begin at a fairly high level, such as personnel roles in the development process. Using our own experience and the results of this study, we established two perspectives: manager and developer.
Our generic manager is concerned almost exclusively with process factors related to bringing projects in on time and under budget, and our generic developer is concerned with product attributes at a more technical level.
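The manager's schedule-and-budget concern can be made concrete with a small computation (a hypothetical sketch, not taken from the Handbooks): compare the fraction of the calendar consumed with the fraction of the work completed.

```python
# Hypothetical tracking metric: positive means ahead of schedule,
# negative means behind (a rough proxy, assuming tasks of similar size).
def schedule_variance(days_elapsed, days_planned, tasks_done, tasks_total):
    pct_time = days_elapsed / days_planned   # fraction of calendar used
    pct_work = tasks_done / tasks_total      # fraction of work finished
    return pct_work - pct_time

# 60% of the calendar consumed but only 45% of the tasks finished -> behind.
v = schedule_variance(days_elapsed=60, days_planned=100,
                      tasks_done=45, tasks_total=100)
print(f"{v:+.2f}")   # -0.15
```

A developer-perspective metric would instead operate on product attributes (coupling, complexity) rather than on calendar and task counts.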

However, even with two more specific role perspectives, we had a problem with convergence. An example may help to illustrate: a typical goal from the use of GQM templates would be: Analyze the progress of the project with respect to schedule conformance, from the point of view of the customer group. A number of questions can be drawn readily from a manager's point of view that will suggest quantification of the goal: How far along is the project, as a percentage of scheduled calendar time and in terms of percent of project completion? What percentage of the personnel budget has been used to date? How many change orders have been received, and how do they impact schedule? How do they impact resources? How much risk, i.e., opportunity for slippage, is in the remaining development? That is, have we done the hard part (as we understand it) first, or are we saving it for last? How should the schedule be revised? Do we need to throw additional resources at the project to meet schedule? And so on. After some consideration we found this lack of convergence to be due to the lack of an adequate model of the processes we wished to measure. GQM specifically calls for the construction of a process or product attribute model to be used to quantify the general questions that initially emerge from goal analysis. Deriving adequate models for, say, management of the object-oriented development process, or even a small subset of the implied activities, is a significant research goal in itself. Fortunately much of the information we required had already been researched by others, and we had already compiled it for the section of the Resource Book termed Frameworks.

What is a Framework? In the context of software metrics, frameworks present a holistic picture of metrics and measurement. Frameworks may or may not refer to specific metrics.
However, they are primarily helpful in identifying the context of measurement, and therefore help in identifying the type of metrics useful in a particular context to accomplish a specific task. The primary purpose of these frameworks is not to determine a priori which metrics are suitable for which task, but to discuss how metrics can and should be used to achieve specific tasks. Most of the frameworks focus on a specific activity in the software development process, and discuss the role of metrics in that activity, how to choose metrics, and how to use them to achieve the goals of measurement. Since metrics serve various purposes, the need for metrics arises from the different perspectives of different people at different times in different situations. These differences in perspective must be considered before deciding or recommending which metrics are to be considered or used. Several factors contribute to the different perspectives of metrics users. For example, a project manager would be interested in measuring the process attributes of the development project, while the maintenance programmer is likely to be more interested in the maintainability of the product. Perspectives can also vary depending on the level at which the metrics are used. Metrics can be used at the organizational level, the project level, or the team level. At the organizational level one has to consider the possibility of dealing with heterogeneous development environments and projects, while at the team level things are more homogeneous. So if the metrics are to be used at the organizational level, the metrics chosen should be useful across multiple environments. On the other hand, if the metrics are to be used at the team level or the project level, one can choose metrics and tools specific to the hardware and software platform. Pfleeger has suggested that the process maturity level of the software development project also influences the choice of metrics. The concept of process maturity level is also utilized in the ami (application of metrics in industry) approach, the result of a collaborative ESPRIT project involving six European countries. Based on this process maturity approach, a project at the lowest maturity level chooses metrics to measure effort and project duration, to establish a baseline against which to compare improvements. At the highest maturity level, on the other hand, process measures and feedback are used to change the process dynamically as development progresses. At this level, measurements guide process change and control. Essentially the frameworks help the metrics user to understand these different perspectives. They enable users to zoom in on the perspective relevant to them and, based on that, decide on the choice of metrics. However, these frameworks seldom explicitly identify the specific metrics.
The users have to choose the metrics based on their understanding of the framework. The authors have developed a resource book on object-oriented metrics which has a chapter on frameworks for metrics. This chapter discusses eighteen different frameworks, of which only two refer directly to specific metrics. Some of the frameworks included in this section are: a framework on metrics for object-oriented design, a framework for estimation of the size of object-oriented systems, a framework on the applicability of traditional software metrics to object-oriented systems, and frameworks on evaluation of software quality, evaluation of software methodology, resource estimation, etc. The frameworks are classified into: Meta-Level Frameworks, which focus on incorporating metrics into the software development process; Focused Frameworks, which focus on a specific activity in the software development process; Requirements Analysis Frameworks, which focus on the requirements specification phase; Object-Oriented Design Metrics Frameworks, which focus on the design phase; Estimation Frameworks, which focus on resource estimation; and Software Evaluation Frameworks, which focus on evaluating the final software product.
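The classification above is essentially a lookup from a team's area of uncertainty to a shortlist of framework classes. A small sketch (class names paraphrased from the classification; the lookup logic is our illustrative assumption):

```python
# Map each framework class to the concern it addresses, so a team can
# shortlist framework classes by where their uncertainty lies.
FRAMEWORK_CLASSES = {
    "Meta-Level": "incorporating metrics into the development process",
    "Focused": "a specific activity in the development process",
    "Requirements Analysis": "the requirements specification phase",
    "OO Design Metrics": "the design phase",
    "Estimation": "resource estimation",
    "Software Evaluation": "evaluating the final product",
}

def shortlist(concern_keyword):
    """Return framework classes whose stated focus mentions the keyword."""
    return [name for name, focus in FRAMEWORK_CLASSES.items()
            if concern_keyword in focus]

print(shortlist("design"))    # ['OO Design Metrics']
```

In practice the matching is of course done by human judgment rather than keyword search; the table form simply makes the classification's intended use explicit.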

Coming back to the Framework Assisted Metric Selection Process: having determined the goals to be achieved from the metrification process, the next step is to identify the questions that need to be answered to determine if the goals are being achieved. Quite often it is not clear whether the questions that have been framed are appropriate to the goals they are supposed to monitor. The choice of metrics to be computed to answer these questions is also not clear. This is because metrics are generally very malleable, and any given metric can be used for many different things. At the same time, any given question can be answered by many different metrics. There is also a great deal of overlap among the different metrics, which adds to the confusion in choosing them. The frameworks can help in refining the questions and in choosing the appropriate metrics.

Framework Augmented GQM: The framework augmented GQM is a result of our experiences in developing focused handbooks from the generic Resource Book. A graphic depiction of the GQM process with the addition of framework selection is shown in Figure 2, below. Frameworks for metrics are formal investigations and models of specific aspects of software or the development process. In addition to actual experiences with metrics, frameworks provide information on the metrics that are available in a specific area, and serve as a lens to focus attention on that area.

Figure 2. Framework Assisted Metric Selection Process (the section in dashed lines adds framework consideration to GQM: high level frameworks establish perspective and constrain strategic goal setting, while focused frameworks suggest questions and candidate metrics; the flow runs from strategic goal setting through question development, metric selection, and metric assessment to implementation of the measurement plan)

As shown in the above figure, the frameworks provide useful information at all three levels in the GQM paradigm. These levels are setting Goals, formulating Questions to monitor the goals, and choosing Metrics to answer the questions. Essentially there are two types of frameworks: High Level Frameworks, or meta-level frameworks, and Focused Frameworks. The high level frameworks provide guidance in setting meaningful and realistic goals, and in formulating the right questions. The high level frameworks address very broad areas; for example, the GQM paradigm itself is a high level framework about introducing a metrification program into an organization. The focused frameworks, as the name implies, focus on a specific activity in the software development process, such as software reuse or software complexity. These frameworks are helpful in choosing the metrics suite and also in refining the questions. Thus one of the key roles of frameworks in this augmented model is to enable a smooth transition from Goals to Questions and then finally to Metrics. More details on frameworks are provided in the next section. With reference to Figure 2, the augmented GQM process proceeds as follows:

A strategic goal setting session establishes which of the corporation's mission statements can benefit in some way from metrification. Next, the questions that need to be asked to provide information to realize those goals are determined. This is a straightforward application of the GQM method, as described in an earlier section of the paper. If sufficient information is available at that point to the staff charged with establishing the metrics program, then we proceed to the next phase - identifying and selecting metrics to accomplish the purpose. However, if the team is uncertain at any phase, help is sought from the frameworks component of the model. Typically, if the uncertainty is at the goal setting or question formulation phase, the meta-level frameworks should be useful. For example, the Process Maturity Framework states that meaningful and realistic goals can be set only when the organization understands its level of maturity in the software development process. If the software development process of an organization is at a very low maturity level, the type and amount of information available is very limited. It is therefore not feasible to set very detail-oriented goals, because of the lack of information and the consequent inability to monitor the goals. On the other hand, if the uncertainty is at the question formulation phase or the metric selection phase, then the focused frameworks should be helpful. For example, if the goals are related to software reuse, then the framework on software reuse addresses issues such as the questions one can ask in monitoring the level of reuse and the metrics that can be used to measure it. Similarly, other focused frameworks can be selected based on the nature of the goals set. When the framework has been understood, and the information and references it contains have been incorporated into the team's understanding, the Question Development step is revisited and the process repeated.
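The loop just described, where plain GQM is tried first and a focused framework is consulted only when the metric for a question is unclear, can be sketched in code. Everything below (data, matching by topic keyword, function names) is a hypothetical stand-in for what is really team judgment.

```python
# A runnable sketch of the framework-assisted selection loop.
def refine_with_framework(question, frameworks):
    """Consult a focused framework whose topic matches the unclear question."""
    for fw in frameworks:
        if fw["topic"] in question:
            return fw["metrics"]
    return []          # no framework helps; the question must be reframed

def select_suite(questions, direct_answers, focused_frameworks):
    suite = []
    for q in questions:
        metrics = direct_answers.get(q, [])          # plain GQM: metric obvious
        if not metrics:                              # uncertain -> framework
            metrics = refine_with_framework(q, focused_frameworks)
        suite.extend(metrics)
    return suite

frameworks = [{"topic": "reuse", "metrics": ["reuse ratio"]},
              {"topic": "complexity", "metrics": ["cyclomatic complexity"]}]
questions = ["What fraction of classes are reuse?",
             "Is the design complexity acceptable?"]
print(select_suite(questions, {}, frameworks))
# ['reuse ratio', 'cyclomatic complexity']
```

The empty-return branch corresponds to the case in the text where the team must reframe one or more of the original questions and repeat the process.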
Sometimes the additional information and focus of the framework will allow metric selection at this point; other times the team will see the need to reframe one or more of the original questions, and then proceed. In the next section of this paper, the way in which metrics frameworks assisted in the development of our handbooks is discussed. For organizations with more specific goals (most will fall into this category), the use of framework augmented GQM with the preselected group of metrics in the Handbooks is suggested.

Metric Selection Using Frameworks: The perspectives of the Handbooks are very broad. We attempted to view the object-oriented software development process as would a generic manager and a generic developer, and so we were required to become familiar with all the frameworks. This will usually not be the case when selecting a metrics suite for an organization. The Process Maturity Framework yielded general caveats for both handbooks. From it comes an awareness (which our experience and that of the SEI indicate is not intuitive) that metrics programs can easily be too ambitious. Metrics require measurement, and measurement requires an open, standardized process, that is, one with the intermediate work products available, and produced for each measurement by identical processes. If the required maturity level has not been achieved, many metrics will be either too expensive to collect, or meaningless because of non-standardization, or both. This perspective will benefit any metrics selection process, as it works to constrain selection. The maturity level of the organization is assessed, and the metrics are evaluated on whether or not the quantities measured are available in the current processes of the organization. If not, the metric, however attractive and oriented to the goal, is not applicable. The Generalized Software Process-integrated Metrics Framework is another general, high level framework that we used for both handbooks, and which should probably be incorporated into any metrics selection process. It provides a general process model (a software development specific version of the general systems model) that assists in decomposing what may appear to be a monolithic process into sub-processes. Without the ability to model an organization's software development activities as a series of sub-processes, a great many metrics cannot be used. Whether or not the organization is at a point to benefit from this type of modeling should be evident from the Process Maturity Framework. When a development process is modeled according to this framework, the intermediate products available at the inputs and outputs of the sub-processes will be obvious, and metrics can be selected or excluded based on whether or not the measured quantity is available. The frameworks we term focused were divided into two groups depending on the perspective of the handbook we were working on: Managers or Developers. The framework classes Requirements Analysis, Object-Oriented Design, and Software Evaluation were incorporated into the Developers Handbook. The Estimation Frameworks and the Project Management meta-level framework were useful for the selection of metrics for the Managers Handbook.
The framework Metrics Suite for Object Oriented Design was especially useful, since it suggested six partially validated metrics along with discussions of their applicability and partial validations. The conditions for which the metrics were developed can be matched against conditions within the specific organization to select or exclude metrics. The more general Design Measurement Framework introduces the distinction between architectural, or high level, design and algorithmic, or low level, design. This provides an excellent selection criterion that can yield very specific metrics once the specific development process has been modeled. Traditional Software Metrics for Object-Oriented Systems is a framework that we found especially valuable in suggesting metrics for groups new to object-orientation. Several traditional metrics, such as Cyclomatic Complexity and Source Lines of Code, are shown to correlate with complexity in object-oriented designs just as they do in more traditional designs. We used this framework to select Cyclomatic Complexity for use in the Developers Handbook, but excluded Source Lines of Code because of the possible misuse of this metric in an OO context. Recall that we assumed (from survey data) a predominant schedule and estimation focus for our Managers perspective. Metric selection criteria for this handbook were directly derivable from the framework Software Size Estimation of Object Oriented Systems. This framework gives a method for the size estimation of an entire system, starting from the lowest level of granularity and summing up to the system level. In conjunction with the CMM evaluation discussed earlier, metrics can be selected or excluded for the estimation of process resources depending on the intermediate work products available from the process. The more granular the intermediate product, the more accurate the estimate. We have personally derived a great deal of help from two frameworks in developing the handbooks: the GQM framework and the Process Maturity framework. In our ongoing research on validating object-oriented design complexity metrics, we have obtained guidance from a framework on Methodology for Validating Software Metrics.

Conclusion: In this paper we have presented a method for selecting a situation-specific suite of metrics from among the large number of available metrics. The method is especially useful for organizations which are unfamiliar with the use of metrics for continuous process improvement - those that would rank at levels 1 or 2 in the SEI Capability Maturity Model. The method is based on the proven GQM paradigm for determining which metrics are applicable to a given situation; GQM alone, however, is inadequate for organizations without prior metrics experience. Academic research and industry experience can be drawn into the process through the use of frameworks. The method is most effective when used in conjunction with two other resources: the COMSOFT OO Metrics Resource Book, which gathers together the vast amount of information available on OO metrics, and the COMSOFT Managers and Developers OO Metrics Handbooks, which focus the information from the Resource Book.
When the environment of an organization is similar to the assumptions of the Handbooks, then the selection method can be applied against the metrics pool in the appropriate Handbook, greatly simplifying the process of metric selection.
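As a closing illustration of one metric selected earlier for the Developers Handbook, cyclomatic complexity is 1 plus the number of decision points in a routine. Real measurement tools build the control-flow graph; the sketch below merely counts branch constructs in Python source via the standard `ast` module, which is our rough approximation (counting each boolean operator group as one decision is an assumption of this sketch).

```python
# Approximate cyclomatic complexity: 1 + number of decision points.
import ast

def cyclomatic_complexity(source):
    tree = ast.parse(source)
    decisions = sum(isinstance(node, (ast.If, ast.For, ast.While, ast.BoolOp))
                    for node in ast.walk(tree))
    return 1 + decisions

code = """
def classify(x):
    if x < 0:
        return "negative"
    for _ in range(x):
        if x % 2 == 0 and x > 10:
            return "big even"
    return "other"
"""
# Two ifs, one loop, and one boolean operator group -> 1 + 4 = 5.
print(cyclomatic_complexity(code))
```

The same counting idea carries over to OO methods, which is why the metric survives the transition to object-oriented designs noted above, whereas Source Lines of Code does not.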

References

Basili, V. R., "Software Modeling and Measurement: The Goal/Question/Metric Paradigm," Computer Science Technical Report CS-TR-2956, University of Maryland, September 1992.

Bush, M., "Getting Started on Metrics - Jet Propulsion Laboratory Productivity and Quality," in Proceedings of the 12th International Conference on Software Engineering, 1990.

Grady, R. B., "Work Product Analysis: The Philosopher's Stone of Software?," IEEE Software, March 1990.

Pfleeger, S. L., and McGowan, C., "Software Metrics in the Process Maturity Framework," Journal of Systems and Software, vol. 12, 1990, pp. 255-261.

Sunazuka, T., Azuma, M., and Yamagishi, N., "Software Quality Assessment Technology," Software Product Engineering Laboratory, NEC Corporation, 1985.