A lifecycle approach to systems quality: because you can't test in quality at the end.


Systems quality management
White paper, December 2009

Moshe S. Cohen, market/offering manager, quality management, IBM Software Group
Jon Chard, Rational Systems marketing manager, IBM Software Group

Contents

Executive summary
Smarter products and the evolving role of quality assurance
The changing quality assurance paradigm
Understanding lifecycle quality management
The bottom line: enhanced quality with cost savings
Conclusion

Executive summary

Success in ever more competitive worldwide marketplaces demands continually smarter products and systems, which in turn lead to compounding complexity. Furthermore, while advanced functionality is an important competitive differentiator, quality has become part of the price of entry into the marketplace. Quality can no longer be considered an attribute that can be added into systems at the end of the development lifecycle. Controlling risks and costs needs to be a guiding principle that drives virtually all stages of the lifecycle, starting at the concept stage; building during analysis, design, deployment and acceptance; and continuing through service until end-of-life retirement. As a result, the management of quality must consider all key systems engineering disciplines, including requirements engineering, systems design and testing, and change and configuration management. This white paper examines the changing role of quality assurance, looks at approaches to effectively extend application lifecycle management (ALM) to include quality management, and shows how this improves project outcomes.

Smarter products and the evolving role of quality assurance

Since the industrial revolution, the world has seen numerous changes in the way products are produced and how quality is assessed. From basic inspections to total quality management and total quality control approaches, methods have evolved to address weaknesses in development and production processes. Today, as we work to create a smarter, more interconnected world, we must consider how to address the quality challenges that inevitably arise out of revolutionary products and development approaches.

Let's face it: for many years in many companies, a de facto wall has existed between development and testing organizations. Requirements analysis, design and development have happened first, with testing done in a silo at the end of the development process. This approach has never been ideal, but it has usually worked for getting passable products to the marketplace. When products get smarter, not only do the opportunities for quality problems multiply, but customer expectations in all respects, including quality, timely delivery and cost, tend to increase. As you go from specifications through analysis, architecture, design and implementation, the cost of finding and fixing a problem grows. Finding an issue in the field can be especially costly to the bottom line and the brand. Just consider some high-profile news stories:

- A bug in an onboard guidance system caused a prototype rocket to self-destruct, costing an aerospace agency more than US$1 billion.
- An automobile manufacturer had to recall 75,000 cars after determining that a software bug caused them to stall at high speeds.
- A medical equipment manufacturer had to recall more than 40,000 defibrillator devices because of software issues.

Smarter products and complex development teams are driving up the cost and effort of the traditional development and testing workflow. At the same time that products get smarter, organizations must continually evolve to drive maximum value from their operations. Activities such as mergers and acquisitions are commonplace, resulting in project team structures that are increasingly complex. Moreover, complex products often involve collaboration among multiple parties. Projects often are no longer managed by a team at a single location.
Rather, teams are distributed across organizational and geographic boundaries, working across different time zones, languages, cultures and companies. In some cases, this can mean subcontracting, offshoring or the reuse of third-party components. In others, groups of companies, or consortia, work closely to put products together. Infrastructure barriers, including incompatible tools and repositories, inflexible tooling integrations and unreliable access to development artifacts, also create challenges, impeding quality and slowing time to market. In many cases, teams face challenges related to integrating legacy or in-service components and systems with new capabilities until all parts of the system are upgraded.

When distributed teams handle different technologies and parts of a project, they inevitably must confront organizational barriers. For example, individual organizations may have varying perspectives and approaches to development, each of which might be perfectly good in isolation but may cause complications when combined. Consider one high-profile space agency project, where geographically dispersed project teams used different units of measure in their processes (one team used Imperial units and the other metric units). Because the teams were unaware of the process differences, they did not make necessary adjustments in some of the navigation software, leading to mission failure.

Productivity among teams is another issue. Companies are continually improving their productivity to offer more complex and competitive products in less time, but testing productivity in many organizations has not kept up. Part of the reason for the lag may be political, with testing being seen as overhead; therefore, it receives inadequate funding to keep up with changes.
Additionally, the increasing sophistication of products has forced a refinement in development techniques to keep up with more complex requirements, but these efforts are not always accompanied by a refinement in testing practices.

The changing quality assurance paradigm

All of the factors discussed in the previous section point to the need to end status quo quality assurance approaches and break down the barriers between all of the internal organizations that are instrumental to creating quality products. Leaving testing and quality assurance as an afterthought today is a ticking financial bomb.

The increasing costs of fixing a defect

Defect removal activity        Expected defects distribution        Cost multiplier
                               (valid and invalid, best in class)   (base US$120)
Requirements review            4 percent                            1
High-level design review       7 percent                            4
Detailed requirements review   9 percent                            2
Detailed design review         6 percent                            7
Unit test                      12 percent                           10
System test                    23 percent                           16 (expensive)
User acceptance test           36 percent                           70 (very expensive)
Production                     3 percent                            140 (outrageously expensive)

Source: IBM Global Business Services

Given the implications that quality has on the business, it can no longer be considered only a technical function or add-on that is in the realm of engineering. Rather, it's time to consider quality a strategic competence. Philip Crosby, a quality management thought leader and pioneer of many innovative ideas, said, "The first erroneous assumption is that quality means goodness, or luxury, or shininess, or weight. [W]e must define quality as conformance to requirements if we are to manage it."1 This means that quality isn't just about testing; it's about ensuring that requirements are met through the entire design lifecycle.

A lifecycle approach to quality is necessarily proactive compared to traditional reactive testing processes. It must be approached like other strategic parts of the business, starting with careful consideration of associated processes and how to make them efficient while satisfying requirements. A proactive approach for quality management across the lifecycle includes the following:

- Associating business risk and other risk factors with requirements
- Building traceability across the lifecycle to ensure that verification and validation activities are requirements driven
- Developing a test plan with inputs from all the stakeholders (including domain experts, development and quality assurance), regardless of roles and geographies
- Building confidence in the design through early-stage analysis and modeling
- Taking risk into account as part of test planning and execution
- Offering real-time visibility into the status of quality-related activities to all stakeholders
- Integrating testing, defect and change management

These considerations are important not just from a quality perspective but also to control the costs of development, the majority of which are typically incurred identifying and correcting defects.
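The figures in the defect-cost table can be combined into a single probability-weighted cost per defect, which makes the late-stage penalty concrete. A minimal sketch in Python; the distribution, multipliers and the US$120 base cost come from the table above, while the function itself is purely illustrative:

```python
# Expected cost of fixing a defect, weighted by where in the
# lifecycle defects are typically found (figures from the table above).
BASE_COST = 120  # US$, the table's base cost multiplier unit

# (removal activity, share of defects found there, cost multiplier)
STAGES = [
    ("Requirements review",          0.04,   1),
    ("High-level design review",     0.07,   4),
    ("Detailed requirements review", 0.09,   2),
    ("Detailed design review",       0.06,   7),
    ("Unit test",                    0.12,  10),
    ("System test",                  0.23,  16),
    ("User acceptance test",         0.36,  70),
    ("Production",                   0.03, 140),
]

def expected_cost_per_defect():
    """Probability-weighted average cost of removing one defect."""
    return sum(share * mult * BASE_COST for _, share, mult in STAGES)

print(f"Expected cost per defect: US${expected_cost_per_defect():,.2f}")
```

With these figures the weighted average works out to US$4,224 per defect, dominated by the user acceptance test and production rows: the late stages find only 39 percent of defects but account for most of the cost.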

[Figure 1 shows two views of the software lifecycle. In the top view, quality management appears as one activity alongside requirements definition and management, analysis and design, construction, change and configuration management, and build and release management. In the bottom view, quality management spans all of those activities.]

Figure 1: Today, quality management is typically a specific activity in the software lifecycle, as seen in the top example. To optimize results, however, efforts to control risks and costs need to be part of the entire product lifecycle, as seen in the bottom example.

Understanding lifecycle quality management

Systems development organizations have one primary need from the development process: high-quality, innovative solutions that fulfill customers' needs and help the organization stand out in the marketplace. Of course, the challenge is to create these solutions while controlling risks and costs and speeding time to market. Achieving consistency, efficiency and predictability in systems software delivery is a complex balancing act. However, by involving a series of core activities and the proactive approach described above, lifecycle quality management can help organizations get it right.

Requirements-driven quality

The reason that systems development project teams struggle to realize desired business outcomes is largely related to disconnects among teams and process shortcomings. Development teams need to respond quickly to customer demands regardless of project complexity. At the same time, quality managers, who are tasked with ensuring that processes adhere to relevant standards and with addressing organizational quality objectives as well as customer expectations, need adequate time to complete their work. Testers struggle to keep up with requirement changes and test priorities. Furthermore, requirements engineers may lack the capabilities to prove through testing that requirements have been satisfied. Given the differing pressures on team members, it's critical to streamline testing and quality assurance activities whenever possible. Processes can be better optimized with the help of up-front analysis and simulation work, resulting in more effective and less extensive testing.

To accelerate time to market and improve efficiency, teams need improved project guidance and they need to identify issues earlier. However, requirements and test management tools typically don't support the full lifecycle and can't manage the close relationship between requirements and testing.
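The close relationship between requirements and testing described above amounts to maintaining bidirectional links between the two kinds of artifacts. A minimal sketch of such a linkage in Python; the requirement and test-case names are invented for illustration, and a real tool would of course persist and version this data:

```python
# Minimal requirements-to-test traceability: each link records which
# requirement a test verifies, so coverage gaps and the impact of a
# requirement change become simple queries.
from collections import defaultdict

class TraceabilityMatrix:
    def __init__(self):
        self.req_to_tests = defaultdict(set)
        self.test_to_reqs = defaultdict(set)

    def link(self, requirement, test):
        self.req_to_tests[requirement].add(test)
        self.test_to_reqs[test].add(requirement)

    def uncovered(self, requirements):
        """Requirements with no verifying test: a coverage gap."""
        return [r for r in requirements if not self.req_to_tests[r]]

    def impacted_tests(self, requirement):
        """Tests that must be revisited when a requirement changes."""
        return sorted(self.req_to_tests[requirement])

# Hypothetical project data for illustration.
reqs = ["REQ-1 braking distance", "REQ-2 stall recovery", "REQ-3 diagnostics"]
m = TraceabilityMatrix()
m.link("REQ-1 braking distance", "TC-101")
m.link("REQ-1 braking distance", "TC-102")
m.link("REQ-2 stall recovery", "TC-201")

print(m.uncovered(reqs))                           # REQ-3 has no test yet
print(m.impacted_tests("REQ-1 braking distance"))  # tests to rerun on change
```

The same structure answers both questions the next section poses: which requirements a test failure affects (follow test_to_reqs) and which tests a requirement change invalidates (follow req_to_tests).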

Traceability, which is a means of documenting the relationships between artifacts during development, provides a way to identify and address disconnects and process shortcomings and to make effective project management decisions. It is through traceability that you can answer important questions such as "Which requirements are affected, and hence what effect does a test failure have on the functionality of a delivered system?" and "What effect on downstream development and test planning does a change in requirements pose?" In the end, as Philip Crosby declared, quality boils down to conformance to requirements. As a result, a tight linkage between requirements and testing is a must. With the appropriate traceability in place, teams can more accurately track the relative importance of each test and relate the results back to the requirements throughout the entire lifecycle. They can also more efficiently design tests, resulting in fewer sets of tests.

Model-based testing: the missing link in a model-based development approach

A key challenge organizations face during the systems development process is finding ways to optimize development productivity in the face of ever-increasing complexity. Model-driven development has proven to be a highly effective means of helping development organizations address the challenge of delivering more complex and intelligent designs in less time. Modeling helps by raising the level of abstraction from code to semantically rich graphical models.
It can lead to better quality through improved collaboration and understanding and provide a basis for process automation and design reuse. Modeling tools can enable model execution through simulation to verify design concepts early in the development lifecycle. They can automatically transform models into code and make it easier to define reusable components within a design with the help of languages such as the Unified Modeling Language (UML) or the Systems Modeling Language (SysML). Overall, models provide an efficient way to help systems engineers understand and analyze requirements, define design specifications, verify systems concepts using simulation, and automatically generate code for direct deployment on hardware.

The software testing process, on the other hand, is still largely code based, and systems testing is commonly manual. As a result, testing teams are less productive, and the testing process ends up being a bottleneck that can't keep up with new designs. Given the increased risk of design errors related to growing complexity, traditional approaches of verifying designs after development are costly. Rather, with complex systems, testing needs to occur early and continually in the lifecycle through modeling and simulation. Using a more proactive, planned approach, teams can develop high-risk features earlier to allow time for refinement.

Model-based testing tools can help save time and improve quality. They do so by enabling testing at the model level, in the same language as design. At the same time, the testers can use modeling to design their testing frameworks and test cases. By starting with scenarios derived from requirements, model-based testing tools can help confirm that tests address customer expectations. For model-based testing tools to be effective, they must be integrated with quality assurance. Quality assurance professionals typically are not modeling specialists. So what's needed is an approach that can support model-based testing by disciplines such as quality assurance while providing transparency of status and results to other stakeholders.
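The idea of testing at the model level, rather than against hand-written code, can be shown in miniature with a state machine. The sketch below checks a requirement-derived scenario against the design model itself; the states, events and the scenario are all invented for illustration, and real model-based testing tools work on far richer UML/SysML models:

```python
# Model-based testing in miniature: the design is captured as a state
# machine, and test scenarios derived from requirements are executed
# against the model to verify behavior before any code exists.
MODEL = {  # (current state, event) -> next state
    ("idle",      "power_on"): "ready",
    ("ready",     "start"):    "running",
    ("running",   "fault"):    "safe_stop",
    ("running",   "stop"):     "ready",
    ("safe_stop", "reset"):    "idle",
}

def run_scenario(start, events):
    """Drive the model with an event sequence; fail fast on illegal moves."""
    state = start
    for ev in events:
        key = (state, ev)
        if key not in MODEL:
            raise ValueError(f"illegal transition: {ev!r} in state {state!r}")
        state = MODEL[key]
    return state

# Requirement-derived scenario: a fault while running must end in a safe state.
print(run_scenario("idle", ["power_on", "start", "fault"]))
```

Because the scenario runs against the model, a design error (for example, a missing fault transition) is caught as an illegal move during simulation rather than as a field defect after deployment.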

Change and configuration management and quality management: unifying the workflow

It's a given: the amount of change increases with increasing complexity. As a result, to ensure that quality management is being handled correctly, organizations need to make sure that their change and configuration management (CCM) approach complements quality management actions. Today, CCM and quality management are usually handled using two parallel systems, and when they are not aligned, issues can arise. Practitioners, project managers and other stakeholders need to know how many changes need to be acted on, regardless of their source. Ultimately, defects result in changes, so they need to be jointly managed with all change requests by transparently feeding defect management data into the change management environment.

The need to align CCM and quality management becomes more critical as projects become more complex (for example, when multiple configurations must be defined to address different variants of the product, when you have differing product specifications for varying marketplaces or when different versions of the product are produced over time). Varying product configurations may require different testing configurations, so test management is intrinsically linked to CCM. That's why it's important that teams use CCM to manage quality assets, such as test plans. As requirements change, the testing needs will change, and this represents another CCM challenge. Automation of the linkage between defect management and CCM, and also the configuration management of quality assets, is essential to avoid the inefficiency and errors that result from maintaining parallel information.
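One way to realize the idea of transparently feeding defect data into the change management environment is a single backlog that accepts items from both workflows, so stakeholders see one change count regardless of source. A minimal sketch; the record fields and method names are invented for illustration and do not reflect any particular tool's API:

```python
# Unifying defect management and change management: every defect is
# surfaced as a change request in the same backlog as enhancements,
# so there is one change count regardless of the item's origin.
from dataclasses import dataclass, field

@dataclass
class ChangeRequest:
    cr_id: int
    summary: str
    source: str  # "defect" or "enhancement"

@dataclass
class ChangeBacklog:
    items: list = field(default_factory=list)

    def submit(self, summary, source="enhancement"):
        cr = ChangeRequest(len(self.items) + 1, summary, source)
        self.items.append(cr)
        return cr

    def from_defect(self, defect_summary):
        """Transparently feed a defect record into the change workflow."""
        return self.submit(defect_summary, source="defect")

    def open_changes(self):
        return len(self.items)

backlog = ChangeBacklog()
backlog.submit("Support metric units in navigation module")
backlog.from_defect("Vehicle stalls at high speed")
print(backlog.open_changes())  # both sources counted together
```

Because every defect lands in the same backlog as every enhancement, the question "how many changes need to be acted on?" has a single answer instead of two partially overlapping ones.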

Linking to the testing environment

Although it is important to break down the wall between testing and other development tasks, the testing organization remains integral to developing quality systems and is still needed. Because systems can involve so many different technologies and domains, such as embedded systems, electronic and electrical components, mechanical devices, human-interactive elements, and hybrid systems of systems, it is important to establish close linkages between quality assurance and the rest of the organization while maintaining test independence.

Each technology in a smart product requires different testing techniques that must be mapped into the quality management process. For example, running tests on an aircraft eventually requires a pilot. As a result, the ability to orchestrate testing, including defining tests, executing tests and capturing results across all environments, is essential. Only then can quality management be a single-point-of-reference repository of quality information for all stakeholders. Open interfaces can support connections to different testing tools for different technologies. In addition, quality management tools need to support manual testing directly through a suitable user interface and data capture facilities.

The bottom line: enhanced quality with cost savings

While there may be little doubt that improved quality management delivers benefits, it is important to quantify those benefits in financial terms. Only then can you answer important questions such as "Is now the right time to expand the testing process into a quality management solution across the lifecycle?" or "What is the return on this investment?"

The following are simple examples of the potential financial benefits the transition could have on an organization. They use high-level calculations to provide an order of magnitude for benefits that can be realized. Industry standards are used whenever applicable.

Savings related to enhanced collaboration

Based on a large number of interviews with testing practitioners and their managers, we found that, on average, testers spend about 60 percent of their time on pure testing activities. The remaining 40 percent is spent on activities, such as test planning and defect resolution, that require a high level of collaboration. Conservatively, with enhanced collaboration capabilities, testers could save about 20 percent of their time spent on collaborative activities.

Savings related to early defect detection

Industry data shows that a move up one Capability Maturity Model Integration (CMMI) level can help reduce the number of defects that escape functional testing by 40 percent. Industry data also shows that the cost of fixing defects is lower the earlier they are detected. Consider an organization at CMMI level 2. Let's assume a case where 1,000 hardware and software defects were detected from requirements review, to design review, to unit testing, to functional testing. Based on the CMMI level, industry data implies that 2,230 defects escaped functional testing. However, 892 of them (40 percent of 2,230) could have been detected earlier if the organization were at level 3.
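The early-detection arithmetic can be reproduced directly from the figures given in this example (the 2,230 escaped defects, the 40 percent improvement, the US$120 functional-test fix cost, and the conservative 7x late-fix multiplier used in the next step):

```python
# Reproducing the early-detection savings example with the paper's figures.
ESCAPED_AT_LEVEL_2 = 2230    # defects escaping functional test (industry data)
EARLY_DETECTION_RATE = 0.40  # improvement from moving up one CMMI level
COST_FUNCTIONAL_TEST = 120   # US$ to fix a defect found in functional testing
LATE_FIX_MULTIPLIER = 7      # conservative end of the 7-14x range

caught_earlier = round(ESCAPED_AT_LEVEL_2 * EARLY_DETECTION_RATE)  # 892 defects
cost_late = COST_FUNCTIONAL_TEST * LATE_FIX_MULTIPLIER             # US$840 each
savings = caught_earlier * (cost_late - COST_FUNCTIONAL_TEST)

print(f"{caught_earlier} defects caught earlier -> US${savings:,} saved")
```

The result, US$642,240, is the basis for the "more than US$500,000" claim that follows; using the 14x end of the range would roughly double it.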

According to the table "The increasing costs of fixing a defect," fixing defects in user acceptance testing or production is 7 to 14 times more expensive than fixing them in unit testing. For this example, we will use seven times. Assuming a cost of US$120 for fixing a defect found in functional testing, fixing 892 defects in functional testing rather than later could generate more than US$500,000 in savings. Considering that in large projects, the number of defects detected by functional testing could be much higher than 1,000, the potential savings are substantial.

Savings related to early detection of duplicate defects

For this example, let's again consider a case where 1,000 defects are detected by functional testing. On average, about 20 percent of the defects are duplicates (to simplify the example, assume that all duplicates are only duplicated once). If 200 defects are duplicates, that means that testers and developers already handled them earlier. To be conservative, assume that it takes developers an average of one hour to detect that a duplicate has already been fixed (note that for large organizations with globally distributed development teams, this can take much more time). This translates into 200 hours of developers' time, or 25 full-time workdays, saved simply by detecting duplicates. Moreover, in reality, most duplicate defects have more than one duplicate instance, so again the potential savings are substantial.
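The duplicate-defect arithmetic is equally easy to check against the stated assumptions (20 percent duplication, one hour per duplicate, an 8-hour workday; the workday length is an assumption the original leaves implicit):

```python
# Reproducing the duplicate-defect savings example.
DEFECTS_FOUND = 1000
DUPLICATE_RATE = 0.20    # ~20% of reported defects are duplicates
HOURS_PER_DUPLICATE = 1  # conservative time to spot an already-fixed defect
WORKDAY_HOURS = 8        # assumed full-time workday length

duplicates = int(DEFECTS_FOUND * DUPLICATE_RATE)  # 200 duplicate reports
hours_saved = duplicates * HOURS_PER_DUPLICATE
workdays_saved = hours_saved / WORKDAY_HOURS

print(f"{hours_saved} developer-hours = {workdays_saved:.0f} full-time workdays")
```

Relaxing the "duplicated only once" simplification only increases the total, since each extra duplicate instance adds another hour of avoidable triage.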

Conclusion
Smarter products and greater complexity go hand in hand. As your organization works to create products differentiated by their software and functionality, quality must be considered at every step of the lifecycle, from concept through disposal. The challenge is orchestrating development and testing to create an approach that can help ensure quality while controlling costs.

Figure 2: Key activities in a robust quality management lifecycle: requirements definition, test planning, test execution, identifying defects, resolving defects, change and configuration management, software build, lab management and reporting.

IBM Rational Software Platform for Systems solutions provide a comprehensive portfolio of integrated systems engineering and embedded software development solutions designed to support the development of high-quality systems and products, including:

- IBM Rational Quality Manager software: a lifecycle quality management solution for test planning, workflow control and metrics reporting.
- IBM Rational DOORS software: a family of requirements management solutions with traceability and impact analysis capabilities for formal, rigorous requirements engineering.
- IBM Rational Rhapsody software: a model-driven development (MDD) environment for systems engineering and for real-time and embedded software development and testing, based on UML and SysML.
- IBM Rational Team Concert software: integrated version control, automated workflows and build capabilities in a single product for real-time visibility and comprehensive project collaboration.
- IBM Rational Test RealTime software for embedded software: a full-featured testing solution that supports component testing and run-time analysis, from single functions to entire systems, all initiated from a single GUI.

For more information
To learn more about the lifecycle approach to systems quality and the IBM Rational products that support it, contact your IBM sales representative or IBM Business Partner, or visit: ibm.com/software/rational/solutions/systems

© Copyright IBM Corporation 2009
IBM Corporation
Software Group
Route 100
Somers, NY 10589
U.S.A.
Produced in the United States of America
December 2009
All Rights Reserved

IBM, the IBM logo, ibm.com, and Rational are trademarks or registered trademarks of International Business Machines Corporation in the United States, other countries, or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. A current list of IBM trademarks is available on the Web at "Copyright and trademark information" at ibm.com/legal/copytrade.shtml

Other company, product, or service names may be trademarks or service marks of others. References in this publication to IBM products or services do not imply that IBM intends to make them available in all countries in which IBM operates. The information contained in this documentation is provided for informational purposes only. While efforts were made to verify the completeness and accuracy of the information contained in this documentation, it is provided "as is" without warranty of any kind, express or implied. In addition, this information is based on IBM's current product plans and strategy, which are subject to change by IBM without notice. IBM shall not be responsible for any damages arising out of the use of, or otherwise related to, this documentation or any other documentation. Nothing contained in this documentation is intended to, nor shall have the effect of, creating any warranties or representations from IBM (or its suppliers or licensors), or altering the terms and conditions of the applicable license agreement governing the use of IBM software.

1. Philip B. Crosby, Quality Is Free (McGraw-Hill, 1979), 17.

RAW14166-USEN-00